Scale Space for Camera Invariant Features.
Puig, Luis; Guerrero, José J; Daniilidis, Kostas
2014-09-01
In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for the intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.
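As a rough illustration of the diffusion view of scale space described above, the sketch below runs heat flow under a Laplace-Beltrami-style operator whose conformal metric factor depends on a unified-model parameter; the closed form of that factor and all names here are assumptions for illustration, not the paper's actual metric.

```python
import numpy as np

def lb_scale_space(img, xi=1.0, n_steps=200, dt=0.2):
    """Toy scale-space sketch: heat flow u_t = Delta_LB u on a 2-D image.

    Assumes a conformal metric g = lam2(x, y) * I, so the Laplace-Beltrami
    operator reduces to (1/lam2) times the Euclidean Laplacian. The form of
    lam2 below (growing with distance from the principal point, controlled
    by the unified-model parameter xi) is a hypothetical stand-in.
    """
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    r2 = ((x - w / 2.0) ** 2 + (y - h / 2.0) ** 2) / float(w * h)
    lam2 = 1.0 + xi * r2                       # hypothetical conformal factor
    u = img.astype(float)
    for _ in range(n_steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += dt * lap / lam2                   # one explicit diffusion step
    return u
```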
Jáčová, Jaroslava; Gardlo, Alžběta; Friedecký, David; Adam, Tomáš; Dimandja, Jean-Marie D
2017-08-18
Orthogonality is a key parameter that is used to evaluate the separation power of chromatography-based two-dimensional systems. It is necessary to scale the separation data before assessing orthogonality. Current scaling approaches are sample-dependent: the extent of the retention space that is converted into a normalized retention space is set according to the retention times of the first and last analytes to elute in a particular sample. The presence or absence of a highly retained analyte in a sample can thus significantly influence the amount of information (in terms of the total amount of separation space) contained in the normalized retention space considered for the calculation of the orthogonality. We propose a Whole Separation Space Scaling (WOSEL) approach that accounts for the whole separation space delineated by the analytical method, and not the sample. This approach enables an orthogonality-based evaluation of the efficiency of the analytical system that is independent of the sample selected. The WOSEL method was compared to two currently used orthogonality approaches through the evaluation of in silico-generated chromatograms and real separations of human biofluids and petroleum samples. WOSEL exhibits sample-to-sample stability of 3.8% on real samples, compared to 7.0% and 10.1% for the two other methods, respectively. Using real analyses, we also demonstrate that some previously developed approaches can provide misleading conclusions on the overall orthogonality of a two-dimensional chromatographic system.
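A minimal sketch of the contrast the abstract draws, under assumed interfaces: the sample-based scaling stretches each sample's own first/last elution times onto [0, 1], while a WOSEL-style scaling normalizes by the separation space of the method itself (here passed in as the method's retention-time spans; all names are illustrative).

```python
import numpy as np

def normalize_sample_based(rt1, rt2):
    """Scale 2-D retention times by this sample's own elution extremes."""
    s1 = (rt1 - rt1.min()) / (rt1.max() - rt1.min())
    s2 = (rt2 - rt2.min()) / (rt2.max() - rt2.min())
    return s1, s2

def normalize_wosel(rt1, rt2, t1_span, t2_span):
    """Scale by the whole separation space delineated by the method."""
    (a1, b1), (a2, b2) = t1_span, t2_span
    return (rt1 - a1) / (b1 - a1), (rt2 - a2) / (b2 - a2)

# The sample-based result changes if one highly retained analyte is absent;
# the WOSEL-style result does not, which is the stability the paper reports.
rt1 = np.array([1.2, 3.4, 7.9]); rt2 = np.array([0.4, 1.1, 2.6])
print(normalize_sample_based(rt1, rt2))
print(normalize_wosel(rt1, rt2, t1_span=(0.0, 10.0), t2_span=(0.0, 3.0)))
```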
Tailoring Enterprise Systems Engineering Policy for Project Scale and Complexity
NASA Technical Reports Server (NTRS)
Cox, Renee I.; Thomas, L. Dale
2014-01-01
Space systems are characterized by varying degrees of scale and complexity. Accordingly, cost-effective implementation of systems engineering also varies depending on scale and complexity. Recognizing that systems engineering and integration happen everywhere and at all levels of a given system, and that the life cycle is an integrated process necessary to mature a design, the National Aeronautics and Space Administration's (NASA's) Marshall Space Flight Center (MSFC) has developed a suite of customized implementation approaches based on project scale and complexity. While it may be argued that a top-level systems engineering process is common to and indeed desirable across an enterprise for all space systems, implementation of that top-level process and the associated products developed as a result differ from system to system. The implementation approaches used for developing a scientific instrument necessarily differ from those used for a space station.
Nonlinear Image Denoising Methodologies
2002-05-01
In this thesis, our approach to denoising is first based on a controlled nonlinear stochastic random walk to achieve a scale-space analysis (as in…), with a stochastic treatment or interpretation of the diffusion. In addition, unless a specific stopping time is known to be adequate, the resulting evolution…
Patterns of disturbance at multiple scales in real and simulated landscapes
Giovanni Zurlini; Kurt H. Riitters; Nicola Zaccarelli; Irene Petrosillo
2007-01-01
We describe a framework to characterize and interpret the spatial patterns of disturbances at multiple scales in socio-ecological systems. Domains of scale are defined in pattern metric space and mapped in geographic space, which can help to understand how anthropogenic disturbances might impact biodiversity through habitat modification. The approach identifies typical...
Potter, Timothy; Corneille, Olivier; Ruys, Kirsten I; Rhodes, Gillian
2007-04-01
Findings on both attractiveness and memory for faces suggest that people should perceive more similarity among attractive than among unattractive faces. A multidimensional scaling approach was used to test this hypothesis in two studies. In Study 1, we derived a psychological face space from similarity ratings of attractive and unattractive Caucasian female faces. In Study 2, we derived a face space for attractive and unattractive male faces of Caucasians and non-Caucasians. Both studies confirm that attractive faces are indeed more tightly clustered than unattractive faces in people's psychological face spaces. These studies provide direct and original support for theoretical assumptions previously made in the face space and face memory literatures.
Data fusion of multi-scale representations for structural damage detection
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-01-01
Despite extensive research into structural health monitoring (SHM) in the past decades, few methods can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters out the fluctuations caused by measurement noise. In the multi-scale space, a series of amplification and data fusion algorithms are utilized to search for the damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both results demonstrate that the proposed method has a superior noise-tolerant ability, as well as damage sensitivity, without knowledge of material properties or boundary conditions.
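One way a multi-scale filtering and fusion step of this kind could look is sketched below, assuming Gaussian smoothing as the cascade filter, mode-shape curvature as the damage feature, and a rescaled sum as the fusion rule; the paper's actual algorithms differ in detail.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def damage_index(mode_shape, scales=(1, 2, 4, 8)):
    """Fuse curvature features of a noisy 1-D mode shape across scales.

    Coarse scales suppress measurement noise; fine scales localize damage.
    The normalized sum is a naive stand-in for the paper's fusion algorithms.
    """
    fused = np.zeros(len(mode_shape))
    for s in scales:
        smooth = gaussian_filter1d(np.asarray(mode_shape, float), sigma=s)
        curvature = np.abs(np.gradient(np.gradient(smooth)))
        fused += curvature / (curvature.max() + 1e-12)
    return fused / len(scales)   # peaks suggest candidate damage locations
```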
Critical spaces for quasilinear parabolic evolution equations and applications
NASA Astrophysics Data System (ADS)
Prüss, Jan; Simonett, Gieri; Wilke, Mathias
2018-02-01
We present a comprehensive theory of critical spaces for the broad class of quasilinear parabolic evolution equations. The approach is based on maximal Lp-regularity in time-weighted function spaces. It is shown that our notion of critical spaces coincides with the concept of scaling invariant spaces when the underlying partial differential equation enjoys a scaling invariance. Applications to the vorticity equations for the Navier-Stokes problem, convection-diffusion equations, the Nernst-Planck-Poisson equations in electro-chemistry, chemotaxis equations, the MHD equations, and some other well-known parabolic equations are given.
Spatial adaptive sampling in multiscale simulation
NASA Astrophysics Data System (ADS)
Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.
2014-07-01
In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
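The interpolate-where-possible idea can be shown with a toy 1-D adaptive bisection, a sketch under assumed names (`fine_solver` stands in for the expensive fine-scale simulation):

```python
import numpy as np

def sample_stress(strain_field, fine_solver, tol=1e-3):
    """Fill a 1-D macro grid with constitutive data, sampling adaptively.

    The fine-scale solver is run at interval endpoints and midpoints; an
    interval is refined only where linear interpolation in *space* misses
    the midpoint value by more than `tol`.
    """
    n = len(strain_field)
    sampled = {0: fine_solver(strain_field[0]),
               n - 1: fine_solver(strain_field[n - 1])}
    stack = [(0, n - 1)]
    while stack:
        lo, hi = stack.pop()
        if hi - lo < 2:
            continue
        mid = (lo + hi) // 2
        guess = sampled[lo] + (sampled[hi] - sampled[lo]) * (mid - lo) / (hi - lo)
        sampled[mid] = fine_solver(strain_field[mid])
        if abs(sampled[mid] - guess) > tol:   # interpolation not trusted here
            stack += [(lo, mid), (mid, hi)]
    xs = sorted(sampled)
    return np.interp(np.arange(n), xs, [sampled[i] for i in xs])

# e.g. sample_stress(np.linspace(0, 1, 1001), lambda e: e ** 3) touches the
# toy "solver" at far fewer than 1001 points.
```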
Scalable non-negative matrix tri-factorization.
Čopar, Andrej; Žitnik, Marinka; Zupan, Blaž
2017-01-01
Matrix factorization is a well-established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100 times faster than its single-processor counterpart. A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
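For orientation, here is a minimal serial non-negative matrix tri-factorization with standard multiplicative updates; the paper's contribution is a block-wise, mathematically equivalent parallel scheme for updates of this kind, which this sketch does not reproduce.

```python
import numpy as np

def nmtf(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
    """Factor a non-negative matrix X (m x n) as U S V^T.

    U (m x k1) and V (n x k2) are the two latent spaces; S (k1 x k2) maps
    interactions between them. Standard multiplicative updates.
    """
    m, n = X.shape
    rng = np.random.default_rng(seed)
    U, S, V = rng.random((m, k1)), rng.random((k1, k2)), rng.random((n, k2))
    for _ in range(n_iter):
        U *= (X @ V @ S.T) / (U @ (S @ V.T @ V @ S.T) + eps)
        V *= (X.T @ U @ S) / (V @ (S.T @ U.T @ U @ S) + eps)
        S *= (U.T @ X @ V) / (U.T @ U @ S @ V.T @ V + eps)
    return U, S, V
```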
Physics in space-time with scale-dependent metrics
NASA Astrophysics Data System (ADS)
Balankin, Alexander S.
2013-10-01
We construct three-dimensional space R^3_γ with the scale-dependent metric and the corresponding Minkowski space-time M^4_{γ,β} with the scale-dependent fractal (D_H) and spectral (D_S) dimensions. The local derivatives based on scale-dependent metrics are defined and differential vector calculus in R^3_γ is developed. We state that M^4_{γ,β} provides a unified phenomenological framework for the dimensional flow observed in quite different models of quantum gravity. Nevertheless, the main attention is focused on the special case of flat space-time M^4_{1/3,1} with the scale-dependent Cantor-dust-like distribution of admissible states, such that D_H increases from D_H = 2 on scales ℓ ≪ ℓ0 to D_H = 4 in the infrared limit ℓ ≫ ℓ0, where ℓ0 is the characteristic length (e.g. the Planck length, or characteristic size of multi-fractal features in a heterogeneous medium), whereas D_S ≡ 4 on all scales. Possible applications of the approach based on the scale-dependent metric to systems of different nature are briefly discussed.
Stable isotopes to trace food web stressors: Is space the final frontier?
To support community decision-making, we need to evaluate sources of stress and impact at a variety of spatial scales, whether local or watershed-based. Increasingly, we are using stable isotope-based approaches to determine those scales of impact, and using these approaches in v...
Multiscale wavelet representations for mammographic feature analysis
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu
1992-12-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
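The general recipe (decompose, weight the detail coefficients through scale space, reconstruct) can be sketched with the modern PyWavelets package, which this 1992 paper of course predates; the linear ramp of per-level gains below is an illustrative stand-in for the paper's linear, exponential and constant weight functions.

```python
import numpy as np
import pywt

def enhance(img, wavelet="db4", levels=3, max_gain=2.0):
    """Reconstruct an image after amplifying wavelet detail subbands."""
    coeffs = pywt.wavedec2(np.asarray(img, float), wavelet, level=levels)
    gains = np.linspace(1.0, max_gain, levels)     # coarse -> fine levels
    out = [coeffs[0]]                              # approximation untouched
    for g, details in zip(gains, coeffs[1:]):
        out.append(tuple(g * d for d in details))  # weight H, V, D subbands
    return pywt.waverec2(out, wavelet)
```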
NASA Astrophysics Data System (ADS)
Vlah, Zvonimir; Seljak, Uroš; McDonald, Patrick; Okumura, Teppei; Baldauf, Tobias
2012-11-01
We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density weighted velocity moment correlators, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with the standard PT and which need additional terms that include higher order corrections which cannot be modeled in PT. Most of these additional terms are related to the small scale velocity dispersion effects, the so-called finger of god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dispersions that enter into the detailed RSD analysis with very different amplitudes, which can be approximately predicted by the halo model. In contrast to previous models our approach systematically includes all of the terms at a given order in PT and provides a physical interpretation for the small scale dispersion values. We investigate the RSD power spectrum as a function of μ, the cosine of the angle between the Fourier mode and the line of sight, focusing on the lowest order powers of μ and multipole moments which dominate the observable RSD power spectrum. Overall we find considerable success in modeling many, but not all, of the terms in this expansion. This is similar to the situation in real space, but predicting the power spectrum in redshift space is more difficult because of the explicit influence of small scale dispersion type effects in RSD, which extend to very large scales.
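Schematically, the expansion referred to above writes the redshift-space density in density-weighted velocity moments and the power spectrum in their correlators; the form below is indicative only, as normalization and sign conventions vary between papers.

```latex
\delta_s(\mathbf{k}) \;\sim\; \sum_{L\ge 0} \frac{1}{L!}
  \left(\frac{i k_\parallel}{\mathcal{H}}\right)^{L} T^{L}_{\parallel}(\mathbf{k}),
\qquad
P_{ss}(k,\mu) \;\sim\; \sum_{L,L'\ge 0} \frac{(-1)^{L'}}{L!\,L'!}
  \left(\frac{i k \mu}{\mathcal{H}}\right)^{L+L'}
  \left\langle T^{L}_{\parallel}\, T^{L'\,*}_{\parallel} \right\rangle
```

Here T^0, T^1 and T^2 correspond to the density, momentum density and stress energy density named in the abstract.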
Krishna Kumar, P; Araki, Tadashi; Rajan, Jeny; Saba, Luca; Lavra, Francesco; Ikeda, Nobutaka; Sharma, Aditya M; Shafique, Shoaib; Nicolaides, Andrew; Laird, John R; Gupta, Ajay; Suri, Jasjit S
2017-08-01
Monitoring of cerebrovascular diseases via carotid ultrasound has started to become routine. The measurement of image-based lumen diameter (LD) or inter-adventitial diameter (IAD) is a promising approach for quantification of the degree of stenosis. Manual measurements of LD/IAD are unreliable, subjective and slow. The curvature associated with the vessels, along with non-uniformity in the plaque growth, poses further challenges. This study uses a novel and generalized approach for automated LD and IAD measurement based on a combination of spatial transformation and scale-space. In this iterative procedure, the scale-space is first used to get the lumen axis, which is then used with the spatial image transformation paradigm to get a transformed image. The scale-space is then reapplied to retrieve the lumen region and boundary in the transformed framework. Then, inverse transformation is applied to display the results in the original image framework. B-mode ultrasound images of the left and right common carotid arteries of 202 patients (404 carotid images) were retrospectively analyzed. The validation of our algorithm was done against two manual expert tracings. The coefficients of correlation between the automated LD measurements and the two manual tracings were 0.98 (p < 0.0001) and 0.99 (p < 0.0001), respectively. The precision of merit between the manual expert tracings and the automated system was 97.7 and 98.7%, respectively. The experimental analysis demonstrated superior performance of the proposed method over conventional approaches. Several statistical tests demonstrated the stability and reliability of the automated system.
Luminet, Jean-Pierre; Weeks, Jeffrey R; Riazuelo, Alain; Lehoucq, Roland; Uzan, Jean-Philippe
2003-10-09
The current 'standard model' of cosmology posits an infinite flat universe forever expanding under the pressure of dark energy. First-year data from the Wilkinson Microwave Anisotropy Probe (WMAP) confirm this model to spectacular precision on all but the largest scales. Temperature correlations across the microwave sky match expectations on angular scales narrower than 60 degrees but, contrary to predictions, vanish on scales wider than 60 degrees. Several explanations have been proposed. One natural approach questions the underlying geometry of space--namely, its curvature and topology. In an infinite flat space, waves from the Big Bang would fill the universe on all length scales. The observed lack of temperature correlations on scales beyond 60 degrees means that the broadest waves are missing, perhaps because space itself is not big enough to support them. Here we present a simple geometrical model of a finite space--the Poincaré dodecahedral space--which accounts for WMAP's observations with no fine-tuning required. The predicted density is Ω0 ≈ 1.013 > 1, and the model also predicts temperature correlations in matching circles on the sky.
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability for the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on the large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
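The scale-correction step can be illustrated with a conjugate-normal Bayesian update, a deliberately minimal sketch (the paper combines multivariate spline response surfaces with bootstrap reliability estimation, which is substantially richer):

```python
def bayes_correct(prior_mean, prior_var, obs, obs_var):
    """Correct a small-scale response-surface prediction (the prior) with a
    large-scale observation; normal-normal conjugate update, illustrative."""
    w = prior_var / (prior_var + obs_var)            # trust in the new data
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# e.g. a small-scale surface predicts hardness 55 N (variance 9); large-scale
# runs average 60 N (variance 4): the corrected value leans toward the
# large-scale evidence.
print(bayes_correct(55.0, 9.0, 60.0, 4.0))
```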
Scaling in biomechanical experimentation: a finite similitude approach.
Ochoa-Cabrero, Raul; Alonso-Rasgado, Teresa; Davey, Keith
2018-06-01
Biological experimentation faces many obstacles: resource limitations, unavailability of materials, manufacturing complexities and ethical compliance issues; any approach that resolves all or some of these is of interest. The aim of this study is to apply the recently discovered concept of finite similitude as a novel approach to the design of scaled biomechanical experiments, supported with analysis using a commercial finite-element package and validated by means of image correlation software. The study of isotropic scaling of synthetic bones leads to the selection of three-dimensional (3D) printed materials as the trial-space materials. These materials, conforming to the theory, are analysed in finite-element models of cylinder and femur geometries undergoing compression, tension, torsion and bending tests to assess the efficacy of the approach using reverse scaling. The finite-element results show similar strain patterns on the surface, with a maximum difference of less than 10% for the cylinder and less than 4% for the femur across all tests. Finally, the trial-space, physical-trial experimentation using 3D printed materials for compression and bending testing shows good agreement in a Bland-Altman statistical analysis, providing good supporting evidence for the practicality of the approach.
A real-space stochastic density matrix approach for density functional electronic structure.
Beck, Thomas L
2015-12-21
The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.
Epitrochoid Power-Law Nozzle Rapid Prototype Build/Test Project (Briefing Charts)
2015-02-01
Approved for public release; distribution is unlimited (PA clearance #15122). Epitrochoid Power-Law Nozzle Build/Test: builds on the SpaceX multi-engine approach (Merlin 1D engines on the Falcon 9 v1.1), seeking to utilize advances in high-performance engines and the economies of scale of the SpaceX Falcon 9 multi-engine approach for a rapid prototype.
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
NASA Astrophysics Data System (ADS)
Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.
2017-10-01
We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N^3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method. Program Files doi: http://dx.doi.org/10.17632/cpchkfty4w.1. Licensing provisions: GNU General Public License. Programming language: Fortran 90. External routines/libraries: BLAS, LAPACK, MPI (optional). Nature of problem: direct implementation of the GW method scales as N^4 with the system size, which quickly becomes prohibitively time consuming even on modern computers. Solution method: we implemented the GW approach using a method that switches between real space and momentum space representations; some operations are faster in real space, whereas others are more computationally efficient in reciprocal space, which makes our approach scale as N^3. Restrictions: the limiting factor is usually the memory available in a computer; using 10 GB/core of memory allows us to study systems of up to 15 atoms per unit cell.
Measurement of nanoscale three-dimensional diffusion in the interior of living cells by STED-FCS.
Lanzanò, Luca; Scipioni, Lorenzo; Di Bona, Melody; Bianchini, Paolo; Bizzarri, Ranieri; Cardarelli, Francesco; Diaspro, Alberto; Vicidomini, Giuseppe
2017-07-06
The observation of molecular diffusion at different spatial scales, and in particular below the optical diffraction limit (<200 nm), can reveal details of the subcellular topology and its functional organization. Stimulated-emission depletion microscopy (STED) has previously been combined with fluorescence correlation spectroscopy (FCS) to investigate nanoscale diffusion (STED-FCS). However, STED-FCS has only been used successfully to reveal functional organization in two-dimensional space, such as the plasma membrane, while an efficient implementation for measurements in three-dimensional space, such as the cellular interior, is still lacking. Here we integrate the STED-FCS method with two analytical approaches, the recent separation of photons by lifetime tuning and fluorescence lifetime correlation spectroscopy, to simultaneously probe diffusion in three dimensions at different sub-diffraction scales. We demonstrate that this method efficiently provides measurement of the diffusion of EGFP at spatial scales tunable from the diffraction size down to ∼80 nm in the cytoplasm of living cells. The measurement of molecular diffusion at sub-diffraction scales has been achieved in 2D space using STED-FCS, but an implementation for 3D diffusion is lacking. Here the authors present an analytical approach to probe diffusion in 3D space using STED-FCS and measure the diffusion of EGFP at different spatial scales.
Gautestad, Arild O
2012-09-07
Animals moving under the influence of spatio-temporal scaling and long-term memory generate a kind of space-use pattern that has proved difficult to model within a coherent theoretical framework. An extended kind of statistical mechanics is needed, accounting for both the effects of spatial memory and scale-free space use, and put into a context of ecological conditions. Simulations illustrating the distinction between scale-specific and scale-free locomotion are presented. The results show how observational scale (time lag between relocations of an individual) may critically influence the interpretation of the underlying process. In this respect, a novel protocol is proposed as a method to distinguish between some main movement classes. For example, the 'power law in disguise' paradox, arising from a composite Brownian motion consisting of a superposition of independent movement processes at different scales, may be resolved by shifting the focus from pattern analysis at one particular temporal resolution towards a more process-oriented approach involving several scales of observation. A more explicit consideration of system complexity within a statistical mechanical framework, supplementing the more traditional mechanistic modelling approach, is advocated.
Assessing global vegetation activity using spatio-temporal Bayesian modelling
NASA Astrophysics Data System (ADS)
Mulder, Vera L.; van Eck, Christel M.; Friedlingstein, Pierre; Regnier, Pierre A. G.
2016-04-01
This work demonstrates the potential of modelling vegetation activity using a hierarchical Bayesian spatio-temporal model. This approach allows modelling changes in vegetation and climate simultaneously in space and time. Changes of vegetation activity such as phenology are modelled as a dynamic process depending on climate variability in both space and time. Additionally, differences in observed vegetation status can be attributed to other abiotic ecosystem properties, e.g. soil and terrain properties. Although these properties do not change in time, they do change in space and may provide valuable information in addition to the climate dynamics. The spatio-temporal Bayesian models were calibrated at a regional scale because the local trends in space and time can be better captured by the model. The regional subsets were defined according to the SREX segmentation, as defined by the IPCC. Each region is considered to be relatively homogeneous in terms of large-scale climate and biomes, while still capturing small-scale (grid-cell level) variability. Modelling within these regions is hence expected to be less uncertain, due to the absence of these large-scale patterns, compared to a global approach. This overall modelling approach allows the comparison of model behaviour for the different regions and may provide insights into the main dynamic processes driving the interaction between vegetation and climate within different regions. The data employed in this study encompass the global datasets for soil properties (SoilGrids), terrain properties (Global Relief Model based on SRTM DEM and ETOPO), monthly time series of satellite-derived vegetation indices (GIMMS NDVI3g) and climate variables (Princeton Meteorological Forcing Dataset). The findings demonstrate the potential of a spatio-temporal Bayesian modelling approach for assessing vegetation dynamics at a regional scale. The observed interrelationships of the employed data and the different spatial and temporal trends support our hypothesis that changes of vegetation in space and time may be better understood when modelling vegetation change as both a dynamic and multivariate process. Therefore, future research will focus on a multivariate dynamical spatio-temporal modelling approach. This ongoing research is performed within the context of the project "Global impacts of hydrological and climatic extremes on vegetation" (project acronym: SAT-EX), which is part of the Belgian research programme for Earth Observation Stereo III.
A Scalable Approach to Probabilistic Latent Space Inference of Large-Scale Networks
Yin, Junming; Ho, Qirong; Xing, Eric P.
2014-01-01
We propose a scalable approach for making inference about latent spaces of large networks. With a succinct representation of networks as a bag of triangular motifs, a parsimonious statistical model, and an efficient stochastic variational inference algorithm, we are able to analyze real networks with over a million vertices and hundreds of latent roles on a single machine in a matter of hours, a setting that is out of reach for many existing methods. When compared to the state-of-the-art probabilistic approaches, our method is several orders of magnitude faster, with competitive or improved accuracy for latent space recovery and link prediction. PMID:25400487
Estimating Agricultural Nitrous Oxide Emissions
USDA-ARS?s Scientific Manuscript database
Nitrous oxide emissions are highly variable in space and time and different methodologies have not agreed closely, especially at small scales. However, as scale increases, so does the agreement between estimates based on soil surface measurements (bottom up approach) and estimates derived from chang...
HIGH-RESOLUTION DATASET OF URBAN CANOPY PARAMETERS FOR HOUSTON, TEXAS
Urban dispersion and air quality simulation models applied at various horizontal scales require different levels of fidelity for specifying the characteristics of the underlying surfaces. As the modeling scales approach the neighborhood level (~1 km horizontal grid spacing), the...
Nutritional Systems Biology Modeling: From Molecular Mechanisms to Physiology
de Graaf, Albert A.; Freidig, Andreas P.; De Roos, Baukje; Jamshidi, Neema; Heinemann, Matthias; Rullmann, Johan A.C.; Hall, Kevin D.; Adiels, Martin; van Ommen, Ben
2009-01-01
The use of computational modeling and simulation has increased in many biological fields, but despite their potential these techniques are only marginally applied in nutritional sciences. Nevertheless, recent applications of modeling have been instrumental in answering important nutritional questions from the cellular up to the physiological levels. Capturing the complexity of today's important nutritional research questions poses a challenge for modeling to become truly integrative in the consideration and interpretation of experimental data at widely differing scales of space and time. In this review, we discuss a selection of available modeling approaches and applications relevant for nutrition. We then put these models into perspective by categorizing them according to their space and time domain. Through this categorization process, we identified a dearth of models that consider processes occurring between the microscopic and macroscopic scale. We propose a “middle-out” strategy to develop the required full-scale, multilevel computational models. Exhaustive and accurate phenotyping, the use of the virtual patient concept, and the development of biomarkers from “-omics” signatures are identified as key elements of a successful systems biology modeling approach in nutrition research—one that integrates physiological mechanisms and data at multiple space and time scales. PMID:19956660
NASA Astrophysics Data System (ADS)
Dündar, Furkan Semih
2018-01-01
We provide a theory of n-scales, previously called n-dimensional time scales. In previous approaches to the theory of time scales, multi-dimensional scales were taken as the product space of two time scales [1, 2]. n-scales make the mathematical structure more flexible and appropriate for real-world applications in physics and related fields. Here we define an n-scale as an arbitrary closed subset of ℝ^n. Modified forward and backward jump operators, Δ-derivatives and Δ-integrals on n-scales are defined.
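For the classical one-dimensional case that n-scales generalize, the jump operators have a direct computational reading; a small sketch on a finite sample of a time scale (illustrative only, not the paper's construction for arbitrary closed subsets of ℝ^n):

```python
import numpy as np

def sigma(T, t):
    """Forward jump operator: the next point of the scale after t."""
    later = T[T > t]
    return later.min() if later.size else t

def rho(T, t):
    """Backward jump operator: the previous point of the scale before t."""
    earlier = T[T < t]
    return earlier.max() if earlier.size else t

T = np.array([0.0, 0.1, 0.2, 0.5, 1.0])   # a toy 1-D scale
assert sigma(T, 0.2) == 0.5 and rho(T, 0.2) == 0.1
```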
A Principled Approach to the Specification of System Architectures for Space Missions
NASA Technical Reports Server (NTRS)
McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad
2015-01-01
Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key systems challenge in practice is the ability to scale processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Therefore, traditional methods are becoming increasingly inefficient to cope with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent engineering, model-based design environment.
A scale space feature based registration technique for fusion of satellite imagery
NASA Technical Reports Server (NTRS)
Raghavan, Srini; Cromp, Robert F.; Campbell, William C.
1997-01-01
Feature based registration is one of the most reliable methods to register multi-sensor images (both active and passive imagery), since features are often more reliable than intensity or radiometric values. The only situation where a feature based approach will fail is when the scene is completely homogeneous or densely textural, in which case a combination of feature and intensity based methods may yield better results. In this paper, we present some preliminary results of testing our scale space feature based registration technique, a modified version of a feature based method developed earlier for classification of multi-sensor imagery. The proposed approach removes the sensitivity to parameter selection experienced in the earlier version, as explained later.
NASA Astrophysics Data System (ADS)
Antonov, N. V.; Gulitskiy, N. M.; Kostenko, M. M.; Lučivjanský, T.
2017-03-01
We study a model of fully developed turbulence of a compressible fluid, based on the stochastic Navier-Stokes equation, by means of the field-theoretic renormalization group. In this approach, scaling properties are related to the fixed points of the renormalization group equations. Previous analysis of this model near the real-world space dimension 3 identified a scaling regime [N. V. Antonov et al., Theor. Math. Phys. 110, 305 (1997), 10.1007/BF02630456]. The aim of the present paper is to explore the existence of additional regimes, which could not be found using the direct perturbative approach of the previous work, and to analyze the crossover between different regimes. It seems possible to determine them near the special value of space dimension 4 in the framework of a double expansion in y and ε, where y is the exponent associated with the random force and ε = 4 - d is the deviation from the space dimension 4. Our calculations show that there exists an additional fixed point that governs scaling behavior. Turbulent advection of a passive scalar (density) field by this velocity ensemble is considered as well. We demonstrate that various correlation functions of the scalar field exhibit anomalous scaling behavior in the inertial-convective range. The corresponding anomalous exponents, identified as scaling dimensions of certain composite fields, can be systematically calculated as a series in y and ε. All calculations are performed in the leading one-loop approximation.
Chi, Baofang; Tao, Shiheng; Liu, Yanlin
2015-01-01
Sampling the solution space of genome-scale models is generally conducted to determine the feasible region for metabolic flux distributions. Because the region of actual metabolic states resides in only a small fraction of the entire space, it is necessary to shrink the solution space to improve the predictive power of a model. A common strategy is to constrain models by integrating extra datasets such as high-throughput datasets and 13C-labeled flux datasets. However, studies refining these approaches by performing a meta-analysis of massive experimental metabolic flux measurements, which are closely linked to cellular phenotypes, are limited. In the present study, experimentally identified metabolic flux data from 96 published reports were systematically reviewed. Several strong associations among metabolic flux phenotypes were observed. These phenotype-phenotype associations at the flux level were quantified and integrated into a Saccharomyces cerevisiae genome-scale model as extra physiological constraints. By sampling the shrunken solution space of the model, the metabolic flux fluctuation level, which is an intrinsic trait of metabolic reactions determined by the network, was estimated and utilized to explore its relationship to gene expression noise. Although no correlation was observed across all enzyme-coding genes, a relationship between metabolic flux fluctuation and the expression noise of genes associated with enzyme-dosage-sensitive reactions was detected, suggesting that the metabolic network plays a role in shaping gene expression noise. Such correlation was mainly attributed to the genes corresponding to non-essential reactions, rather than essential ones. This was at least partially due to regulations underlying the flux phenotype-phenotype associations. Altogether, this study proposes a new approach to shrinking the solution space of a genome-scale model, whose sampling provides new insights into gene expression noise.
Scientific, statistical, practical, and regulatory considerations in design space development.
Debevec, Veronika; Srčič, Stanko; Horvat, Matej
2018-03-01
The quality by design (QbD) paradigm guides the pharmaceutical industry towards improved understanding of products and processes, and at the same time facilitates a high degree of manufacturing and regulatory flexibility throughout the establishment of the design space. This review article presents scientific, statistical and regulatory considerations in design space development. All key development milestones, starting with planning, selection of factors, experimental execution, data analysis, model development and assessment, verification, and validation, and ending with design space submission, are presented and discussed. The focus is especially on frequently ignored topics, like management of factors and CQAs that will not be included in experimental design, evaluation of risk of failure on design space edges, or modeling scale-up strategy. Moreover, development of a design space that is independent of manufacturing scale is proposed as the preferred approach.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man; Cheng, Anning
2007-01-01
The effects of subgrid-scale condensation and transport become more important as the grid spacings increase from those typically used in large-eddy simulation (LES) to those typically used in cloud-resolving models (CRMs). Incorporation of these effects can be achieved by a joint probability density function approach that utilizes higher-order moments of thermodynamic and dynamic variables. This study examines how well shallow cumulus and stratocumulus clouds are simulated by two versions of a CRM that is implemented with low-order and third-order turbulence closures (LOC and TOC) when a typical CRM horizontal resolution is used and what roles the subgrid-scale and resolved-scale processes play as the horizontal grid spacing of the CRM becomes finer. Cumulus clouds were mostly produced through subgrid-scale transport processes while stratocumulus clouds were produced through both subgrid-scale and resolved-scale processes in the TOC version of the CRM when a typical CRM grid spacing is used. The LOC version of the CRM relied upon resolved-scale circulations to produce both cumulus and stratocumulus clouds, due to small subgrid-scale transports. The mean profiles of thermodynamic variables, cloud fraction and liquid water content exhibit significant differences between the two versions of the CRM, with the TOC results agreeing better with the LES than the LOC results. The characteristics, temporal evolution and mean profiles of shallow cumulus and stratocumulus clouds are weakly dependent upon the horizontal grid spacing used in the TOC CRM. However, the ratio of the subgrid-scale to resolved-scale fluxes becomes smaller as the horizontal grid spacing decreases. The subcloud-layer fluxes are mostly due to the resolved scales when a grid spacing less than or equal to 1 km is used. The overall results of the TOC simulations suggest that a 1-km grid spacing is a good choice for CRM simulation of shallow cumulus and stratocumulus.
Relativistic initial conditions for N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidler, Christian; Tram, Thomas; Crittenden, Robert
2017-06-01
Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently-proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple "N-body gauge" for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable "forwards approach" for such cases.
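The back-scaling recipe itself is one line once a growth ratio is in hand; the sketch below states it under the usual assumption of scale-independent linear growth, which is precisely where the paper's relativistic corrections enter for non-standard cosmologies.

```python
def back_scale(pk_z0, growth_ratio):
    """Naive back-scaling: P_ini(k) = P_0(k) * (D(z_ini) / D(0))**2.

    `pk_z0` is a list of (k, P) pairs from a linear Boltzmann code at z = 0;
    `growth_ratio` = D(z_ini)/D(0). Scale-dependent corrections are ignored.
    """
    return [(k, p * growth_ratio ** 2) for k, p in pk_z0]

# e.g. back_scale([(0.01, 2.4e4), (0.1, 8.9e3)], growth_ratio=0.02)
```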
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of the human face, so it is possible to find human faces at different scales while scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which is consistent with the physical structure features of the human face, is applied to judge whether human face candidate regions contain faces. This model includes edge and gray rules. Experimental results show that the approach has high robustness and speed. It has wide application prospects in human-computer interaction, visual telephony, etc.
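A hedged sketch of the level-1 skin test, combining HSV saturation/value with normalized red-green; every threshold below is an illustrative assumption, not the paper's calibrated value.

```python
import numpy as np

def skin_mask(rgb):
    """Return a boolean mask of skin-like pixels for an (H, W, 3) uint8 image."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-9
    rn, gn = r / total, g / total                # normalized color space
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    s = (mx - mn) / (mx + 1e-9)                  # HSV saturation
    v = mx / 255.0                               # HSV value
    return ((0.35 < rn) & (rn < 0.55) & (0.25 < gn) & (gn < 0.40)
            & (0.10 < s) & (s < 0.70) & (v > 0.15))
```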
Wavelet processing techniques for digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu
1992-09-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Similar to traditional coarse to fine matching strategies, the radiologist may first choose to look for coarse features (e.g., dominant mass) within low frequency levels of a wavelet transform and later examine finer features (e.g., microcalcifications) at higher frequency levels. In addition, features may be extracted by applying geometric constraints within each level of the transform. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency, results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet representations, enhanced by linear, exponential and constant weight functions through scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
Memory matters: influence from a cognitive map on animal space use.
Gautestad, Arild O
2011-10-21
A vertebrate individual's cognitive map provides a capacity for site fidelity and long-distance returns to favorable patches. Fractal-geometrical analysis of individual space use based on collection of telemetry fixes makes it possible to verify the influence of a cognitive map on the spatial scatter of habitat use and also to what extent space use has been of a scale-specific versus a scale-free kind. This approach rests on a statistical mechanical level of system abstraction, where micro-scale details of behavioral interactions are coarse-grained to macro-scale observables like the fractal dimension of space use. In this manner, the magnitude of the fractal dimension becomes a proxy variable for distinguishing between main classes of habitat exploration and site fidelity, like memory-less (Markovian) Brownian motion and Levy walk and memory-enhanced space use like Multi-scaled Random Walk (MRW). In this paper previous analyses are extended by exploring MRW simulations under three scenarios: (1) central place foraging, (2) behavioral adaptation to resource depletion (avoidance of latest visited locations) and (3) transition from MRW towards Levy walk by narrowing memory capacity to a trailing time window. A generalized statistical-mechanical theory with the power to model cognitive map influence on individual space use will be important for statistical analyses of animal habitat preferences and the mechanics behind site fidelity and home ranges.
An Integrated Optimal Estimation Approach to Spitzer Space Telescope Focal Plane Survey
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kang, Bryan H.; Brugarolas, Paul B.; Boussalis, D.
2004-01-01
This paper discusses an accurate and efficient method for focal plane survey that was used for the Spitzer Space Telescope. The approach is based on using a high-order 37-state Instrument Pointing Frame (IPF) Kalman filter that combines both engineering parameters and science parameters into a single filter formulation. In this approach, engineering parameters such as pointing alignments, thermomechanical drift and gyro drifts are estimated along with science parameters such as plate scales and optical distortions. This integrated approach has many advantages compared to estimating the engineering and science parameters separately. The resulting focal plane survey approach is applicable to a diverse range of science instruments such as imaging cameras, spectroscopy slits, and scanning-type arrays alike. The paper summarizes results from applying the IPF Kalman filter to calibrating the Spitzer Space Telescope focal plane, containing the MIPS, IRAC, and IRS science instrument arrays.
Imagining Cosmopolitan Space: Spectacle, Rice and Global Citizenship
ERIC Educational Resources Information Center
Parry, Simon
2010-01-01
How do you stage the world? This article reviews how a series of performance installations by the theatre company Stan's Cafe have approached global space. It examines the way "Plague Nation" and "Of All the People in All the World" tackle national and global scale through the representation of populations. Drawing on a…
Michael C. Dietze; Rodrigo Vargas; Andrew D. Richardson; Paul C. Stoy; Alan G. Barr; Ryan S. Anderson; M. Altaf Arain; Ian T. Baker; T. Andrew Black; Jing M. Chen; Philippe Ciais; Lawrence B. Flanagan; Christopher M. Gough; Robert F. Grant; David Hollinger; R. Cesar Izaurralde; Christopher J. Kucharik; Peter Lafleur; Shugang Liu; Erandathie Lokupitiya; Yiqi Luo; J. William Munger; Changhui Peng; Benjamin Poulter; David T. Price; Daniel M. Ricciuto; William J. Riley; Alok Kumar Sahoo; Kevin Schaefer; Andrew E. Suyker; Hanqin Tian; Christina Tonitto; Hans Verbeeck; Shashi B. Verma; Weifeng Wang; Ensheng Weng
2011-01-01
Ecosystem models are important tools for diagnosing the carbon cycle and projecting its behavior across space and time. Despite the fact that ecosystems respond to drivers at multiple time scales, most assessments of model performance do not discriminate different time scales. Spectral methods, such as wavelet analyses, present an alternative approach that enables the...
Transformational System Concepts and Technologies for Our Future in Space
NASA Technical Reports Server (NTRS)
Howell, Joe T.; Mankins, John C.
2004-01-01
Continued constrained budgets and growing national and international interests in the commercialization and development of space require NASA to be constantly vigilant, to be creative, and to seize every opportunity for assuring the maximum return on space infrastructure investments. Accordingly, efforts are underway to forge new and innovative approaches to transform our space systems in the future to ultimately achieve two, three, or five times as much with the same resources. This bold undertaking can be achieved only through extensive cooperative efforts throughout the aerospace community and truly effective planning to pursue advanced space system design concepts and high-risk/high-leverage research and technology. Definitive implementation strategies and roadmaps containing new methodologies and revolutionary approaches must be developed to economically accommodate the continued exploration and development of space. Transformation can be realized through modular design and stepping stone development. This approach involves sustainable budget levels and multi-purpose systems development of supporting capabilities that lead to a diverse array of sustainable future space activities. Transformational design and development requires revolutionary advances by using modular designs and a planned, stepping stone development process. A modular approach to space systems potentially offers many improvements over traditional one-of-a-kind space systems comprised of different subsystem elements with little standardization in interfaces or functionality. Modular systems must be more flexible, scalable, reconfigurable, and evolvable. Costs can be reduced through learning curve effects and economies of scale, and by enabling servicing and repair that would not otherwise be feasible. This paper briefly discusses a promising approach to transforming space systems planning and evolution into a meaningful stepping stone design, development, and implementation process. The success of this well planned and orchestrated approach holds great promise for achieving innovation and revolutionary technology development for supporting future exploration and development of space.
NASA Astrophysics Data System (ADS)
Ives, Christopher
2015-04-01
Measuring social values for landscapes is an emerging field of research and is critical to the successful management of urban ecosystems. Green open space planning has traditionally relied on rigid standards and metrics without considering the physical requirements of green spaces that are valued for different reasons and by different people. Relating social landscape values to key environmental variables provides a much stronger evidence base for planning landscapes that are both socially desirable and environmentally sustainable. This study spatially quantified residents' values for green space in the Lower Hunter Valley of New South Wales, Australia by enabling participants to mark their values for specific open spaces on interactive paper maps. The survey instrument was designed to evaluate the effect of spatial scale by providing maps of residents' local area at both suburb and municipality scales. The importance of open space values differed depending on whether they were indicated via marker dots or reported on in a general aspatial sense. This suggests that certain open space functions were inadequately provided for in the local area (specifically, cultural significance and health/therapeutic value). Additionally, all value types recorded a greater abundance of marker dots at the finer (suburb) scale compared to the coarser (municipality) scale, but this pattern was more pronounced for some values than others (e.g. physical exercise value). Finally, significant relationships were observed between the abundance of value marker dots in parks and their environmental characteristics (e.g. percentage of vegetation). These results have interesting implications when considering the compatibility between different functions of green spaces and how planners can incorporate information about social values with more traditional approaches to green space planning.
Achieving Space Shuttle ATO Using the Five-Segment Booster (FSB)
NASA Technical Reports Server (NTRS)
Sauvageau, Donald R.; McCool, Alex (Technical Monitor)
2001-01-01
As part of the continuing effort to identify approaches to improve the safety and reliability of the Space Shuttle system, a Five-Segment Booster (FSB) design was conceptualized as a replacement for the current Space Shuttle boosters. The FSB offers a simple, unique approach to improve astronaut safety and increase performance margin. To determine the feasibility of the FSB, a Phase A study effort was sponsored by NASA and directed by the Marshall Space Flight Center. This study was initiated in March of 1999 and completed in December of 2000. The basic objective of this study was to assess the feasibility of the FSB design concept and also estimate the cost and scope of a full-scale development program for the FSB. In order to ensure an effective and thorough evaluation of the FSB concept, four team members were put on contract to support various areas of importance in assessing the overall feasibility of the design approach.
Ye, Tao; Zhou, Fuqiang
2015-04-10
When imaged by detectors, space targets (including satellites and debris) and background stars have similar point-spread functions, and both objects appear to change as detectors track targets. Therefore, traditional tracking methods cannot separate targets from stars and cannot directly recognize targets in 2D images. Consequently, we propose an autonomous space target recognition and tracking approach using a star sensor technique and a Kalman filter (KF). A two-step method for subpixel-scale detection of star objects (including stars and targets) is developed, and the combination of the star sensor technique and a KF is used to track targets. The experimental results show that the proposed method is adequate for autonomously recognizing and tracking space targets.
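As a companion to the two-step detection idea, here is a minimal sketch (threshold rule, test image, and parameter values are mine, with scipy assumed; it is not the paper's detector): star objects are found by thresholding above the background, labeling connected blobs, and refining each position with an intensity-weighted centroid, which is what yields subpixel-scale locations.

```python
import numpy as np
from scipy import ndimage

def detect_star_objects(img, nsigma=5.0):
    """Two-step, subpixel-scale detection sketch: threshold above background,
    label connected blobs, then refine each blob position with an
    intensity-weighted centroid."""
    bkg, noise = np.median(img), img.std()
    labels, n = ndimage.label(img > bkg + nsigma * noise)
    centroids = ndimage.center_of_mass(img - bkg, labels, range(1, n + 1))
    return np.array(centroids)          # (row, col) at subpixel precision

img = np.random.default_rng(2).normal(100.0, 5.0, (256, 256))
img[120:123, 40:43] += 300.0            # a synthetic point source
print(detect_star_objects(img))
```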
Flight Approach to Adaptive Control Research
NASA Technical Reports Server (NTRS)
Pavlock, Kate Maureen; Less, James L.; Larson, David Nils
2011-01-01
The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The testbed served as a full-scale vehicle to test and validate adaptive flight control research addressing technical challenges involved with reducing risk to enable safe flight in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.
Design and Analysis of a Formation Flying System for the Cross-Scale Mission Concept
NASA Technical Reports Server (NTRS)
Cornara, Stefania; Bastante, Juan C.; Jubineau, Franck
2007-01-01
The ESA-funded "Cross-Scale Technology Reference Study has been carried out with the primary aim to identify and analyse a mission concept for the investigation of fundamental space plasma processes that involve dynamical non-linear coupling across multiple length scales. To fulfill this scientific mission goal, a constellation of spacecraft is required, flying in loose formations around the Earth and sampling three characteristic plasma scale distances simultaneously, with at least two satellites per scale: electron kinetic (10 km), ion kinetic (100-2000 km), magnetospheric fluid (3000-15000 km). The key Cross-Scale mission drivers identified are the number of S/C, the space segment configuration, the reference orbit design, the transfer and deployment strategy, the inter-satellite localization and synchronization process and the mission operations. This paper presents a comprehensive overview of the mission design and analysis for the Cross-Scale concept and outlines a technically feasible mission architecture for a multi-dimensional investigation of space plasma phenomena. The main effort has been devoted to apply a thorough mission-level trade-off approach and to accomplish an exhaustive analysis, so as to allow the characterization of a wide range of mission requirements and design solutions.
Millimeterwave Space Power Grid architecture development 2012
NASA Astrophysics Data System (ADS)
Komerath, Narayanan; Dessanti, Brendan; Shah, Shaan
This is an update of the Space Power Grid architecture for space-based solar power with an improved design of the collector/converter link, the primary heater and the radiator of the active thermal control system. The Space Power Grid offers an evolutionary approach towards TeraWatt-level space-based solar power. The use of millimeter wave frequencies (around 220 GHz) and Low-Mid Earth Orbits shrinks the size of the space and ground infrastructure to manageable levels. In prior work we showed that using Brayton cycle conversion of solar power allows large economies of scale compared to the linear mass-power relationship of photovoltaic conversion. With high-temperature materials permitting 3600 K temperature in the primary heater, over 80 percent cycle efficiency was shown with a closed helium cycle for the 1 GW converter satellite which formed the core element of the architecture. Work done since the last IEEE conference has shown that the use of waveguides incorporated into lighter-than-air antenna platforms can overcome the difficulties in transmitting millimeter wave power through the moist, dense lower atmosphere. A graphene-based radiator design conservatively meets the mass budget for the waste heat rejection system needed for the compressor inlet temperature. Placing the ultralight Mirasol collectors in lower orbits overcomes the solar beam spot size problem of high-orbit collection. The architecture begins by establishing a power exchange with terrestrial renewable energy plants, creating an early revenue generation approach with low investment. The approach allows for technology development and demonstration of high power millimeter wave technology. A multinational experiment using the International Space Station and another power exchange satellite is proposed to gather required data and experience, thus reducing the technical and policy risks. The full-scale architecture deploys pairs of Mirasol sunlight collectors and Girasol 1 GW converter satellites to ramp up the space solar power level to over 5.6 TeraWatts by year 50 from project start. Runway-based launch and landing are required to achieve the launch productivity as well as the cost reductions to enable such a large deployment on schedule. Advancements in the certainty of millimeter wave conversion technology and runway-based space access are seen to be the outstanding issues in proceeding to full-scale Space Solar Power.
WAKES: Wavelet Adaptive Kinetic Evolution Solvers
NASA Astrophysics Data System (ADS)
Mardirian, Marine; Afeyan, Bedros; Larson, David
2016-10-01
We are developing a general capability to solve phase space evolution equations by mixing particle and continuum techniques in an adaptive manner. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation to occur with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step-size control in explicit versus implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a Grant from the AFOSR.
Technique for forcing high Reynolds number isotropic turbulence in physical space
NASA Astrophysics Data System (ADS)
Palmore, John A.; Desjardins, Olivier
2018-03-01
Many common engineering problems involve the study of turbulence interaction with other physical processes. For many such physical processes, solutions are expressed most naturally in physical space, necessitating solution methods that operate in physical space. For simulating isotropic turbulence in physical space, linear forcing is a commonly used strategy because it produces realistic turbulence in an easy-to-implement formulation. However, the method resolves a smaller range of scales on the same mesh than spectral forcing. We propose an alternative approach for turbulence forcing in physical space that uses the low-pass filtered velocity field as the basis of the forcing term. This method is shown to double the range of scales captured by linear forcing while maintaining the flexibility and low computational cost of the original method. This translates to a 60% increase of the Taylor microscale Reynolds number on the same mesh. An extension is made to scalar mixing wherein a scalar field is forced to have an arbitrarily chosen, constant variance. Filtered linear forcing of the scalar field allows for control over the length scale of scalar injection, which could be important when simulating scalar mixing.
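A rough sketch of the idea, under assumptions of my own (periodic box, sharp spectral cutoff, a single velocity component, illustrative parameter values): classic linear forcing adds f = A u, while the filtered variant adds f = A ū, with ū the low-pass filtered velocity, so that energy is injected only at the large scales. The paper's filter and normalization may differ.

```python
import numpy as np

def filtered_linear_forcing(u, k_cut, A):
    """Filtered linear forcing sketch for one velocity component on a
    periodic box: low-pass filter u in Fourier space, then return A*u_filtered
    as the forcing term."""
    n = u.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    u_hat = np.fft.fftn(u)
    u_hat[kmag > k_cut] = 0.0                      # sharp spectral low-pass
    return A * np.real(np.fft.ifftn(u_hat))

u = np.random.default_rng(3).normal(size=(32, 32, 32))
f = filtered_linear_forcing(u, k_cut=2.0, A=0.1)
```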
Intercomparison of 3D pore-scale flow and solute transport simulation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.
2016-09-01
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include methods that 1) explicitly model the three-dimensional geometry of pore spaces and 2) those that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of class 1, based on direct numerical simulation using computational fluid dynamics (CFD) codes, against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of class 1 based on the immersed-boundary method (IMB), lattice Boltzmann method (LBM), smoothed particle hydrodynamics (SPH), as well as a model of class 2 (a pore-network model or PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results with previously reported experimental observations. Experimental observations are limited to measured pore-scale velocities, so solute transport comparisons are made only among the various models. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations).
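To make the class-2 (pore-network) idea concrete, here is a deliberately tiny sketch, not any of the codes compared in the study: pores are nodes, throats are edges with hydraulic conductances, and mass balance at each interior pore yields a linear system for the pore pressures; the inlet-to-outlet flow then gives a permeability via Darcy's law. The conductance values are hypothetical; real PNMs derive them from throat geometry.

```python
import numpy as np

def pnm_chain_pressures(g, p_in=1.0, p_out=0.0):
    """Solve pore pressures for a 1D chain of pores connected by throats.

    g[i] is the conductance of throat i; there are len(g)-1 interior pores
    between the fixed-pressure inlet and outlet. Mass balance at pore j:
    g[j]*(p[j-1]-p[j]) + g[j+1]*(p[j+1]-p[j]) = 0.
    """
    m = len(g) - 1                         # number of interior pores
    A, b = np.zeros((m, m)), np.zeros(m)
    for j in range(m):                     # assemble the Laplacian-like system
        A[j, j] = g[j] + g[j + 1]
        if j > 0:
            A[j, j - 1] = -g[j]
        if j < m - 1:
            A[j, j + 1] = -g[j + 1]
    b[0], b[-1] = g[0] * p_in, g[-1] * p_out
    p = np.linalg.solve(A, b)
    q = g[0] * (p_in - p[0])               # flow rate; permeability via Darcy
    return p, q

p, q = pnm_chain_pressures(g=np.array([1.0, 0.5, 2.0, 1.0]))
print("pore pressures:", p.round(3), " flow:", round(q, 3))
```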
Cross-scale interactions: Quantifying multi-scaled cause–effect relationships in macrosystems
Soranno, Patricia A.; Cheruvelil, Kendra S.; Bissell, Edward G.; Bremigan, Mary T.; Downing, John A.; Fergus, Carol E.; Filstrup, Christopher T.; Henry, Emily N.; Lottig, Noah R.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.
2014-01-01
Ecologists are increasingly discovering that ecological processes are made up of components that are multi-scaled in space and time. Some of the most complex of these processes are cross-scale interactions (CSIs), which occur when components interact across scales. When undetected, such interactions may cause errors in extrapolation from one region to another. CSIs, particularly those that include a regional scaled component, have not been systematically investigated or even reported because of the challenges of acquiring data at sufficiently broad spatial extents. We present an approach for quantifying CSIs and apply it to a case study investigating one such interaction, between local and regional scaled land-use drivers of lake phosphorus. Ultimately, our approach for investigating CSIs can serve as a basis for efforts to understand a wide variety of multi-scaled problems such as climate change, land-use/land-cover change, and invasive species.
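As a bare-bones illustration of quantifying a CSI, consider a regression with a cross-level product term: if the coefficient on local × regional is non-zero, the local driver's effect depends on regional context. The sketch below uses synthetic data and plain least squares (the authors' actual approach is hierarchical; every name and value here is mine) and recovers such an interaction.

```python
import numpy as np

# Toy cross-scale interaction: lake phosphorus modeled from a local land-use
# driver, a regional driver, and their product term.
rng = np.random.default_rng(9)
local = rng.uniform(0, 1, 500)                      # e.g., local agricultural land use
regional = np.repeat(rng.uniform(0, 1, 10), 50)     # regional-scale driver, 10 regions
y = 1.0 + 0.5*local + 0.3*regional + 1.2*local*regional + rng.normal(0, 0.1, 500)

X = np.column_stack([np.ones_like(y), local, regional, local * regional])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("interaction coefficient:", round(beta[3], 2))  # recovers ~1.2
```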
Servidio, S; Chasapis, A; Matthaeus, W H; Perrone, D; Valentini, F; Parashar, T N; Veltri, P; Gershman, D; Russell, C T; Giles, B; Fuselier, S A; Phan, T D; Burch, J
2017-11-17
Plasma turbulence is investigated using unprecedented high-resolution ion velocity distribution measurements by the Magnetospheric Multiscale mission (MMS) in the Earth's magnetosheath. This novel observation of a highly structured particle distribution suggests a cascadelike process in velocity space. Complex velocity space structure is investigated using a three-dimensional Hermite transform, revealing, for the first time in observational data, a power-law distribution of moments. In analogy to hydrodynamics, a Kolmogorov approach leads directly to a range of predictions for this phase-space transport. The scaling theory is found to be in agreement with observations. The combined use of state-of-the-art MMS data sets, novel implementation of a Hermite transform method, and scaling theory of the velocity cascade opens new pathways to the understanding of plasma turbulence and the crucial velocity space features that lead to dissipation in plasmas.
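The MMS analysis uses a full three-dimensional Hermite transform; the one-dimensional sketch below (synthetic distribution, orthonormal physicists' Hermite functions, all names mine) shows the basic operation of projecting f(v) onto Hermite functions and reading off a moment "power" per order m, whose fall-off with m is what the scaling theory addresses.

```python
import numpy as np
from numpy.polynomial import hermite as H
from math import factorial, pi, sqrt

def hermite_spectrum(v, f, mmax=16):
    """Hermite 'power' spectrum |c_m|^2 of a 1D velocity distribution, using
    orthonormal Hermite functions
    psi_m(v) = H_m(v) exp(-v^2/2) / sqrt(2^m m! sqrt(pi))."""
    dv = v[1] - v[0]
    power = np.empty(mmax)
    for m in range(mmax):
        Hm = H.hermval(v, [0.0] * m + [1.0])         # physicists' H_m(v)
        psi = Hm * np.exp(-v**2 / 2) / sqrt(2.0**m * factorial(m) * sqrt(pi))
        power[m] = (np.sum(f * psi) * dv) ** 2       # projection coefficient, squared
    return power

v = np.linspace(-8, 8, 2001)
f = np.exp(-(v - 0.4)**2 / 2) * (1 + 0.2 * np.sin(3 * v))  # structured distribution
print(hermite_spectrum(v, f, 8).round(5))
```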
Realism and Perspectivism: a Reevaluation of Rival Theories of Spatial Vision.
NASA Astrophysics Data System (ADS)
Thro, E. Broydrick
1990-01-01
My study reevaluates two theories of human space perception, a trigonometric surveying theory I call perspectivism and a "scene recognition" theory I call realism. Realists believe that retinal image geometry can supply no unambiguous information about an object's size and distance--and that, as a result, viewers can locate objects in space only by making discretionary interpretations based on familiar experience of object types. Perspectivists, in contrast, think viewers can disambiguate object sizes/distances on the basis of retinal image information alone. More specifically, they believe the eye responds to perspective image geometry with an automatic trigonometric calculation that not only fixes the directions and shapes, but also roughly fixes the sizes and distances of scene elements in space. Today this surveyor theory has been largely superseded by the realist approach, because most vision scientists believe retinal image geometry is ambiguous about the scale of space. However, I show that there is a considerable body of neglected evidence, both past and present, tending to call this scale ambiguity claim into question. I maintain that this evidence against scale ambiguity could hardly be more important, if one considers its subversive implications for the scene recognition theory that is not only today's reigning approach to spatial vision, but also the foundation for computer scientists' efforts to create space-perceiving robots. If viewers were deemed to be capable of automatic surveying calculations, the discretionary scene recognition theory would lose its main justification. Clearly, it would be difficult for realists to maintain that we viewers rely on scene recognition for space perception in spite of our ability to survey. And in reality, as I show, the surveyor theory does a much better job of describing the everyday space we viewers actually see--a space featuring stable, unambiguous relationships among scene elements, and a single horizon and vanishing point for (meter-scale) receding objects. In addition, I argue, the surveyor theory raises fewer philosophical difficulties, because it is more in harmony with our everyday concepts of material objects, human agency and the self.
A computational method for detecting copy number variations using scale-space filtering
2013-01-01
Background: As next-generation sequencing technology made rapid and cost-effective sequencing available, the importance of computational approaches in finding and analyzing copy number variations (CNVs) has been amplified. Furthermore, most genome projects need to accurately analyze sequences with fairly low-coverage read data. It is urgently needed to develop a method to detect the exact types and locations of CNVs from low coverage read data. Results: Here, we propose a new CNV detection method, CNV_SS, which uses scale-space filtering. The scale-space filtering is evaluated by applying Gaussian convolution to the read coverage data at various scales according to a given scaling parameter. Next, by differentiating twice and finding zero-crossing points, inflection points of the scale-space filtered read coverage data are calculated per scale. Then, the types and the exact locations of CNVs are obtained by analyzing the fingerprint map, the contours of zero-crossing points across the various scales. Conclusions: The performance of CNV_SS showed that FNR and FPR stay in the range of 1.27% to 2.43% and 1.14% to 2.44%, respectively, even at a relatively low coverage (0.5× ≤ C ≤ 2×). CNV_SS also gave much more effective results than the conventional methods in terms of FNR, from 3.82% at least up to 76.97% at most, even when the coverage level of the read data is low. CNV_SS source code is freely available from http://dblab.hallym.ac.kr/CNV SS/. PMID:23418726
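A minimal sketch of the scale-space machinery described above (synthetic coverage and hand-picked scales; this is not the CNV_SS code): smooth the read-coverage signal with Gaussians of increasing sigma, take the second derivative, and track the zero-crossing positions per scale; stacking these per-scale contours gives the fingerprint map used to localize change points such as a duplication.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def scale_space_zero_crossings(coverage, scales):
    """For each Gaussian scale, smooth the coverage signal, take its second
    derivative, and record zero-crossing positions (the per-scale contours
    of the fingerprint map)."""
    fingerprint = {}
    for s in scales:
        d2 = gaussian_filter1d(coverage.astype(float), sigma=s, order=2)
        fingerprint[s] = np.where(np.diff(np.sign(d2)) != 0)[0]
    return fingerprint

rng = np.random.default_rng(4)
cov = rng.poisson(30, 5000).astype(float)
cov[2000:2600] *= 2.0                      # a synthetic duplication
fp = scale_space_zero_crossings(cov, scales=[5, 10, 20, 40, 80])
print({s: len(z) for s, z in fp.items()})  # coarse scales isolate the CNV edges
```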
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
We present a pedagogical systematic investigation of the accuracy of Eulerian and Lagrangian perturbation theories of large-scale structure. We show that significant differences exist between them, especially when trying to model the Baryon Acoustic Oscillations (BAO). We find that the best available model of the BAO in real space is the Zel'dovich Approximation (ZA), giving an accuracy of ≲ 3% at redshift z = 0 in modelling the matter 2-pt function around the acoustic peak. All corrections to the ZA around the BAO scale are perfectly perturbative in real space. Any attempt to achieve better precision requires calibrating the theory to simulations because of the need to renormalize those corrections. In contrast, theories which do not fully preserve the ZA as their solution receive O(1) corrections around the acoustic peak in real space at z = 0, and are thus of suspicious convergence at low redshift around the BAO. As an example, we find that a similar accuracy of 3% for the acoustic peak is achieved by Eulerian Standard Perturbation Theory (SPT) at linear order only at z ≈ 4. Thus even when SPT is perturbative, one needs to include loop corrections for z ≲ 4 in real space. In Fourier space, all models perform similarly, and are controlled by the overdensity amplitude, thus recovering standard results. However, that comes at a price. Real space cleanly separates the BAO signal from non-linear dynamics. In contrast, Fourier space mixes signal from short mildly non-linear scales with the linear signal from the BAO to the level that non-linear contributions from short scales dominate. Therefore, one has little hope in constructing a systematic theory for the BAO in Fourier space.
Sanders, David M.; Decker, Derek E.
1999-01-01
Optical patterns and lithographic techniques are used as part of a process to embed parallel and evenly spaced conductors in the non-planar surfaces of an insulator to produce high gradient insulators. The approach extends the size at which high gradient insulating structures can be fabricated and improves the performance of those insulators by reducing the scale of the alternating parallel lines of insulator and conductor along the surface. This fabrication approach also substantially decreases the cost required to produce high gradient insulators.
Intercomparison of 3D pore-scale flow and solute transport simulation methods
Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; ...
2015-09-28
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.
Properties of small-scale interfacial turbulence from a novel thermography based approach
NASA Astrophysics Data System (ADS)
Schnieders, Jana; Garbe, Christoph
2013-04-01
Oceans cover nearly two thirds of the earth's surface and exchange processes between the atmosphere and the ocean are of fundamental environmental importance. At the air-sea interface, complex interaction processes take place on a multitude of scales. Turbulence plays a key role in the coupling of momentum, heat and mass transfer [2]. Here we use high resolution infrared imagery to visualize near surface aqueous turbulence. Thermographic data is analyzed from a range of laboratory facilities and experimental conditions, with wind speeds ranging from 1 m/s to 7 m/s and various surface conditions. The surface heat pattern is formed by distinct structures on two scales: small-scale, short-lived structures termed fish scales, and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: (1) the surface heat patterns show characteristic features on these two scales; (2) the structure of these patterns changes with increasing wind stress and surface conditions. We present a new image processing based approach to the analysis of the spacing of cold streaks, based on a machine learning approach [4, 1] to classify the thermal footprints of near surface turbulence. Our random forest classifier is based on classical features in image processing such as gray value gradients and edge detecting features. The result is a pixel-wise classification of the surface heat pattern with a subsequent analysis of the streak spacing. This approach has been presented in [3] and can be applied to a wide range of experimental data. In spite of entirely different boundary conditions, the spacing of turbulent cells near the air-water interface seems to match the expected turbulent cell size for flow near a no-slip wall. The analysis of the spacing of cold streaks shows consistent behavior in a range of laboratory facilities when expressed as a function of the water-sided friction velocity u*. The scales systematically decrease until a point of saturation at u* = 0.7 cm/s. Results suggest a saturation in the tangential stress, anticipating that similar behavior will be observed in the open ocean. A comparison with studies of small-scale Langmuir circulations and Langmuir numbers shows that thermal footprints in infrared images are consistent with Langmuir circulations and depend strongly on wind wave conditions. Our approach is not limited to laboratory measurements. In the near future, we will deploy it on in-situ measurements and verify our findings in these more challenging conditions.
References
[1] L. Breimann. Random forests. Machine Learning, 45:5-32, 2001.
[2] S. P. McKenna and W. R. McGillis. The role of free-surface turbulence and surfactants in air-water gas transfer. Int. J. Heat Mass Transfer, 47:539-553, 2004.
[3] J. Schnieders, C. S. Garbe, W. L. Peirson, and C. J. Zappa. Analyzing the footprints of near surface aqueous turbulence - an image processing based approach. Journal of Geophysical Research-Oceans, 2013.
[4] C. Sommer, C. Straehle, U. Koethe, and F. A. Hamprecht. ilastik: Interactive learning and segmentation toolkit. In 8th IEEE International Symposium on Biomedical Imaging (ISBI 2011), 2011.
[5] W.-T. Tsai, S.-M. Chen, and C.-H. Moeng. A numerical study on the evolution and structure of a stress-driven free-surface turbulent shear flow. J. Fluid Mech., 545:163-192, 2005.
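The pixel-wise random forest classification can be sketched compactly. The toy version below assumes scipy and scikit-learn; the features, scales, and the stand-in "annotation" are mine, loosely mirroring the classical gradient and edge features the abstract mentions. It trains a forest on per-pixel multiscale intensity, gradient, and Laplacian features and returns a streak mask.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Per-pixel features: intensity, gradient magnitude, and Laplacian,
    each at several Gaussian scales."""
    feats = []
    for s in (1, 2, 4):
        feats += [ndimage.gaussian_filter(img, s),
                  ndimage.gaussian_gradient_magnitude(img, s),
                  ndimage.gaussian_laplace(img, s)]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

rng = np.random.default_rng(5)
img = rng.normal(size=(64, 64))
labels = (ndimage.gaussian_filter(img, 4) > 0).astype(int).ravel()  # stand-in annotation
X = pixel_features(img)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
streak_mask = clf.predict(X).reshape(img.shape)   # pixel-wise cold-streak map
```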
Abrupt skin lesion border cutoff measurement for malignancy detection in dermoscopy images.
Kaya, Sertan; Bayraktar, Mustafa; Kockara, Sinan; Mete, Mutlu; Halic, Tansel; Field, Halle E; Wong, Henry K
2016-10-06
Automated skin lesion border examination and analysis techniques have become an important field of research for distinguishing malignant pigmented lesions from benign lesions. An abrupt pigment pattern cutoff at the periphery of a skin lesion is one of the most important dermoscopic features for detection of neoplastic behavior. In the current clinical setting, the lesion is divided into a virtual pie with eight sections. Each section is examined by a dermatologist for abrupt cutoff and scored accordingly, which can be tedious and subjective. This study introduces a novel approach to objectively quantify the abruptness of pigment patterns along the lesion periphery. In the proposed approach, first, the skin lesion border is detected by the density based lesion border detection method. Second, the detected border is gradually scaled through vector operations. Then, along the gradually scaled borders, pigment pattern homogeneities are calculated at different scales. Through this process, statistical texture features are extracted. Moreover, different color spaces are examined for the efficacy of texture analysis. The proposed method has been tested and validated on 100 (31 melanoma, 69 benign) dermoscopy images. The results indicate that the proposed method is effective for malignancy detection. More specifically, we obtained a specificity of 0.96 and a sensitivity of 0.86 for malignancy detection in a certain color space. The F-measure, the harmonic mean of recall and precision, of the framework is 0.87. The use of texture homogeneity along the periphery of the lesion border is an effective method to detect malignancy of the skin lesion in dermoscopy images. Among the color spaces tested, the blue channel of the RGB color space is the most informative channel for detecting malignancy, followed by the Cr channel of the YCbCr color space and, closely behind it, the green channel of RGB.
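The abstract does not give the exact homogeneity measure, so the sketch below substitutes a standard one: gray-level co-occurrence (GLCM) homogeneity from scikit-image, computed on a patch that would be sampled along a scaled border; low homogeneity near the periphery would indicate an abrupt pigment-pattern cutoff. The patch, distances, and angles are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def border_homogeneity(patch):
    """Texture homogeneity of an 8-bit patch sampled along a (scaled) lesion
    border, via a gray-level co-occurrence matrix."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "homogeneity").mean()

patch = (np.random.default_rng(10).random((32, 32)) * 255).astype(np.uint8)
print(round(border_homogeneity(patch), 3))
```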
A Practical Computational Method for the Anisotropic Redshift-Space 3-Point Correlation Function
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2018-04-01
We present an algorithm enabling computation of the anisotropic redshift-space galaxy 3-point correlation function (3PCF) scaling as N2, with N the number of galaxies. Our previous work showed how to compute the isotropic 3PCF with this scaling by expanding the radially-binned density field around each galaxy in the survey into spherical harmonics and combining these coefficients to form multipole moments. The N2 scaling occurred because this approach never explicitly required the relative angle between a galaxy pair about the primary galaxy. Here we generalize this work, demonstrating that in the presence of azimuthally-symmetric anisotropy produced by redshift-space distortions (RSD) the 3PCF can be described by two triangle side lengths, two independent total angular momenta, and a spin. This basis for the anisotropic 3PCF allows its computation with negligible additional work over the isotropic 3PCF. We also present the covariance matrix of the anisotropic 3PCF measured in this basis. Our algorithm tracks the full 5-D redshift-space 3PCF, uses an accurate line of sight to each triplet, is exact in angle, and easily handles edge correction. It will enable use of the anisotropic large-scale 3PCF as a probe of RSD in current and upcoming large-scale redshift surveys.
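Per the abstract, the core N² trick is that the radially binned density around each galaxy is expanded in spherical harmonics, so the relative angle between neighbor pairs never appears explicitly. Below is a minimal single-galaxy sketch (scipy assumed; bins and the neighbor sample are synthetic, and this is only the isotropic building block, not the authors' full anisotropic pipeline) computing the a_lm coefficients per radial bin.

```python
import numpy as np
from scipy.special import sph_harm  # scipy's classic spherical-harmonic routine

def alm_coefficients(rel_pos, r_edges, lmax=4):
    """Expand the radially binned neighbor field around one galaxy into
    spherical-harmonic coefficients a_lm(bin)."""
    r = np.linalg.norm(rel_pos, axis=1)
    polar = np.arccos(np.clip(rel_pos[:, 2] / np.maximum(r, 1e-12), -1, 1))
    azim = np.arctan2(rel_pos[:, 1], rel_pos[:, 0])
    alm = {}
    for b in range(len(r_edges) - 1):
        sel = (r >= r_edges[b]) & (r < r_edges[b + 1])
        for ell in range(lmax + 1):
            for m in range(-ell, ell + 1):
                # scipy convention: sph_harm(m, ell, azimuthal, polar)
                alm[(b, ell, m)] = np.sum(np.conj(sph_harm(m, ell, azim[sel], polar[sel])))
    return alm

rng = np.random.default_rng(6)
neighbors = rng.uniform(-1, 1, size=(500, 3))        # positions relative to the galaxy
alm = alm_coefficients(neighbors, r_edges=np.array([0.1, 0.4, 0.7, 1.0]))
```

Multipole moments of the 3PCF then follow from products of these coefficients across bin pairs, such as the sum over m of a_lm(b1) times the conjugate of a_lm(b2); the anisotropic generalization adds the second total angular momentum and spin indices described in the abstract.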
On the accuracy of modelling the dynamics of large space structures
NASA Technical Reports Server (NTRS)
Diarra, C. M.; Bainum, P. M.
1985-01-01
Proposed space missions will require large scale, lightweight, space-based structural systems. Large space structure technology (LSST) systems will have to accommodate (among others): ocean data systems; electronic mail systems; large multibeam antenna systems; and space-based solar power systems. The structures are to be delivered into orbit by the space shuttle. Because of their inherent size, modelling techniques and scaling algorithms must be developed so that system performance can be predicted accurately prior to launch and assembly. When the size and weight-to-area ratio of proposed LSST systems dictate that the entire system be considered flexible, there are two basic modeling methods which can be used. The first is a continuum approach, a mathematical formulation for predicting the motion of a general orbiting flexible body, in which elastic deformations are considered small compared with characteristic body dimensions. This approach is based on an a priori knowledge of the frequencies and shape functions of all modes included within the system model. Alternatively, finite element techniques can be used to model the entire structure as a system of lumped masses connected by a series of (restoring) springs and possibly dampers. In addition, a computational algorithm was developed to evaluate the coefficients of the various coupling terms in the equations of motion as applied to the finite element model of the Hoop/Column.
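In the lumped-mass spirit of the second method, here is a minimal free-free mass-spring chain (all values hypothetical, not from the Hoop/Column model): assembling stiffness and mass matrices and solving the generalized eigenproblem K x = w² M x yields the natural frequencies, the first of which is the near-zero rigid-body mode expected for an orbiting structure.

```python
import numpy as np

def chain_modes(masses, stiffnesses):
    """Natural frequencies of a free-free lumped-mass/spring chain, a crude
    discrete analogue of a flexible boom: solve K x = w^2 M x."""
    n = len(masses)
    M = np.diag(masses)
    K = np.zeros((n, n))
    for i, k in enumerate(stiffnesses):        # spring i joins nodes i and i+1
        K[i:i + 2, i:i + 2] += k * np.array([[1, -1], [-1, 1]])
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.abs(w2.real)))   # rad/s; first mode ~0 (rigid body)

print(chain_modes(masses=[10.0] * 5, stiffnesses=[1e4] * 4).round(2))
```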
Cross-indexing of binary SIFT codes for large-scale image search.
Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi
2014-05-01
In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. It also benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. In addition, we propose a new searching strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
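A toy illustration of why binary codes pay off (the thresholding rule here is a simplified stand-in for the FSB scheme, which exploits the descriptor's magnitude patterns): binarize descriptors, then compare with Hamming distance, which is far cheaper than Euclidean distance on floats.

```python
import numpy as np

def binarize_sift(desc):
    """Binarize 128-D SIFT descriptors by thresholding each against its own
    median magnitude (a simplified stand-in for FSB)."""
    t = np.median(desc, axis=1, keepdims=True)
    return (desc > t).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between binary codes."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(7)
d1, d2 = rng.random((2, 1, 128))            # two synthetic SIFT descriptors
print("Hamming distance:", hamming(binarize_sift(d1), binarize_sift(d2)))
```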
2012-09-13
CAPE CANAVERAL, Fla. – Reporters look over a model of the Shuttle Carrier Aircraft, or SCA, and a space shuttle during a tour of the real Shuttle Carrier Aircraft. The model is a radio-controlled scale version of the modified 747 that was used to test theories for how the space shuttle would separate from the SCA during approach and landing tests. Photo credit: NASA/Kim Shiflett
2012-09-13
CAPE CANAVERAL, Fla. – A visitor looks over a model of the Shuttle Carrier Aircraft, or SCA, and a space shuttle during a tour of the real Shuttle Carrier Aircraft. The model is a radio-controlled scale version of the modified 747 that was used to test theories for how the space shuttle would separate from the SCA during approach and landing tests. Photo credit: NASA/Kim Shiflett
Cell culture experiments planned for the space bioreactor
NASA Technical Reports Server (NTRS)
Morrison, Dennis R.; Cross, John H.
1987-01-01
Culturing of cells in a pilot-scale bioreactor remains to be done in microgravity. An approach is presented based on several studies of cell culture systems. Previous and current cell culture research in microgravity which is specifically directed towards development of a space bioprocess is described. Cell culture experiments planned for a microgravity sciences mission are described in abstract form.
NASA Astrophysics Data System (ADS)
Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae
2018-02-01
This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic maximizes space utilization by fitting appropriate rectangles from both side walls of the current residual space, layer by layer. An iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
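A deliberately simplified sketch of the bidirectional idea (the paper's heuristic is more elaborate and adds the iterative local search with a shift strategy; the example data and the shelf rule are mine): fill each layer alternately from the left and right wall with the best-fitting remaining rectangle, then start a new layer.

```python
def bidirectional_shelf_pack(rects, width):
    """Layer-by-layer bidirectional placement sketch. rects is a list of
    (w, h) tuples; returns (x, y, w, h) placements in a strip of given width.
    Assumes every rectangle fits the strip width."""
    assert all(w <= width for w, _ in rects)
    rects = sorted(rects, key=lambda wh: wh[1], reverse=True)  # tallest first
    placements, y = [], 0
    while rects:
        shelf_h = rects[0][1]                  # shelf height = tallest remaining
        left, right, from_left = 0, width, True
        placed = True
        while placed:
            placed = False
            for i, (w, h) in enumerate(rects): # first rectangle that fits the gap
                if w <= right - left and h <= shelf_h:
                    x = left if from_left else right - w
                    placements.append((x, y, w, h))
                    left, right = (left + w, right) if from_left else (left, right - w)
                    from_left = not from_left  # alternate walls
                    rects.pop(i)
                    placed = True
                    break
        y += shelf_h                           # close the layer
    return placements

print(bidirectional_shelf_pack([(4, 3), (3, 3), (2, 2), (5, 1), (2, 1)], width=8))
```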
Transport regimes spanning magnetization-coupling phase space
NASA Astrophysics Data System (ADS)
Baalrud, Scott D.; Daligault, Jérôme
2017-10-01
The manner in which transport properties vary over the entire parameter-space of coupling and magnetization strength is explored. Four regimes are identified based on the relative size of the gyroradius compared to other fundamental length scales: the collision mean free path, Debye length, distance of closest approach, and interparticle spacing. Molecular dynamics simulations of self-diffusion and temperature anisotropy relaxation spanning the parameter space are found to agree well with the predicted boundaries. Comparison with existing theories reveals regimes where they succeed, where they fail, and where no theory has yet been developed.
THE FUTURE OF TOXICOLOGY-PREDICTIVE TOXICOLOGY ...
A chemistry approach to predictive toxicology relies on structure-activity relationship (SAR) modeling to predict biological activity from chemical structure. Such approaches have proven capabilities when applied to well-defined toxicity end points or regions of chemical space. These approaches are less well-suited, however, to the challenges of global toxicity prediction, i.e., to predicting the potential toxicity of structurally diverse chemicals across a wide range of end points of regulatory and pharmaceutical concern. New approaches that have the potential to significantly improve capabilities in predictive toxicology are elaborating the “activity” portion of the SAR paradigm. Recent advances in two areas of endeavor are particularly promising. Toxicity data informatics relies on standardized data schema, developed for particular areas of toxicological study, to facilitate data integration and enable relational exploration and mining of data across both historical and new areas of toxicological investigation. Bioassay profiling refers to large-scale high-throughput screening approaches that use chemicals as probes to broadly characterize biological response space, extending the concept of chemical “properties” to the biological activity domain. The effective capture and representation of legacy and new toxicity data into mineable form and the large-scale generation of new bioassay data in relation to chemical toxicity, both employing chemical stru
Energy Efficient Engine acoustic supporting technology report
NASA Technical Reports Server (NTRS)
Lavin, S. P.; Ho, P. Y.
1985-01-01
The acoustic development of the Energy Efficient Engine combined testing and analysis using scale model rigs and an integrated Core/Low Spool demonstration engine. The scale model tests show that a cut-on blade/vane ratio fan with a large spacing (S/C = 2.3) is as quiet as a cut-off blade/vane ratio with a tighter spacing (S/C = 1.27). Scale model mixer tests show that separate flow nozzles are the noisiest, conic nozzles the quietest, with forced mixers in between. Based on projections of ICLS data the Energy Efficient Engine (E3) has FAR 36 margins of 3.7 EPNdB at approach, 4.5 EPNdB at full power takeoff, and 7.2 EPNdB at sideline conditions.
Development of Indigenous Basic Interest Scales: Re-Structuring the Icelandic Interest Space
ERIC Educational Resources Information Center
Einarsdottir, Sif; Eyjolfsdottir, Katrin Osk; Rounds, James
2013-01-01
The present investigation used an emic approach to develop a set of Icelandic indigenous basic interest scales. An indigenous item pool that is representative of the Icelandic labor market was administered to three samples (N = 1043, 1368, and 2218) of upper secondary and higher education students in two studies. A series of item level cluster and…
Space transportation booster engine configuration study. Volume 1: Executive Summary
NASA Technical Reports Server (NTRS)
1989-01-01
The objective of the Space Transportation Booster Engine (STBE) Configuration Study was to contribute to the Advanced Launch System (ALS) development effort by providing highly reliable, low cost booster engine concepts for both expendable and reusable rocket engines. Specifically, the study sought to identify engine configurations which enhance vehicle performance and provide operational flexibility at low cost, and to explore innovative approaches to the follow-on full-scale development (FSD) phase for the STBE.
Fabrication process scale-up and optimization for a boron-aluminum composite radiator
NASA Technical Reports Server (NTRS)
Okelly, K. P.
1973-01-01
Design approaches to a practical utilization of a boron-aluminum radiator for the space shuttle orbiter are presented. The program includes studies of laboratory composite material processes to determine the feasibility of a structural and functional composite radiator panel, and to estimate the cost of its fabrication. The objective is the incorporation of a boron-aluminum modular radiator on the space shuttle.
Southern Impact Testing Alliance (SITA)
NASA Technical Reports Server (NTRS)
Hubbs, Whitney; Roebuck, Brian; Zwiener, Mark; Wells, Brian
2009-01-01
Efforts to form this Alliance began in 2008 to showcase the impact testing capabilities within the southern United States. Impact testing customers can utilize SITA partner capabilities to provide supporting data during all program phases: materials/component/flight hardware design, development, and qualification. This approach would allow programs to reduce risk by providing low cost testing during early development to flush out possible problems before moving on to larger scale, higher cost testing. Various SITA partners would participate in impact testing depending on program phase: materials characterization, component/subsystem characterization, and full-scale system testing for qualification. SITA partners would collaborate with the customer to develop an integrated test approach during early program phases. Modeling and analysis validation can start with small-scale testing to ensure a level of confidence for the next step of large or full-scale conclusive test shots. The Impact Testing Facility (ITF) was established and began its research in spacecraft debris shielding in the early 1960's and played a major role in the International Space Station debris shield development. As a result of return-to-flight testing after the loss of STS-107 (Columbia), the MSFC ITF realized the need to expand its capabilities beyond meteoroid and space debris impact testing. MSFC partnered with the Department of Defense and academic institutions in collaborative efforts to gain and share knowledge that would benefit the Space Agency as well as the DoD. MSFC ITF current capabilities include: hypervelocity impact testing, ballistic impact testing, and environmental impact testing.
A new method of automatic landmark tagging for shape model construction via local curvature scale
NASA Astrophysics Data System (ADS)
Rueda, Sylvia; Udupa, Jayaram K.; Bai, Li
2008-03-01
Segmentation of organs in medical images is a difficult task that very often requires the use of model-based approaches. To build the model, we need an annotated training set of shape examples with correspondences indicated among shapes. Manual positioning of landmarks is a tedious, time-consuming, and error-prone task, and almost impossible in 3D space. To overcome some of these drawbacks, we devised an automatic method based on the notion of c-scale, a new local scale concept. For each boundary element b, the arc length of the largest homogeneous curvature region connected to b is estimated as well as the orientation of the tangent at b. With this shape description method, we can automatically locate mathematical landmarks selected at different levels of detail. The method avoids the use of landmarks for the generation of the mean shape. The selection of landmarks on the mean shape is done automatically using the c-scale method. Then, these landmarks are propagated to each shape in the training set, thereby defining the correspondences among the shapes. Altogether 12 strategies are described along these lines. The methods are evaluated on 40 MRI foot data sets, the object of interest being the talus bone. The results show that, for the same number of landmarks, the proposed methods are more compact than manual and equally spaced annotations. The approach is applicable to spaces of any dimensionality, although we have focused in this paper on 2D shapes.
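Although the full c-scale definition has details beyond this abstract, its ingredients can be sketched: compute the signed curvature along a sampled 2D contour, then, for each boundary element, the connected region where curvature stays nearly constant gives the c-scale via its arc length. The snippet below (synthetic ellipse, hand-picked homogeneity tolerance; not the authors' code) covers the curvature and homogeneity-mask steps.

```python
import numpy as np

def contour_curvature(x, y):
    """Signed curvature along a sampled 2D contour via finite differences
    (endpoint estimates are one-sided, adequate for a sketch)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x, y = np.cos(t), 0.6 * np.sin(t)                  # an ellipse as the test shape
kappa = contour_curvature(x, y)
homog = np.abs(np.gradient(kappa)) < 0.01          # nearly-constant-curvature mask
print("fraction of homogeneous-curvature boundary:", round(homog.mean(), 2))
```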
Analytical Tools for Space Suit Design
NASA Technical Reports Server (NTRS)
Aitchison, Lindsay
2011-01-01
As indicated by the implementation of multiple small project teams within the agency, NASA is adopting a lean approach to hardware development that emphasizes quick product realization and rapid response to shifting program and agency goals. Over the past two decades, space suit design has been evolutionary in approach, with emphasis on building prototypes and then testing with the largest practical range of subjects possible. The results of these efforts show continuous improvement but make scaled design and performance predictions almost impossible with limited budgets and little time. Thus, in an effort to start changing the way NASA approaches space suit design and analysis, the Advanced Space Suit group has initiated the development of an integrated design and analysis tool. It is a multi-year, if not decadal, development effort that, when fully implemented, is envisioned to generate analysis of any given space suit architecture or, conversely, predictions of ideal space suit architectures given specific mission parameters. The master tool will exchange information with a set of five sub-tool groups in order to generate the desired output. The basic functions of each sub-tool group, the initial relationships between the sub-tools, and a comparison to state of the art software and tools are discussed.
Differential settlement of a geosynthetic reinforced soil abutment : full-scale investigation.
DOT National Transportation Integrated Search
2015-05-01
The Geosynthetic Reinforced Soil Integrated Bridge System (GRS-IBS) uses alternating layers of closely spaced : geosynthetic reinforcement and well-compacted granular fill to support the bridge superstructure and form an integrated roadway : approach...
NASA Astrophysics Data System (ADS)
Häyhä, Tiina; Cornell, Sarah; Lucas, Paul; van Vuuren, Detlef; Hoff, Holger
2016-04-01
The planetary boundaries framework proposes precautionary quantitative global limits to the anthropogenic perturbation of crucial Earth system processes. In this way, it marks out a planetary 'safe operating space' for human activities. However, decisions regarding resource use and emissions are mostly made at much smaller scales, mostly by (sub-)national and regional governments, businesses, and other local actors. To operationalize the planetary boundaries, they need to be translated into and aligned with targets that are relevant at these smaller scales. In this paper, we develop a framework that addresses the three dimensions of bridging across scales: biophysical, socio-economic, and ethical, to provide a consistent, universally applicable approach for translating the planetary boundaries into national level, context-specific, and fair shares of the safe operating space. We discuss our findings in the context of previous studies and their implications for future analyses and policymaking. In this way, we help link the planetary boundaries framework to widely applied operational and policy concepts for more robust strong sustainability decision-making.
Adaptive multiscale processing for contrast enhancement
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu; Fan, Jian; Huda, Walter; Honeyman, Janice C.; Steinbach, Barbara G.
1993-07-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms within a continuum of scale space and used to enhance features of importance to mammography. Choosing analyzing functions that are well localized in both space and frequency results in a powerful methodology for image analysis. We describe methods of contrast enhancement based on two overcomplete (redundant) multiscale representations: (1) the dyadic wavelet transform and (2) the φ-transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by non-linear, logarithmic and constant scale-space weight functions. Multiscale edges identified within distinct levels of transform space provide a local support for enhancement throughout each decomposition. We demonstrate that features extracted from wavelet spaces can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
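A compressed illustration of the enhancement pipeline, assuming the PyWavelets package: an ordinary decimated DWT stands in for the paper's overcomplete dyadic-wavelet and φ-transform representations, and a constant gain stands in for its nonlinear scale-space weight functions. The steps are decompose, amplify the detail (edge) coefficients per level, and reconstruct.

```python
import numpy as np
import pywt

def multiscale_enhance(img, wavelet="db2", levels=3, gain=2.0):
    """Decompose an image, amplify detail coefficients at every level
    (leaving the coarse approximation untouched), and reconstruct."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    out = [coeffs[0]]                                  # keep approximation
    for details in coeffs[1:]:
        out.append(tuple(gain * d for d in details))   # amplify multiscale edges
    return pywt.waverec2(out, wavelet)

img = np.random.default_rng(8).normal(size=(128, 128))
enhanced = multiscale_enhance(img)
```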
Notes on a Vision for the Global Space Weather Enterprise
NASA Astrophysics Data System (ADS)
Head, James N.
2015-07-01
Space weather phenomena impact human civilization on a global scale and hence call for a global approach to research, monitoring, and operational forecasting. The Global Space Weather Enterprise (GSWE) could be arranged along lines well established in existing international frameworks related to space exploration or to the use of space to benefit humanity. The Enterprise need not establish a new organization, but could evolve from existing international organizations. A GSWE employing open architectural concepts could be arranged to promote participation by all interested States regardless of current differences in science and technical capacity. Such an Enterprise would engender capacity building and burden sharing opportunities.
Small-scale turbulence detected in Mercury's magnetic field
NASA Astrophysics Data System (ADS)
Schultz, Colin
2011-11-01
With its closest approach a mere 46 million kilometers from the Sun, the blast of the solar wind was supposed to wash away any chance that Mercury could hold on to a magnetic field—an idea rejected by the observations of the Mariner 10 spacecraft in 1974. Though Mercury was shown to harbor a weak magnetic field (one-hundredth the strength of Earth's), its structure, behavior, and interactions with the solar wind remained heavily debated, yet untested, until the 14 January 2008 approach of NASA's MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) orbiter. Using a continuous scalogram analysis—a novel statistical technique in space research—Uritsky et al. analyzed the high-resolution magnetic field strength observations taken by MESSENGER as it flew within a few hundred kilometers of the planet's surface. The authors found turbulence in Mercury's magnetosphere, which they attributed to small-scale interactions between the solar wind plasma and the magnetic field. At large spatial and temporal scales the solar wind can be thought of as a fluid with some magnetic properties—a domain well explained by the theories of magnetohydrodynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seljak, Uroš; McDonald, Patrick
We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ² dependence on large scales is the cross-correlation between the density and scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ² and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to μ⁴ dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter, highlighting the issue of scale dependent bias of velocity moments correlators.
We Must Take the Next Steps Towards Safe, Routine Space Travel
NASA Technical Reports Server (NTRS)
Lyles, G. M.
2000-01-01
This paper presents, in viewgraph form, six and a half generations of airplanes in a century. Some of the topics include: 1) Enterprise goals; 2) Generations of Reusable Launch Vehicles; 3) Space Transportation Across NASA; 4) Three-Tiered Implementation Approach for Future Space Transportation Technology; 5) Develop a Comprehensive, Agency-Level Space Transportation Plan That Will Enable NASA's Strategic Plan; 6) Timeline for Addressing NASA's Needs; 7) Significant 2nd Generation Technology Drivers; 8) Example Large Scale Ground Demonstrations; and 9) Example Pathfinder Demonstrations. The paper also includes various aircraft designs and propulsion system technology.
Transport regimes spanning magnetization-coupling phase space
Baalrud, Scott D.; Daligault, Jérôme
2017-10-06
The manner in which transport properties vary over the entire parameter-space of coupling and magnetization strength is explored in this paper. Four regimes are identified based on the relative size of the gyroradius compared to other fundamental length scales: the collision mean free path, Debye length, distance of closest approach, and interparticle spacing. Molecular dynamics simulations of self-diffusion and temperature anisotropy relaxation spanning the parameter space are found to agree well with the predicted boundaries. Finally, comparison with existing theories reveals regimes where they succeed, where they fail, and where no theory has yet been developed.
MUSIC: MUlti-Scale Initial Conditions
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Abel, Tom
2013-11-01
MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel, together with an adaptive multi-grid Poisson solver, to generate displacements and velocities following first-order (1LPT) or second-order (2LPT) Lagrangian perturbation theory. MUSIC achieves rms relative errors of order 10⁻⁴ for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing.
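The core step, drawing Gaussian white noise and convolving it with a transfer-function kernel in Fourier space, can be illustrated on a single periodic grid. This is a toy version: MUSIC's adaptive multi-grid machinery and LPT steps are omitted, and the power-law spectrum is a placeholder.

```python
import numpy as np

def gaussian_field(n=128, boxsize=100.0, n_s=-2.0, seed=1):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n, n))            # white noise field
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kk = np.sqrt(kx**2 + ky**2 + kz**2)
    kk[0, 0, 0] = 1.0                                 # avoid divide-by-zero at k=0
    transfer = kk ** (n_s / 2.0)                      # amplitude ~ sqrt(P(k)), placeholder
    transfer[0, 0, 0] = 0.0                           # zero the mean mode
    return np.fft.ifftn(np.fft.fftn(noise) * transfer).real
```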
Uniform color space analysis of LACIE image products
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.
1979-01-01
The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.
Critical scales to explain urban hydrological response: an application in Cranbrook, London
NASA Astrophysics Data System (ADS)
Cristiano, Elena; ten Veldhuis, Marie-Claire; Gaitan, Santiago; Ochoa Rodriguez, Susana; van de Giesen, Nick
2018-04-01
Rainfall variability in space and time, in relation to catchment characteristics and model complexity, plays an important role in explaining the sensitivity of hydrological response in urban areas. In this work we present a new approach to classify rainfall variability in space and time, and we use this classification to investigate rainfall aggregation effects on urban hydrological response. Nine rainfall events, measured with a dual polarimetric X-Band radar instrument at the CAESAR site (Cabauw Experimental Site for Atmospheric Research, NL), were aggregated in time and space in order to obtain different resolution combinations. The aim of this work was to investigate the influence that rainfall and catchment scales have on hydrological response in urban areas. Three dimensionless scaling factors were introduced to investigate the interactions between rainfall and catchment scale and rainfall input resolution in relation to the performance of the model. Results showed that (1) rainfall classification based on cluster identification represents the storm core well, (2) aggregation effects are stronger for rainfall than for flow, (3) model complexity does not have a strong influence compared to catchment and rainfall scales for this case study, and (4) the scaling factors allow an adequate rainfall resolution to be selected to obtain a given level of accuracy in the calculated hydrological response.
Subgrid-scale parameterization and low-frequency variability: a response theory approach
NASA Astrophysics Data System (ADS)
Demaeyer, Jonathan; Vannitsem, Stéphane
2016-04-01
Weather and climate models are limited in the range of spatial and temporal scales they can resolve. However, due to the huge space- and time-scale ranges involved in the Earth System dynamics, the effects of many sub-grid processes must be parameterized. These parameterizations have an impact on the forecasts or projections. They can also affect the low-frequency variability present in the system (such as that associated with ENSO or the NAO). An important question is therefore what impact stochastic parameterizations have on the low-frequency variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, proposed in Wouters and Lucarini (2012). We test this approach in the context of a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), for which part of the atmospheric modes is considered unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, fluctuation and long-memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach to scale separation opens new avenues for subgrid-scale parameterization in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.
Generalization of mixed multiscale finite element methods with applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C S
Many science and engineering problems exhibit scale disparity and high contrast. The small-scale features cannot be omitted in the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some types of model reduction techniques are required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of coarse spaces. Two enrichment approaches are proposed: one is based on the generalized multiscale finite element method (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Applications of the algorithm in two-phase flow simulations are demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability and convergence analysis, and exhaustive numerical experiments, are carried out to validate the proposed enrichment approaches.
Scaling for hard-sphere colloidal glasses near jamming
NASA Astrophysics Data System (ADS)
Zargar, Rojman; DeGiuli, Eric; Bonn, Daniel
2016-12-01
Hard-sphere colloids are model systems in which to study the glass transition and universal properties of amorphous solids. Using covariance matrix analysis to determine the vibrational modes, we experimentally measure here the scaling behavior of the density of states, shear modulus, and mean-squared displacement (MSD) in a hard-sphere colloidal glass. Scaling the frequency with the boson-peak frequency, we find that the density of states at different volume fractions all collapse on a single master curve, which obeys a power law in terms of the scaled frequency. Below the boson peak, the exponent is consistent with theoretical results obtained by real-space and phase-space approaches to understanding amorphous solids. We find that the shear modulus and the MSD are nearly inversely proportional, and show a singular power-law dependence on the distance from random close packing. Our results are in very good agreement with the theoretical predictions.
DOT National Transportation Integrated Search
2015-05-01
The Geosynthetic Reinforced Soil Integrated Bridge System (GRS-IBS) uses alternating layers of closely spaced geosynthetic reinforcement and well-compacted granular fill to support the bridge superstructure and form an integrated roadway approach...
A short essay on quantum black holes and underlying noncommutative quantized space-time
NASA Astrophysics Data System (ADS)
Tanaka, Sho
2017-01-01
We emphasize the importance of noncommutative geometry or Lorentz-covariant quantized space-time towards the ultimate theory of quantum gravity and Planck scale physics. We focus our attention on the statistical and substantial understanding of the Bekenstein-Hawking area-entropy law of black holes in terms of the kinematical holographic relation (KHR). KHR manifestly holds in Yang's quantized space-time as the result of the kinematical reduction of spatial degrees of freedom caused by its own nature of noncommutative geometry, and plays an important role in our approach without any recourse to the familiar hypothesis, the so-called holographic principle. In the present paper, we find a unified form of KHR applicable to the whole region ranging from macroscopic to microscopic scales in spatial dimension d = 3. We notice a possibility of nontrivial modification of the area-entropy law of black holes, which becomes most remarkable in extremely microscopic systems close to the Planck scale.
Scaling Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin
2016-01-01
For long-duration space missions outside of Earth orbit, reliability considerations will drive higher levels of redundancy and/or on-board spares for life support equipment. Component scaling will be a critical element in minimizing overall launch mass while maintaining an acceptable level of system reliability. Building on an earlier reliability study (AIAA 2012-3491), this paper considers the impact of alternative scaling approaches, including the design of technology assemblies and their individual components to maximum, nominal, survival, or other fractional requirements. The optimal level of life support system closure is evaluated for deep-space missions of varying duration using equivalent system mass (ESM) as the comparative basis. Reliability impacts are included in ESM by estimating the number of component spares required to meet a target system reliability. Common cause failures are included in the analysis. ISS and ISS-derived life support technologies are considered along with selected alternatives. This study focuses on minimizing launch mass, which may be enabling for deep-space missions.
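A common form of the ESM figure of merit sums hardware mass with penalty-weighted volume, power, cooling and crew time, with spares adding launch mass. The sketch below is an editor's illustration; the equivalency factors and example numbers are placeholders, not the study's mission-specific values.

```python
# Illustrative ESM tally for comparing life support options. All factors
# below are placeholders; real values depend on the mission architecture.
def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                           crewtime_hr, n_spares=0,
                           v_eq=66.7,   # kg per m^3 of volume (placeholder)
                           p_eq=87.0,   # kg per kW of power (placeholder)
                           c_eq=60.0,   # kg per kW of cooling (placeholder)
                           ct_eq=1.0):  # kg per crew-hour (placeholder)
    hardware = mass_kg * (1 + n_spares)   # spares add launch mass
    return (hardware + volume_m3 * v_eq + power_kw * p_eq
            + cooling_kw * c_eq + crewtime_hr * ct_eq)

# Example: a baseline unit versus one scaled to survival requirements.
baseline = equivalent_system_mass(120, 0.8, 1.5, 1.5, 40, n_spares=2)
survival = equivalent_system_mass(80, 0.5, 0.9, 0.9, 60, n_spares=3)
print(baseline, survival)
```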
Efficient Data Mining for Local Binary Pattern in Texture Image Analysis
Kwak, Jin Tae; Xu, Sheng; Wood, Bradford J.
2015-01-01
Local binary pattern (LBP) is a simple gray-scale descriptor to characterize the local distribution of the grey levels in an image. Multi-resolution LBP and/or combinations of the LBPs have been shown to be effective in texture image analysis. However, it is unclear what resolutions or combinations to choose for texture analysis. Examining all the possible cases is impractical and intractable due to the exponential growth in a feature space. This limits the accuracy and time- and space-efficiency of LBP. Here, we propose a data mining approach for LBP, which efficiently explores a high-dimensional feature space and finds a relatively smaller number of discriminative features. The features can be any combinations of LBPs. These may not be achievable with conventional approaches. Hence, our approach not only fully utilizes the capability of LBP but also maintains the low computational complexity. We incorporated three different descriptors (LBP, local contrast measure, and local directional derivative measure) with three spatial resolutions and evaluated our approach using two comprehensive texture databases. The results demonstrated the effectiveness and robustness of our approach to different experimental designs and texture images. PMID:25767332
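The basic 3×3 LBP that underlies these descriptors can be written compactly; this is a minimal sketch of the standard operator, not the paper's multi-resolution variants or its mining procedure.

```python
import numpy as np

def lbp_histogram(img):
    # threshold the 8 neighbors of each pixel at the center value, pack the
    # results into an 8-bit code, and histogram the codes as a texture feature
    img = img.astype(float)
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]  # fixed clockwise order
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()
```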
Edgeworth streaming model for redshift space distortions
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Kopp, Michael; Haugg, Thomas
2015-09-01
We derive the Edgeworth streaming model (ESM) for the redshift space correlation function starting from an arbitrary distribution function for biased tracers of dark matter by considering its two-point statistics and show that it reduces to the Gaussian streaming model (GSM) when neglecting non-Gaussianities. We test the accuracy of the GSM and ESM independent of perturbation theory using the Horizon Run 2 N-body halo catalog. While the monopole of the redshift space halo correlation function is well described by the GSM, higher multipoles improve upon including the leading order non-Gaussian correction in the ESM: the GSM quadrupole breaks down on scales below 30 Mpc/h whereas the ESM stays accurate to 2% within statistical errors down to 10 Mpc/h. To predict the scale-dependent functions entering the streaming model we employ convolution Lagrangian perturbation theory (CLPT) based on the dust model and local Lagrangian bias. Since dark matter halos carry an intrinsic length scale given by their Lagrangian radius, we extend CLPT to the coarse-grained dust model and consider two different smoothing approaches operating in Eulerian and Lagrangian space, respectively. The coarse graining in Eulerian space features modified fluid dynamics different from dust while the coarse graining in Lagrangian space is performed in the initial conditions with subsequent single-streaming dust dynamics, implemented by smoothing the initial power spectrum in the spirit of the truncated Zel'dovich approximation. Finally, we compare the predictions of the different coarse-grained models for the streaming model ingredients to N-body measurements and comment on the proper choice of both the tracer distribution function and the smoothing scale. Since the perturbative methods we considered are not yet accurate enough on small scales, the GSM is sufficient when applied to perturbation theory.
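When the non-Gaussian correction is dropped, the ESM reduces to the GSM, whose evaluation is a one-dimensional line-of-sight convolution. The sketch below shows that baseline integral; xi_r, v12 and sigma12 are hypothetical callables standing in for CLPT predictions or N-body measurements.

```python
import numpy as np

def xi_s(s_perp, s_par, xi_r, v12, sigma12):
    # GSM: convolve the real-space correlation along the line of sight with a
    # Gaussian whose mean and width come from the pairwise velocity statistics
    y = np.linspace(-100.0, 100.0, 4001)   # real-space LOS separation, Mpc/h
    r = np.sqrt(s_perp**2 + y**2)
    mu = y / np.maximum(r, 1e-12)
    var = sigma12(r, mu) ** 2
    kernel = np.exp(-(s_par - y - mu * v12(r)) ** 2 / (2.0 * var))
    kernel /= np.sqrt(2.0 * np.pi * var)
    return np.trapz((1.0 + xi_r(r)) * kernel, y) - 1.0
```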
Coherent Structures and Spectral Energy Transfer in Turbulent Plasma: A Space-Filter Approach.
Camporeale, E; Sorriso-Valvo, L; Califano, F; Retinò, A
2018-03-23
Plasma turbulence at scales of the order of the ion inertial length is mediated by several mechanisms, including linear wave damping, magnetic reconnection, the formation and dissipation of thin current sheets, and stochastic heating. It is now understood that the presence of localized coherent structures enhances the dissipation channels and the kinetic features of the plasma. However, no formal way of quantifying the relationship between scale-to-scale energy transfer and the presence of spatial structures has been presented so far. In this Letter we quantify such a relationship analyzing the results of a two-dimensional high-resolution Hall magnetohydrodynamic simulation. In particular, we employ the technique of space filtering to derive a spectral energy flux term which defines, in any point of the computational domain, the signed flux of spectral energy across a given wave number. The characterization of coherent structures is performed by means of a traditional two-dimensional wavelet transformation. By studying the correlation between the spectral energy flux and the wavelet amplitude, we demonstrate the strong relationship between scale-to-scale transfer and coherent structures. Furthermore, by conditioning one quantity with respect to the other, we are able for the first time to quantify the inhomogeneity of the turbulence cascade induced by topological structures in the magnetic field. Taking into account the low space-filling factor of coherent structures (i.e., they cover a small portion of space), it emerges that 80% of the spectral energy transfer (both in the direct and inverse cascade directions) is localized in about 50% of space, and 50% of the energy transfer is localized in only 25% of space.
NASA Astrophysics Data System (ADS)
Peng, D. J.; Wu, B.
2012-01-01
With the availability of precise GPS ephemeris and clock solutions, the ionospheric range delay is left as the dominant error source in the post-processing of space-borne GPS data from single-frequency receivers. Thus, the removal of ionospheric effects is a major prerequisite for an improved orbit reconstruction of LEO satellites equipped with low-cost single-frequency GPS receivers. In this paper, the use of Global Ionospheric Maps (GIM) in kinematic and dynamic orbit determination for LEO satellites with single-frequency GPS measurements is discussed first; then, estimating an ionospheric scale factor to remove the ionospheric effects in C/A code pseudo-range measurements in both kinematic and dynamic orbit determination approaches is addressed. Since the ionospheric path delay of space-borne GPS signals depends strongly on the orbit altitude of LEO satellites, we selected real space-borne GPS data from the CHAMP, GRACE, TerraSAR-X and SAC-C satellites, with altitudes between 300 km and 800 km, as sample data in this paper. It is demonstrated that the approach of eliminating ionospheric effects in space-borne C/A code pseudo-range by estimating an ionospheric scale factor is highly effective. Employing this approach, the accuracy of both kinematic and dynamic orbits can be improved notably. Among those five LEO satellites, CHAMP, with the lowest orbit altitude, has the most remarkable orbit accuracy improvements, which are 55.6% and 47.6% for the kinematic and dynamic approaches, respectively. SAC-C, with the highest orbit altitude, has the least orbit accuracy improvement accordingly, at 47.8% and 38.2%, respectively.
Affordable Options for Ground-Based, Large-Aperture Optical Space Surveillance Systems
NASA Astrophysics Data System (ADS)
Ackermann, M.; Beason, J. D.; Kiziah, R.; Spillar, E.; Vestrand, W. T.; Cox, D.; McGraw, J.; Zimmer, P.; Holland, C.
2013-09-01
The Space Surveillance Telescope (SST), developed by the Defense Advanced Research Projects Agency (DARPA), has demonstrated significant capability improvements over legacy ground-based optical space surveillance systems. To better fulfill current and future space situational awareness (SSA) requirements, the Air Force would benefit from a global network of such telescopes, but the high cost to replicate the SST makes such an acquisition decision difficult, particularly in an era of fiscal austerity. Ideally, the Air Force needs the capabilities provided by the SST, but at a more affordable price. To address this issue, an informal study considered a total of 67 alternative optical designs, with each being evaluated for cost, complexity and SSA performance. One promising approach identified in the study uses a single mirror at prime focus with a small number of corrective lenses. This approach results in telescopes that are less complex and estimated to be less expensive than replicated SSTs. They should also be acquirable on shorter time scales. Another approach would use a modest network of smaller telescopes for space surveillance. This approach provides significant cost advantages but faces some challenges with very dim objects. In this paper, we examine the cost and SSA utility for each of the 67 designs considered.
Estimating Ω from Galaxy Redshifts: Linear Flow Distortions and Nonlinear Clustering
NASA Astrophysics Data System (ADS)
Bromley, B. C.; Warren, M. S.; Zurek, W. H.
1997-02-01
We propose a method to determine the cosmic mass density Ω from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion σ_v. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter β = Ω^0.6/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of β to within ~10%. An analysis of IRAS 1.2 Jy galaxies yields β = 0.8 (+0.4, -0.3) at a scale of 1000 km s⁻¹, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of β derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the IRAS data.
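For orientation, in the purely linear Kaiser limit (with the finger-of-God contamination removed, which is what the paper's tuned sampling functions aim to achieve) the quadrupole-to-monopole ratio of the redshift-space power spectrum is a closed-form function of β and can be inverted numerically. This sketch shows only that textbook limit, not the paper's estimator.

```python
from scipy.optimize import brentq

def quad_mono_ratio(beta):
    # Kaiser limit: P_s(k, mu) = (1 + beta*mu^2)^2 P(k); multipole ratio
    # P2/P0 depends only on beta
    return (4 * beta / 3 + 4 * beta**2 / 7) / (1 + 2 * beta / 3 + beta**2 / 5)

def beta_from_ratio(ratio):
    # invert the monotonic ratio on a physically reasonable bracket
    return brentq(lambda b: quad_mono_ratio(b) - ratio, 1e-4, 3.0)
```

For example, beta_from_ratio(0.5) returns the β whose linear-theory quadrupole-to-monopole ratio equals 0.5.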
Transformational Systems Concepts and Technologies for Our Future in Space
NASA Technical Reports Server (NTRS)
Howell, J. T.; George,P.; Mankins, J. C. (Editor); Christensen, C. B.
2004-01-01
NASA is constantly searching for new ideas and approaches yielding opportunities for assuring maximum returns on space infrastructure investments. Perhaps the idea of transformational innovation in developing space systems is long overdue. However, the concept of utilizing modular space system designs combined with stepping-stone development processes has merit and promises to return several times the original investment, since each new space system or component is not treated as a unique and/or discrete design and development challenge. New space systems can be planned and designed so that each builds on the technology of previous systems and provides capabilities to support future advanced systems. Subsystems can be designed to use common modular components and achieve economies of scale, production, and operation. Standards, interoperability, and "plug and play" capabilities, when implemented vigorously and consistently, will result in systems that can be upgraded effectively with new technologies. This workshop explored many building-block approaches by way of example across a broad spectrum of technology discipline areas for potentially transforming space systems and inspiring future innovation. Details describing the workshop structure, process, and results are contained in this Conference Publication.
NASA Astrophysics Data System (ADS)
Prechtel, Alexander; Ray, Nadja; Rupp, Andreas
2017-04-01
We present an approach for the mathematical, mechanistic modeling and numerical treatment of processes leading to the formation, stability, and turnover of soil micro-aggregates. This aims at deterministic aggregation models including detailed mechanistic pore-scale descriptions to account for the interplay of geochemistry and microbiology, and the link to soil functions such as porosity. We therefore consider processes at the pore scale and the mesoscale (laboratory scale). At the pore scale, transport by diffusion, advection, and drift emerging from electric forces can be taken into account, in addition to homogeneous and heterogeneous reactions of species. In the context of soil micro-aggregates, the growth of biofilms or other gluing substances such as EPS (extracellular polymeric substances) is important and affects the structure of the pore space in space and time. This model is upscaled mathematically in the framework of (periodic) homogenization to transfer it to the mesoscale, resulting in effective coefficients/parameters there. This micro-macro model thus couples macroscopic equations that describe the transport and fluid flow at the scale of the porous medium (mesoscale) with averaged time- and space-dependent coefficient functions. These functions may be explicitly computed by means of auxiliary cell problems (microscale). Finally, the pore space in which the cell problems are defined is time- and space-dependent, and its geometry inherits information from the transport equation's solutions. The microscale problems rely on versatile combinations of cellular automata and discontinuous Galerkin methods, while on the mesoscale mixed finite elements are used. The numerical simulations allow the interplay between these processes to be studied.
Automatic rock detection for in situ spectroscopy applications on Mars
NASA Astrophysics Data System (ADS)
Mahapatra, Pooja; Foing, Bernard H.
A novel algorithm for rock detection has been developed for effectively utilising Mars rovers, and enabling autonomous selection of target rocks that require close-contact spectroscopic measurements. The algorithm demarcates small rocks in terrain images as seen by cameras on a Mars rover during traverse. This information may be used by the rover for selection of geologically relevant sample rocks, and (in conjunction with a rangefinder) to pick up target samples using a robotic arm for automatic in situ determination of rock composition and mineralogy using, for example, a Raman spectrometer. Determining rock samples within the region that are of specific interest without physically approaching them significantly reduces time, power and risk. Input images in colour are converted to greyscale for intensity analysis. Bilateral filtering is used for texture removal while preserving rock boundaries. Unsharp masking is used for contrast enhancement. Sharp contrasts in intensities are detected using Canny edge detection, with thresholds that are calculated from the image obtained after contrast-limited adaptive histogram equalisation of the unsharp masked image. Scale-space representations are then generated by convolving this image with a Gaussian kernel. A scale-invariant blob detector (Laplacian of the Gaussian, LoG) detects blobs independently of their sizes, and therefore requires a multi-scale approach with automatic scale selection. The scale-space blob detector consists of convolution of the Canny edge-detected image with a scale-normalised LoG at several scales, and finding the maxima of squared LoG response in scale-space. After the extraction of local intensity extrema, the intensity profiles along rays going out of the local extremum are investigated. An ellipse is fitted to the region determined by significant changes in the intensity profiles. The fitted ellipses are overlaid on the original Mars terrain image for a visual estimation of the rock detection accuracy, and the number of ellipses are counted. Since geometry and illumination have the least effect on small rocks, the proposed algorithm is effective in detecting small rocks (or bigger rocks at larger distances from the camera) that consist of a small fraction of image pixels. Acknowledgements: The first author would like to express her gratitude to the European Space Agency (ESA/ESTEC) and the International Lunar Exploration Working Group (ILEWG) for their support of this work.
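Multi-scale LoG blob detection of the kind described above is available off the shelf in scikit-image; a minimal sketch (omitting the bilateral filtering, unsharp masking, Canny step and ellipse fitting) might look as follows. The input filename and parameter values are hypothetical.

```python
import numpy as np
from skimage import io, color
from skimage.feature import blob_log

# load a terrain image and detect scale-space blobs (candidate rocks)
image = color.rgb2gray(io.imread("terrain.png"))   # hypothetical input file
blobs = blob_log(image, min_sigma=2, max_sigma=20, num_sigma=10,
                 threshold=0.05)
# each row is (row, col, sigma); the blob radius is roughly sigma * sqrt(2)
radii = blobs[:, 2] * np.sqrt(2)
print(f"detected {len(blobs)} candidate rocks")
```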
NASA Technical Reports Server (NTRS)
1989-01-01
The objective of the Space Transportation Booster Engine Configuration Study is to contribute to the ALS development effort by providing highly reliable, low cost booster engine concepts for both expendable and reusable rocket engines. The objectives of the Space Transportation Booster Engine (STBE) Configuration Study were: (1) to identify engine development configurations which enhance vehicle performance and provide operational flexibility at low cost; and (2) to explore innovative approaches to the follow-on Full-Scale Development (FSD) phase for the STBE.
NASA Technical Reports Server (NTRS)
1989-01-01
The objective of the Space Transportation Booster Engine (STBE) Configuration Study is to contribute to the Advanced Launch System (ALS) development effort by providing highly reliable, low cost booster engine concepts for both expendable and reusable rocket engines. The objectives of the space Transportation Booster Engine (STBE) Configuration Study were: (1) to identify engine configurations which enhance vehicle performance and provide operational flexibility at low cost, and (2) to explore innovative approaches to the follow-on Full-Scale Development (FSD) phase for the STBE.
The Physics of Boiling at Burnout
NASA Technical Reports Server (NTRS)
Theofanous, T. G.; Tu, J. P.; Dinh, T. N.; Salmassi, T.; Dinh, A. T.; Gasljevic, K.
2000-01-01
The basic elements of a new experimental approach for the investigation of burnout in pool boiling are presented. The approach consists of the combined use of ultrathin (nano-scale) heaters and high-speed infrared imaging of the heater temperature pattern as a whole, in conjunction with highly detailed control and characterization of heater morphology at the nano and micron scales. It is shown that the burnout phenomenon can be resolved in both space and time. Ultrathin heaters capable of dissipating power levels, at steady state, of over 1 MW/m² are demonstrated. A separation of scales is identified and used to transfer the focus of attention from the complexity of the two-phase mixing layer in the vicinity of the heater to a micron-scaled microlayer and the nucleation and associated film-disruption processes within it.
A unified framework for mesh refinement in random and physical space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jing; Stinis, Panos
In recent work we have shown how an accurate reduced model can be utilized to perform mesh refinement in random space. That work relied on the explicit knowledge of an accurate reduced model which is used to monitor the transfer of activity from the large to the small scales of the solution. Since this is not always available, we present in the current work a framework which shares the merits and basic idea of the previous approach but does not require an explicit knowledge of a reduced model. Moreover, the current framework can be applied for refinement in both random and physical space. In this manuscript we focus on the application to random space mesh refinement. We study examples of increasing difficulty (from ordinary to partial differential equations) which demonstrate the efficiency and versatility of our approach. We also provide some results from the application of the new framework to physical space mesh refinement.
NASA Astrophysics Data System (ADS)
Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.
2008-12-01
Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal features. In scale-space, a unique fractal dimension cannot be defined, and the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall distance. Our aim here is to propose a general geometrical framework that makes it possible to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we introduce a general scale-entropy diffusion equation. We define the notion of "scale-evolutivity", which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant scale-evolutivity over the scale range is studied. In this case, called "parabolic scaling", the fractal dimension is a linear function of the logarithm of scale. The case of a constant scale-evolutivity in wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall distance. We then verified experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.
Wavefront Sensing and Control Technology for Submillimeter and Far-Infrared Space Telescopes
NASA Technical Reports Server (NTRS)
Redding, Dave
2004-01-01
The NGST wavefront sensing and control system will be developed to TRL6 over the next few years, including testing in a cryogenic vacuum environment with traceable hardware. Doing this in the far-infrared and submillimeter is probably easier, as some aspects of the problem scale with wavelength, and the telescope is likely to have a more stable environment; however, detectors may present small complications. Since this is a new system approach, it warrants a new look. For instance, a large space telescope based on the DART membrane mirror design requires a new actuation approach. Other mirror and actuation technologies may prove useful as well.
Some intriguing aspects of multiparticle production processes
NASA Astrophysics Data System (ADS)
Wilk, Grzegorz; Włodarczyk, Zbigniew
2018-04-01
Multiparticle production processes provide valuable information about the mechanism of conversion of the initial energy of projectiles into a number of secondaries by measuring their multiplicity distributions and their distributions in phase space. They therefore serve as a reference point for more involved measurements. Distributions in phase space are usually investigated using the statistical approach, which is very successful in general but fails for small colliding systems, small multiplicities, and at the edges of the allowed phase space, where underlying dynamical effects competing with the statistical distributions take over. We discuss an alternative approach, which applies to the whole phase space without detailed knowledge of the dynamics. It is based on a modification of the usual statistics by generalizing it to a superstatistical form. We particularly stress the scaling and self-similar properties of such an approach, which manifest themselves as log-periodic oscillations and as oscillations of temperature caused by sound waves in hadronic matter. Concerning the multiplicity distributions, we discuss in detail the oscillatory behavior of the modified combinants apparently observed in experimental data.
Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Jennings, Esther
2013-01-01
In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
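A toy version of the top-down sizing idea: each latency class imposes a bandwidth floor (its burst volume divided by its delivery deadline), and the link must also sustain the total daily volume. All figures below are invented placeholders, not SCaN traffic numbers.

```python
def size_link_mbps(classes):
    """classes: list of (burst_volume_Mb, deadline_s, daily_volume_Mb)."""
    # each class must clear its burst within its deadline
    burst_floor = sum(burst / deadline for burst, deadline, _ in classes)
    # the link must also carry the total daily volume on average
    sustained_floor = sum(daily for _, _, daily in classes) / 86400.0
    return max(burst_floor, sustained_floor)

demands = [(600.0, 3600.0, 40_000.0),   # bulk science data, 1 h deadline
           (50.0, 60.0, 2_000.0),       # housekeeping telemetry, 1 min
           (5.0, 5.0, 100.0)]           # critical alerts, 5 s
print(f"required WAN bandwidth: {size_link_mbps(demands):.2f} Mb/s")
```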
International Cooperation of Space Science and Application in Chinese Manned Space Program
NASA Astrophysics Data System (ADS)
Gao, Ming; Guo, Jiong; Yang, Yang
Early in the China Manned Space Program, many space science and application projects were carried out utilizing the SZ series manned spaceships and the TG-1 spacelab, and remarkable achievements were attained with the efforts of international partners. Around 2020, China is going to build its space station and carry out space science and application research on a larger scale. Along with the scientific utilization plan for the Chinese space station, experiment facilities are being considered especially for international scientific cooperation, and preparations for managing international cooperation projects are being made as well. This paper briefly reviews the history and achievements of international scientific cooperation in the previous missions of the China Manned Space Program. The general resources and facilities that will support potential cooperation projects are then presented. Finally, the international cooperation modes and approaches for utilizing the Chinese Space Station are discussed.
A controls engineering approach for analyzing airplane input-output characteristics
NASA Technical Reports Server (NTRS)
Arbuckle, P. Douglas
1991-01-01
An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
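The modal analysis technique can be sketched in a few lines: diagonalize the (already scaled) state matrix and read off each input's influence on each mode and each mode's distribution among the outputs. This is an editor's illustration; the paper's specific scaling transformations are assumed to have been applied beforehand.

```python
import numpy as np

def modal_influence(A, B, C):
    # modes of the linear model xdot = A x + B u, y = C x
    eigvals, V = np.linalg.eig(A)
    # rows of V^-1 B: how strongly each input excites each mode
    mode_excitation = np.abs(np.linalg.solve(V, B))
    # columns of C V: how each mode is distributed among the outputs
    mode_distribution = np.abs(C @ V)
    return eigvals, mode_excitation, mode_distribution
```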
Conceptual design and analysis of a dynamic scale model of the Space Station Freedom
NASA Technical Reports Server (NTRS)
Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.
1994-01-01
This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluating the full-scale design and analysis databases, conducting scale-factor trade studies, and performing design sensitivity studies. The scale-factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive the design, performance and cost of a SSF dynamic scale model. Four scale-model options were evaluated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, the 1/4 scale is recommended because of the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower-fidelity dynamic similarity scaling can be used for non-critical components.
On the wavelet optimized finite difference method
NASA Technical Reports Server (NTRS)
Jameson, Leland
1994-01-01
When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.
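A sketch of the coefficient-to-grid mapping the method rests on: flag regions whose Daubechies detail coefficients are large as candidates for refinement. The threshold and the approximate coefficient-to-position mapping (boundary padding ignored) are illustrative choices, not the paper's algorithm.

```python
import numpy as np
import pywt

def refinement_flags(u, wavelet="db4", level=4, rel_tol=0.05):
    # detail coefficients of a 1D field, coarsest band first
    coeffs = pywt.wavedec(u, wavelet, level=level)
    flags = np.zeros(len(u), dtype=bool)
    tol = rel_tol * max(np.abs(d).max() for d in coeffs[1:])
    for d in coeffs[1:]:
        stride = max(1, len(u) // len(d))   # approximate spacing in x
        for i in np.nonzero(np.abs(d) > tol)[0]:
            # a large coefficient marks small-scale structure: refine nearby
            flags[i * stride:(i + 1) * stride] = True
    return flags
```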
Space station group activities habitability module study
NASA Technical Reports Server (NTRS)
Nixon, David
1986-01-01
This study explores and analyzes architectural design approaches for the interior of the Space Station Habitability Module (originally defined as Habitability Module 1 in Space Station Reference Configuration Description, JSC-19989, August 1984). In the Research Phase, architectural program and habitability design guidelines are specified. In the Schematic Design Phase, a range of alternative concepts is described and illustrated with drawings, scale-model photographs and design analysis evaluations. Recommendations are presented on the internal architectural configuration of the Space Station Habitability Module for such functions as the wardroom, galley, exercise facility, library and station control work station. The models show full design configurations for on-orbit performance.
A phenomenological description of space-time noise in quantum gravity.
Amelino-Camelia, G
2001-04-26
Space-time 'foam' is a geometric picture of the smallest size scales in the Universe, which is characterized mainly by the presence of quantum uncertainties in the measurement of distances. All quantum-gravity theories should predict some kind of foam, but the description of the properties of this foam varies according to the theory, thereby providing a possible means of distinguishing between such theories. I previously showed that foam-induced distance fluctuations would introduce a new source of noise to the measurements of gravity-wave interferometers, but the theories are insufficiently developed to permit detailed predictions that would be of use to experimentalists. Here I propose a phenomenological approach that directly describes space-time foam, and which leads naturally to a picture of distance fluctuations that is independent of the details of the interferometer. The only unknown in the model is the length scale that sets the overall magnitude of the effect, but recent data already rule out the possibility that this length scale could be identified with the 'string length' (10⁻³⁴ m < Ls < 10⁻³³ m). Length scales even smaller than the 'Planck length' (LP ≈ 10⁻³⁵ m) will soon be probed experimentally.
Marginal space learning for efficient detection of 2D/3D anatomical structures in medical images.
Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin
2009-01-01
Recently, marginal space learning (MSL) was proposed as a generic approach for automatic detection of 3D anatomical structures in many medical imaging modalities [1]. To accurately localize a 3D object, we need to estimate nine pose parameters (three for position, three for orientation, and three for anisotropic scaling). Instead of exhaustively searching the original nine-dimensional pose parameter space, only low-dimensional marginal spaces are searched in MSL to improve the detection speed. In this paper, we apply MSL to 2D object detection and perform a thorough comparison between MSL and the alternative full space learning (FSL) approach. Experiments on left ventricle detection in 2D MRI images show MSL outperforms FSL in both speed and accuracy. In addition, we propose two novel techniques, constrained MSL and nonrigid MSL, to further improve the efficiency and accuracy. In many real applications, a strong correlation may exist among pose parameters in the same marginal spaces. For example, a large object may have large scaling values along all directions. Constrained MSL exploits this correlation for further speed-up. The original MSL only estimates the rigid transformation of an object in the image, therefore cannot accurately localize a nonrigid object under a large deformation. The proposed nonrigid MSL directly estimates the nonrigid deformation parameters to improve the localization accuracy. The comparison experiments on liver detection in 226 abdominal CT volumes demonstrate the effectiveness of the proposed methods. Our system takes less than a second to accurately detect the liver in a volume.
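The marginal-space search itself has a simple skeleton: score the low-dimensional position space exhaustively, keep only the best candidates, then extend each with the remaining pose parameters. In this editor's sketch, score_position and score_pose are hypothetical stubs standing in for the trained detectors.

```python
import itertools

def msl_detect(image, positions, scales, score_position, score_pose, keep=100):
    # stage 1: search only the low-dimensional position space
    stage1 = sorted(positions, key=lambda p: -score_position(image, p))[:keep]
    # stage 2: extend surviving candidates with scale (orientation would follow
    # in the full nine-parameter pipeline)
    stage2 = itertools.product(stage1, scales)
    return max(stage2, key=lambda ps: score_pose(image, *ps))
```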
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings is up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
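The randomized SVD at the heart of such compression can be sketched directly: sample the dictionary's range with a Gaussian test matrix, orthonormalize, and take the SVD of the small projected matrix. The rank and oversampling below are illustrative choices, not the paper's settings.

```python
import numpy as np

def randomized_svd(D, rank, oversample=10, seed=0):
    # D: MRF dictionary, atoms x timepoints
    rng = np.random.default_rng(seed)
    Y = D @ rng.standard_normal((D.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(Y)                  # orthonormal basis for range(D)
    U_small, s, Vt = np.linalg.svd(Q.T @ D, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# Compression: project dictionary atoms (and measured signals) onto the
# rank-r time-domain subspace Vt before matching, e.g. D_small = D @ Vt.T,
# reducing atoms x timepoints storage to atoms x rank.
```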
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu Cenke
In this paper, we calculate the entanglement Renyi entropy of two coupled gapless systems in general spatial dimension d. The gapless systems can be either conformal field theories or Fermi liquids. We assume the two systems are coupled uniformly in an h-dimensional submanifold of the space, with 0 ≤ h ≤ d. We will focus on the scaling of the Renyi entropy with the size of the system, and its scaling with the intersystem coupling constant g. Three approaches will be used for our calculation: (1) exact calculation with ground-state wave functional, (2) perturbative calculation with functional path integral, and (3) scaling argument.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertacca, Daniele; Maartens, Roy; Raccanelli, Alvise
We extend previous analyses of wide-angle correlations in the galaxy power spectrum in redshift space to include all general relativistic effects. These general relativistic corrections to the standard approach become important on large scales and at high redshifts, and they lead to new terms in the wide-angle correlations. We show that in principle the new terms can produce corrections of nearly 10% on Gpc scales over the usual Newtonian approximation. General relativistic corrections will be important for future large-volume surveys such as SKA and Euclid, although the problem of cosmic variance will present a challenge in observing this.
Berti, Claudio; Gillespie, Dirk; Eisenberg, Robert S; Fiegna, Claudio
2012-02-16
The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite differences/finite elements methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al. and the Induced Charge Computation introduced by Boda et al. PMID:22338640
NASA Technical Reports Server (NTRS)
Blackwell, William C., Jr.
2004-01-01
In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.
A Subjective Test of Modulated Blade Spacing for Helicopter Main Rotors
NASA Technical Reports Server (NTRS)
Sullivan, Brenda M.; Edwards, Bryan D.; Brentner, Kenneth S.; Booth, Earl R., Jr.
2002-01-01
Analytically, uneven (modulated) spacing of main rotor blades was found to reduce helicopter noise. A study was performed to see if these reductions translated into improvements in subjective response. Using a predictive computer code, the sounds produced by six main rotor configurations were predicted: four blades evenly spaced, five blades evenly spaced, and four configurations of five blades with modulated spacing of varying amounts. These predictions were converted to audible sounds corresponding to the level flyover, takeoff and approach flight conditions. Subjects who heard the simulations were asked to assess the overflight sounds in terms of noisiness on a scale of 0 to 10. In general, the evenly spaced configurations were found less noisy than the modulated spacings, possibly because the uneven spacings produced a perceptible pulsating sound due to the very low fundamental frequency.
Space resources. Volume 4: Social concerns
NASA Technical Reports Server (NTRS)
Mckay, Mary Fae (Editor); Mckay, David S. (Editor); Duke, Michael B. (Editor)
1992-01-01
Space resources must be used to support life on the Moon and exploration of Mars. This volume, Social Concerns, covers some of the most important issues which must be addressed in any major program for the human exploration of space. The volume begins with a consideration of the economics and management of large scale space activities. Then the legal aspects of these activities are discussed, particularly the interpretation of treaty law with respect to the Moon and asteroids. The social and cultural issues of moving people into space are considered in detail, and the eventual emergence of a space culture different from the existing culture is envisioned. The environmental issues raised by the development of space settlements are faced. Some innovative approaches to space communities and habitats are proposed, and self-sufficiency is considered along with human safety at a lunar base or outpost.
NASA Astrophysics Data System (ADS)
Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan
Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed, and this approach is generally effective for measuring the time irreversibility of time series. However, its result may be unreliable when dealing with high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve this visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate of the irreversibility of time series and is more effective at distinguishing irreversible from reversible stochastic processes. We also use this approach to extract multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing the time of financial crisis from the plateau. In addition, the separation of Asian stock indexes from the other indexes is clearly visible at higher time scales. Simulations and real data support the effectiveness of the improved approach in detecting time irreversibility.
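To make the base method concrete, here is a minimal sketch of visibility-graph irreversibility testing (the basic approach being improved above, not the authors' phase-space variant): build the directed horizontal visibility graph of a series and score irreversibility as the Kullback-Leibler divergence between the forward (out-) and backward (in-) degree distributions. All names and parameters are illustrative.

    import numpy as np

    def hvg_degrees(x):
        # Directed horizontal visibility graph: i sees j > i when every
        # intermediate value lies below min(x[i], x[j]). Out-degrees count
        # forward links, in-degrees backward links.
        n = len(x)
        kout, kin = np.zeros(n, dtype=int), np.zeros(n, dtype=int)
        for i in range(n):
            cap = -np.inf                    # running max of intermediates
            for j in range(i + 1, n):
                if x[j] > cap:               # j visible from i
                    kout[i] += 1
                    kin[j] += 1
                cap = max(cap, x[j])
                if cap >= x[i]:              # view blocked beyond here
                    break
        return kout, kin

    def kld_irreversibility(x, kmax=20):
        # KL divergence of forward vs backward degree histograms
        # (degrees above kmax are truncated): ~0 for reversible series,
        # clearly positive for irreversible ones.
        kout, kin = hvg_degrees(x)
        p = np.bincount(kout, minlength=kmax)[:kmax] + 1e-12
        q = np.bincount(kin, minlength=kmax)[:kmax] + 1e-12
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    rng = np.random.default_rng(0)
    print(kld_irreversibility(rng.normal(size=2000)))   # white noise: ~0
    x = np.empty(2000); x[0] = 0.4
    for i in range(1999):                    # logistic map: irreversible
        x[i + 1] = 4 * x[i] * (1 - x[i])
    print(kld_irreversibility(x))            # > 0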
A Multi-scale Cognitive Approach to Intrusion Detection and Response
2015-12-28
the behavior of the traffic on the network, either by using mathematical formulas or by replaying packet streams. As a result, simulators depend... large scale. Summary of the most important results: We obtained a powerful machine, which has 768 cores and 1.25 TB memory. RBG has been... time. Each client is configured with 1 GB memory, 10 GB disk space, and one 100M Ethernet interface. The server nodes include web servers
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2018-03-01
A novel reduced-scaling, general-order coupled-cluster approach is formulated by exploiting hierarchical representations of many-body tensors, combined with the recently suggested formalism of scale-adaptive tensor algebra. Inspired by the hierarchical techniques from the renormalisation group approach, H/H²-matrix algebra and the fast multipole method, the computational scaling reduction in our formalism is achieved via coarsening of quantum many-body interactions at larger interaction scales, thus imposing a hierarchical structure on many-body tensors of coupled-cluster theory. In our approach, the interaction scale can be defined on any appropriate Euclidean domain (spatial domain, momentum-space domain, energy domain, etc.). We show that the hierarchically resolved many-body tensors can reduce the storage requirements to O(N), where N is the number of simulated quantum particles. Subsequently, we prove that any connected many-body diagram consisting of a finite number of arbitrary-order tensors, e.g. an arbitrary coupled-cluster diagram, can be evaluated in O(N log N) floating-point operations. On top of that, we suggest an additional approximation to further reduce the computational complexity of higher-order coupled-cluster equations, i.e. equations involving higher than double excitations, which would otherwise introduce a large prefactor into the formal O(N log N) scaling.
The Griffiss Institute Summer Faculty Program
2013-05-01
can inherit the advantages of the static approach while overcoming its drawbacks. Our solution is centered on the following: (i) application-layer web... inverted pendulum balancing problem. In these challenging environments we show that our algorithm not only allows NEAT to scale to high-dimensional spaces
Dutta, Achintya Kumar; Vaval, Nayana; Pal, Sourav
2015-01-28
We propose a new, elegant strategy to implement the third order triples correction, in the light of many-body perturbation theory, to the Fock space multi-reference coupled cluster method for the ionization problem. The computational scaling as well as the storage requirement are key concerns in any many-body calculation. Our proposed approach scales as N^6, does not require the storage of triples amplitudes, and gives superior agreement over all previous attempts. This approach is capable of calculating multiple roots in a single calculation, in contrast to the inclusion of perturbative triples in the equation-of-motion variant of coupled cluster theory, where each root needs to be computed in a state-specific way and requires both the left and right state vectors together. The performance of the newly implemented scheme is tested by applying it to methylene, the boron nitride (B2N) anion, nitrogen, water, carbon monoxide, acetylene, formaldehyde, and the thymine monomer, a DNA base.
NASA Astrophysics Data System (ADS)
Sarkar, Debojit
2018-02-01
An energy independent scaling of the near-side ridge yield at a given multiplicity has been observed by the ATLAS and CMS collaborations in p+p collisions at √s = 7 and 13 TeV. Such a striking feature of the data can be successfully explained by approaches based on initial-state momentum-space correlations generated by gluon saturation. In this paper, we examine whether such a scaling is also an inherent feature of approaches that employ strong final-state interaction in p+p collisions. We find that hydrodynamical modeling of p+p collisions using EPOS 3 shows a violation of such scaling. The current study can, therefore, provide important new insights on the origin of long-range azimuthal correlations in high-multiplicity p+p collisions at LHC energies.
A space-time multiscale modelling of Earth's gravity field variations
NASA Astrophysics Data System (ADS)
Wang, Shuo; Panet, Isabelle; Ramillien, Guillaume; Guilloux, Frédéric
2017-04-01
The mass distribution within the Earth varies over a wide range of spatial and temporal scales, generating variations in the Earth's gravity field in space and time. These variations are monitored by satellites such as the GRACE mission, with a 400 km spatial resolution and a 10 day to 1 month temporal resolution. They are expressed in the form of gravity field models, often with a fixed spatial or temporal resolution. The analysis of these models allows us to study mass transfers within the Earth system. Here, we have developed space-time multi-scale models of the gravity field, in order to optimize the estimation of gravity signals resulting from local processes at different spatial and temporal scales, and to adapt the time resolution of the model to its spatial resolution according to the satellites' sampling. For that, we first build a 4D wavelet family combining spatial Poisson wavelets with temporal Haar wavelets. Then, we set up a regularized inversion of inter-satellite gravity potential differences in a Bayesian framework to estimate the model parameters. To build the prior, we develop a spectral analysis, localized in time and space, of geophysical models of mass transport and associated gravity variations. Finally, we apply our approach to the reconstruction of space-time variations of the gravity field due to hydrology. We first consider a global distribution of observations along the orbit, from a simplified synthetic hydrology signal comprising only annual variations at large spatial scales. Then, we consider a regional distribution of observations in Africa, and a larger number of spatial and temporal scales. We test the influence of an imperfect prior and discuss our results.
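As a rough 1D analogue of this modelling strategy (not the authors' 4D Poisson-Haar construction), the sketch below assembles a space-time dictionary from Gaussian spatial bumps, standing in for the Poisson wavelets, and Haar temporal wavelets, then fits noisy data by ridge-regularized least squares, the simplest form of the Bayesian inversion described above. All sizes and names are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    xs = np.linspace(0.0, 1.0, 40)              # spatial samples
    ts = np.linspace(0.0, 1.0, 30)              # temporal samples

    def haar(t, a, b):                          # Haar wavelet, scale a, shift b
        u = (t - b) / a
        return (np.where((u >= 0) & (u < 0.5), 1.0, 0.0)
                - np.where((u >= 0.5) & (u < 1.0), 1.0, 0.0))

    atoms = []
    for sx in (0.05, 0.2):                      # two spatial scales
        for cx in np.linspace(0, 1, 8):
            g = np.exp(-0.5 * ((xs - cx) / sx) ** 2)
            for at in (0.25, 0.5):              # two temporal scales
                for bt in np.arange(0.0, 1.0, at):
                    atoms.append(np.outer(g, haar(ts, at, bt)).ravel())
    A = np.array(atoms).T                       # (n_data, n_atoms) design

    truth = A @ (rng.random(A.shape[1]) < 0.05)     # sparse synthetic signal
    data = truth + 0.05 * rng.normal(size=truth.size)

    lam = 0.1                                   # prior weight (regularization)
    coef = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ data)
    print("residual RMS:", np.sqrt(np.mean((A @ coef - data) ** 2)))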
Flight Test Approach to Adaptive Control Research
NASA Technical Reports Server (NTRS)
Pavlock, Kate Maureen; Less, James L.; Larson, David Nils
2011-01-01
The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach, and lessons learned of adaptive controls research.
2016-07-01
characteristics and to examine the sensitivity of using such techniques for evaluating microstructure. In addition to the GUI tool, a manual describing its use has... Evaluating Local Primary Dendrite Arm Spacing Characterization Techniques Using Synthetic Directionally Solidified Dendritic Microstructures, Metallurgical and... driven approach for quantifying materials uncertainty in creep deformation and failure of aerospace materials, Multi-scale Structural Mechanics and
NASA Astrophysics Data System (ADS)
Ruiz Ruiz, Juan; Guttenfelder, Walter; Loureiro, Nuno; Ren, Yang; White, Anne; MIT/PPPL Collaboration
2017-10-01
Turbulent fluctuations on the electron gyro-radius length scale are thought to cause anomalous transport of electron energy in spherical tokamaks such as NSTX and MAST in some parametric regimes. In NSTX, electron-scale turbulence is studied through a combination of experimental measurements from a high-k scattering system and gyrokinetic simulations. Until now most comparisons between experiment and simulation of electron scale turbulence have been qualitative, with recent work expanding to more quantitative comparisons via synthetic diagnostic development. In this new work, we propose two alternate, complementary ways to perform a synthetic diagnostic using the gyrokinetic code GYRO. The first approach builds on previous work and is based on the traditional selection of wavenumbers using a wavenumber filter, for which a new wavenumber mapping was implemented for general axisymmetric geometry. A second alternate approach selects wavenumbers in real-space to compute the power spectra. These approaches are complementary, and recent results from both synthetic diagnostic approaches applied to NSTX plasmas will be presented. Work supported by U.S. DOE contracts DE-AC02-09CH11466 and DE-AC02-05CH11231.
Weather and climate needs for lidar observations from space and concepts for their realization
NASA Technical Reports Server (NTRS)
Atlas, D.; Korb, C. L.
1981-01-01
The spectrum of weather and climate needs for lidar observations from space is discussed. This paper focuses mainly on the requirements for winds, temperature, moisture, and pressure. Special emphasis is given to the need for wind observations, and it is shown that winds are required to depict realistically all atmospheric scales in the tropics and the smaller scales at higher latitudes, where both temperature and wind profiles are necessary. The need for means to estimate air-sea exchanges of sensible and latent heat also is noted. Lidar can aid here by measurement of the slope of the boundary layer. Recent theoretical feasibility studies concerning the profiling of temperature, pressure, and humidity by differential absorption lidar (DIAL) from space and expected accuracies are reviewed. Initial ground-based trials provide support for these approaches and also indicate their direct applicability to path-average temperature measurements near the surface. An alternative approach to Doppler lidar wind measurements also is presented. The concept involves the measurement of the displacement of the aerosol backscatter pattern, at constant height, between two successive scans of the same area, one ahead of the spacecraft and the other behind it, a few minutes later. Finally, an integrated space lidar system capable of measuring temperature, pressure, humidity, and winds which combines the DIAL methods with the aerosol pattern displacement concept is described briefly.
Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei
Dytrych, T.; Maris, P.; Launey, K. D.; ...
2016-06-22
We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly-parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.
NASA Astrophysics Data System (ADS)
Wang, Xianmin; Li, Bo; Xu, Qizhi
2016-07-01
The anisotropic scale space (ASS) is often used to enhance the performance of the scale-invariant feature transform (SIFT) algorithm in the registration of synthetic aperture radar (SAR) images. Existing ASS-based methods usually suffer from unstable keypoints and false matches, since anisotropic diffusion filtering has limitations in reducing the speckle noise of SAR images while building the ASS image representation. We propose a speckle reducing SIFT match method to obtain stable keypoints and acquire precise matches for SAR image registration. First, keypoints are detected in a speckle reducing anisotropic scale space constructed by speckle reducing anisotropic diffusion, so that speckle noise is greatly reduced and prominent structures of the images are preserved; consequently, stable keypoints can be derived. Next, the probabilistic relaxation labeling approach is employed to establish matches between the keypoints, significantly increasing the correct match rate. Experiments conducted on simulated speckled images and real SAR images demonstrate the effectiveness of the proposed method.
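For orientation, the loop below is a generic Perona-Malik-style anisotropic diffusion, not the speckle reducing (SRAD) variant the authors use; it only illustrates how an anisotropic scale space is built by repeated edge-preserving smoothing. All parameters are illustrative.

    import numpy as np

    def anisotropic_diffusion(img, n_iter=20, kappa=0.1, dt=0.2):
        # Edge-preserving smoothing: diffusivity g(d) shrinks where the
        # local gradient d is large, so structures survive while noise fades.
        u = img.astype(float).copy()
        g = lambda d: np.exp(-(d / kappa) ** 2)
        for _ in range(n_iter):
            dn = np.roll(u, 1, 0) - u           # differences to 4 neighbours
            ds = np.roll(u, -1, 0) - u          # (np.roll wraps the borders,
            de = np.roll(u, 1, 1) - u           #  acceptable for a sketch)
            dw = np.roll(u, -1, 1) - u
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    img = np.random.rayleigh(size=(64, 64))     # speckle-like test image
    scale_space, u = [], img
    for level in range(4):                      # coarser level = more diffusion
        u = anisotropic_diffusion(u, n_iter=10)
        scale_space.append(u)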
Percolation in random-Sierpiński carpets: A real space renormalization group approach
NASA Astrophysics Data System (ADS)
Perreau, Michel; Peiro, Joaquina; Berthier, Serge
1996-11-01
The site percolation transition in random Sierpiński carpets is investigated by real space renormalization. Unlike in regular, translationally invariant lattices, the fixed point is not unique: it depends on the number k of segmentation steps in the generation process of the fractal. It is shown that, for each scale invariance ratio n, the sequence of fixed points pn,k increases with k and converges as k → ∞ toward a limit pn strictly less than 1. Moreover, in such scale invariant structures, the percolation threshold depends not only on the scale invariance ratio n, but also on the scale. The sequences pn,k and the limits pn are calculated for n = 4, 8, 16, 32, and 64, and for k = 1 to k = 11, and k = ∞. The corresponding thermal exponent sequence νn,k is calculated for n = 8 and 16, and for k = 1 to k = 5, and k = ∞. Suggestions are made for an experimental test in physical self-similar structures.
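The machinery can be made concrete with the textbook b = 2 real-space renormalization cell for site percolation on the square lattice, a warm-up rather than the k-dependent carpet maps above. A 2x2 cell "spans" vertically for the fully occupied configuration, all four three-site configurations, and the two full-column two-site configurations, which simplifies to R(p) = 2p^2 - p^4; the nontrivial fixed point and thermal exponent then follow numerically.

    import numpy as np
    from scipy.optimize import brentq

    # R(p) = p^4 + 4 p^3 (1-p) + 2 p^2 (1-p)^2 = 2 p^2 - p^4
    R = lambda p: 2 * p**2 - p**4
    dR = lambda p: 4 * p - 4 * p**3

    p_star = brentq(lambda p: R(p) - p, 0.1, 0.9)   # nontrivial fixed point
    nu = np.log(2.0) / np.log(dR(p_star))           # nu = ln b / ln R'(p*)
    print(p_star, nu)   # ~0.618 and ~1.635 (exact 2D values: 0.5927, 4/3)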
Gravitational wave as probe of superfluid dark matter
NASA Astrophysics Data System (ADS)
Cai, Rong-Gen; Liu, Tong-Bo; Wang, Shao-Jiang
2018-02-01
In recent years, superfluid dark matter (SfDM) has become a competitive model for an emergent modified Newtonian dynamics (MOND) scenario: MOND phenomenology naturally emerges as a derived concept due to an extra force mediated between baryons by phonons, as a result of axionlike particles condensed as a superfluid at galactic scales. Beyond galactic scales, these axionlike particles behave as a normal fluid without the phonon-mediated MOND-like force between baryons, so SfDM also maintains the usual success of ΛCDM at cosmological scales. In this paper, we use gravitational waves (GWs) to probe the relevant parameter space of SfDM. GWs propagating through the Bose-Einstein condensate (BEC) could travel with a speed slightly deviating from the speed of light due to the change in the effective refractive index, which depends on the SfDM parameters and GW-source properties. We find that the Five-hundred-meter Aperture Spherical Telescope (FAST), the Square Kilometre Array (SKA) and the International Pulsar Timing Array (IPTA) are the most promising means of probing the relevant parameter space of SfDM. Future space-based GW detectors are also capable of probing SfDM if a multimessenger approach is adopted.
NASA Astrophysics Data System (ADS)
Kähler, Sven; Olsen, Jeppe
2017-11-01
A computational method is presented for systems that require high-level treatments of static and dynamic electron correlation but cannot be treated using conventional complete active space self-consistent field-based methods due to the required size of the active space. Our method introduces an efficient algorithm for perturbative dynamic correlation corrections for compact non-orthogonal MCSCF calculations. In the algorithm, biorthonormal expansions of orbitals and CI wave functions are used to reduce the scaling of the performance-determining step from quadratic to linear in the number of configurations. We describe a hierarchy of configuration spaces that can be chosen for the active space. Potential curves for the nitrogen molecule and the chromium dimer are compared for different configuration spaces. Even the most compact spaces yield qualitatively correct potentials that systematically approach complete active space results as the size of the configuration space is increased.
An Overview of Quantitative Risk Assessment of Space Shuttle Propulsion Elements
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.
1998-01-01
Since the Space Shuttle Challenger accident in 1986, NASA has been working to incorporate quantitative risk assessment (QRA) in decisions concerning the Space Shuttle and other NASA projects. One current major NASA QRA study is the creation of a risk model for the overall Space Shuttle system. The model is intended to provide a tool to estimate Space Shuttle risk and to perform sensitivity analyses/trade studies, including the evaluation of upgrades. Marshall Space Flight Center (MSFC) is a part of the NASA team conducting the QRA study; MSFC responsibility involves modeling the propulsion elements of the Space Shuttle, namely: the External Tank (ET), the Solid Rocket Booster (SRB), the Reusable Solid Rocket Motor (RSRM), and the Space Shuttle Main Engine (SSME). This paper discusses the approach that MSFC has used to model its Space Shuttle elements, including insights obtained from this experience in modeling large-scale, highly complex systems with varying availability of success/failure data. Insights, which are applicable to any QRA study, pertain to organizing the modeling effort, obtaining customer buy-in, preparing documentation, and using varied modeling methods and data sources. Also provided is an overall evaluation of the study results, including the strengths and limitations of the MSFC QRA approach and of QRA technology in general.
Spatial and Temporal Scaling of Thermal Infrared Remote Sensing Data
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Goel, Narendra S.
1995-01-01
Although remote sensing has a central role to play in the acquisition of synoptic data obtained at multiple spatial and temporal scales to facilitate our understanding of local and regional processes as they influence the global climate, the use of thermal infrared (TIR) remote sensing data in this capacity has received only minimal attention. This results from some fundamental challenges associated with employing TIR data collected at different space and time scales, either with the same or different sensing systems, and also from other problems that arise in applying a multiple-scaled approach to the measurement of surface temperatures. In this paper, we describe some of the more important problems associated with using TIR remote sensing data obtained at different spatial and temporal scales, examine why these problems appear as impediments to using multiple-scaled TIR data, and provide some suggestions for future research activities that may address these problems. We elucidate the fundamental concept of scale as it relates to remote sensing and explore how space and time relationships affect TIR data from a problem-dependency perspective. We also describe how linear and non-linear relationships between observations and parameters affect the quantitative analysis of TIR data. Some insight is given on how the atmosphere between target and sensor influences the accurate measurement of surface temperatures and how these effects are compounded in analyzing multiple-scaled TIR data. Last, we describe some of the challenges in modeling TIR data obtained at different space and time scales and discuss how multiple-scaled TIR data can be used to provide new and important information for measuring and modeling land-atmosphere energy balance processes.
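One concrete instance of the non-linearity issue raised above is aggregation of emitted radiance, which scales as T^4 (Stefan-Boltzmann): averaging pixel temperatures before converting to radiance is not the same as averaging radiances, so coarse TIR pixels carry a systematic bias. The pixel values below are made up.

    import numpy as np

    sigma = 5.670e-8                            # W m^-2 K^-4
    T = np.array([290.0, 295.0, 310.0, 330.0])  # four fine-scale pixels, K

    L_from_mean_T = sigma * T.mean() ** 4       # aggregate temperature first
    L_mean = (sigma * T ** 4).mean()            # aggregate radiance first
    T_eff = (L_mean / sigma) ** 0.25            # temperature a coarse pixel implies

    print(L_from_mean_T, L_mean)                # differ: T^4 is convex
    print(T.mean(), T_eff)                      # radiometric T_eff > mean T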
A Re-Unification of Two Competing Models for Document Retrieval.
ERIC Educational Resources Information Center
Bodoff, David
1999-01-01
Examines query-oriented versus document-oriented information retrieval and feedback learning. Highlights include a reunification of the two approaches for probabilistic document retrieval and for vector space model (VSM) retrieval; learning in VSM and in probabilistic models; multi-dimensional scaling; and ongoing field studies. (LRW)
Quantum metabolism explains the allometric scaling of metabolic rates.
Demetrius, Lloyd; Tuszynski, J A
2010-03-06
A general model explaining the origin of the allometric laws of physiology is proposed, based on coupled energy-transducing oscillator networks embedded in a physical d-dimensional space (d = 1, 2, 3). This approach integrates Mitchell's theory of chemiosmosis with the Debye model of the thermal properties of solids. We derive a scaling rule that relates the energy generated by redox reactions in cells, the dimensionality of the physical space and the mean cycle time. Two major regimes are found, corresponding to classical and quantum behaviour. The classical behaviour leads to allometric isometry, while the quantum regime leads to scaling laws relating metabolic rate and body size that cover a broad range of exponents depending on dimensionality and specific parameter values. The regimes are consistent with a range of behaviours encountered in micelles, plants and animals and provide a conceptual framework for a theory of the metabolic function of living systems.
NASA Astrophysics Data System (ADS)
Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.
2017-12-01
Remotely sensed data has transformed the field of large scale hydraulic modelling. New digital elevation, hydrography and river width data has allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60 km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event 'footprint'. Despite this, many national scale risk analyses still use 'constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge-to-gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in space. We undertake a number of quality checks of the stochastic model and compare real and simulated footprints to show that the method is able to re-create realistic patterns even at continental scales where there is large variation in flood generating mechanisms. We then show how these patterns can be used to drive a large scale 2D hydraulic model to predict regional scale flooding.
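A hedged sketch of the footprint-generation idea: preserve gauge-to-gauge dependence with a Gaussian copula over normal scores and read each gauge's value off its own empirical quantiles, so return periods vary realistically in space. The paper uses a conditional multivariate extremes model rather than this copula stand-in, and `records` below is a fake (years x gauges) annual-maximum array.

    import numpy as np
    from scipy.stats import norm, rankdata

    rng = np.random.default_rng(42)
    regional = rng.gumbel(size=(60, 1))             # shared regional driver
    records = regional + rng.gumbel(size=(60, 5))   # fake gauge records

    # gauge-to-gauge dependence via normal scores of the ranks
    ranks = np.apply_along_axis(rankdata, 0, records)
    z = norm.ppf(ranks / (len(records) + 1))
    C = np.corrcoef(z, rowvar=False)

    def synthetic_footprints(n_events):
        # correlated uniforms -> per-gauge empirical quantiles
        g = rng.multivariate_normal(np.zeros(C.shape[0]), C, size=n_events)
        u = norm.cdf(g)
        cols = [np.quantile(records[:, j], u[:, j])
                for j in range(records.shape[1])]
        return np.column_stack(cols)                # (n_events x gauges)

    events = synthetic_footprints(10000)
    print(events.shape)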
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
High resolution fossil fuel combustion CO2 emission fluxes for the United States.
Gurney, Kevin R; Mendoza, Daniel L; Zhou, Yuyu; Fischer, Marc L; Miller, Chris C; Geethakumar, Sarath; de la Rue du Can, Stephane
2009-07-15
Quantification of fossil fuel CO2 emissions at fine space and time resolution is emerging as a critical need in carbon cycle and climate change research. As atmospheric CO2 measurements expand with the advent of a dedicated remote sensing platform and denser in situ measurements, the ability to close the carbon budget at spatial scales of approximately 100 km2 and daily time scales requires fossil fuel CO2 inventories at commensurate resolution. Additionally, the growing interest in U.S. climate change policy measures is best served by emissions that are tied to the driving processes in space and time. Here we introduce a high resolution data product (the "Vulcan" inventory: www.purdue.edu/eas/carbon/vulcan/) that has quantified fossil fuel CO2 emissions for the contiguous U.S. at spatial scales less than 100 km2 and temporal scales as small as hours. This data product, completed for the year 2002, includes detail on combustion technology and 48 fuel types through all sectors of the U.S. economy. The Vulcan inventory is built from decades of local/regional air pollution monitoring and complements these data with census, traffic, and digital road data sets. The Vulcan inventory shows excellent agreement with national-level Department of Energy inventories, despite the different approach taken by the DOE to quantify U.S. fossil fuel CO2 emissions. Comparison to the global 1 degree x 1 degree fossil fuel CO2 inventory, used widely by the carbon cycle and climate change community prior to the construction of the Vulcan inventory, highlights the space/time biases inherent in the population-based approach.
On the Gompertzian growth in the fractal space-time.
Molski, Marcin; Konarski, Jerzy
2008-06-01
An analytical approach to the determination of the time-dependent temporal fractal dimension b_t(t) and scaling factor a_t(t) for Gompertzian growth in the fractal space-time is presented. The derived formulae take into account the proper boundary conditions and permit a calculation of the mean values of b_t(t) and a_t(t) over any period of time. The formulae have been tested on experimental data obtained by Schrek for the Brown-Pearce rabbit tumor growth. The results obtained confirm the possibility of successfully mapping the experimental Gompertz curve onto the fractal power-law scaling function y(t) = a_t(t) t^(b_t(t)), and support the thesis that Gompertzian growth is a self-similar and allometric process of a holistic nature.
2012-01-11
dynamic behavior, wherein a dissipative dynamical system can deliver only a fraction of its energy to its surroundings and can store only a fraction of the... collection of interacting subsystems. The behavior and properties of the aggregate large-scale system can then be deduced from the behaviors of the... uniqueness is established. This state space formalism of thermodynamics shows that the behavior of heat, as described by the conservation equations of
2012-01-20
ultrasonic Lamb waves to plastic strain and fatigue life. Theory was developed and validated to predict second harmonic generation for specific mode... Fatigue and damage generation and progression are processes consisting of a series of interrelated events that span large scales of space and time... strain and fatigue life. A set of experiments was completed that worked to relate the acoustic nonlinearity measured with Lamb waves to both the
NASA Astrophysics Data System (ADS)
Hansen, S. K.; Berkowitz, B.
2014-12-01
Recently, we developed an alternative CTRW formulation which uses a "latching" upscaling scheme to rigorously map continuous or fine-scale stochastic solute motion onto discrete transitions on an arbitrarily coarse lattice (with spacing potentially on the meter scale or more). This approach enables model simplification, among many other things. Under advection, for example, we see that many relevant anomalous transport problems may be mapped into 1D, with latching to a sequence of successive, uniformly spaced planes. In this formulation (which we term RP-CTRW), the spatial transition vector may generally be made deterministic, with CTRW waiting time distributions encapsulating all the stochastic behavior. We demonstrate the excellent performance of this technique, alongside Pareto-distributed waiting times, in explaining experiments across a variety of scales using only two degrees of freedom. An interesting new application of the RP-CTRW technique is the analysis of radial (push-pull) tracer tests. Given modern computational power, random walk simulations are a natural fit for the inverse problem of inferring subsurface parameters from push-pull test data, and we propose them as an alternative to the classical type curve approach. In particular, we explore the visibility of heterogeneity through non-Fickian behavior in push-pull tests, and illustrate the ability of a radial RP-CTRW technique to encapsulate this behavior using a sparse parameterization which has predictive value.
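A minimal sketch of the 1D RP-CTRW picture described above: the spatial transition is deterministic (one uniformly spaced plane downstream per step) and all randomness sits in heavy-tailed waiting times. The Pareto exponent and plane count are illustrative, not fitted values.

    import numpy as np

    rng = np.random.default_rng(7)

    def first_passage_times(n_particles, n_planes, alpha=0.7):
        # time to latch through n_planes planes with i.i.d. Pareto waits
        waits = 1.0 + rng.pareto(alpha, size=(n_particles, n_planes))
        return waits.sum(axis=1)

    t = first_passage_times(100_000, 50)
    # heavy tail -> anomalous (non-Fickian) late arrivals: mean >> median
    print(np.median(t), np.mean(t))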
Relativistic space-charge-limited current for massive Dirac fermions
NASA Astrophysics Data System (ADS)
Ang, Y. S.; Zubair, M.; Ang, L. K.
2017-04-01
A theory of relativistic space-charge-limited current (SCLC) is formulated to determine the SCLC scaling, J ∝ V^α / L^β, for a finite band-gap Dirac material of length L biased under a voltage V. In one-dimensional (1D) bulk geometry, our model allows (α, β) to vary from (2, 3) for the nonrelativistic model in traditional solids to (3/2, 2) for the ultrarelativistic model of massless Dirac fermions. For 2D thin-film geometry we obtain α = β, which varies between 2 and 3/2, respectively, at the nonrelativistic and ultrarelativistic limits. We further provide rigorous proof based on a Green's-function approach that for a uniform SCLC model described by carrier-density-dependent mobility, the scaling relations of the 1D bulk model can be directly mapped into the case of 2D thin film for any contact geometries. Our simplified approach provides a convenient tool to obtain the 2D thin-film SCLC scaling relations without the need of explicitly solving the complicated 2D problems. Finally, this work clarifies the inconsistency in using the traditional SCLC models to explain the experimental measurement of a 2D Dirac semiconductor. We conclude that the voltage scaling 3/2 < α < 2 is a distinct signature of massive Dirac fermions in a Dirac semiconductor and is in agreement with experimental SCLC measurements in MoS2.
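Written out, the two 1D bulk limits quoted above take the forms below; the nonrelativistic case is the standard Mott-Gurney law (with permittivity \epsilon and mobility \mu), while the ultrarelativistic prefactor is model-dependent and omitted here.

    \[
      J_{\mathrm{non\mbox{-}rel}} = \frac{9}{8}\,\epsilon\mu\,\frac{V^{2}}{L^{3}}
      \quad (\alpha,\beta)=(2,3),
      \qquad
      J_{\mathrm{ultra\mbox{-}rel}} \propto \frac{V^{3/2}}{L^{2}}
      \quad (\alpha,\beta)=\Bigl(\tfrac{3}{2},\,2\Bigr).
    \]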
Nonextensive Entropy Approach to Space Plasma Fluctuations and Turbulence
NASA Astrophysics Data System (ADS)
Leubner, M. P.; Vörös, Z.; Baumjohann, W.
Spatial intermittency in fully developed turbulence is an established feature of astrophysical plasma fluctuations and is, in particular, apparent in the interplanetary medium from in situ observations. In this situation, the classical Boltzmann-Gibbs extensive thermo-statistics, applicable when microscopic interactions and memory are short ranged and the environment is a continuous and differentiable manifold, fails. Upon generalization of the entropy function to nonextensivity, accounting for long-range interactions and thus for correlations in the system, it is demonstrated that the corresponding probability distribution functions (PDFs) are members of a family of specific power-law distributions. In particular, the resulting theoretical bi-κ functional reproduces accurately the observed global leptokurtic, non-Gaussian shape of the increment PDFs of characteristic solar wind variables on all scales, where nonlocality in turbulence is controlled via a multiscale coupling parameter. Gradual decoupling is obtained by enhancing the spatial separation scale, corresponding to increasing κ-values, in the case of slow solar wind conditions, where a Gaussian is approached in the limit of large scales. In contrast, the scaling properties in the high speed solar wind are predominantly governed by the mean energy or variance of the distribution, appearing as a second parameter in the theory. The PDFs of solar wind scalar field differences are computed from WIND and ACE data for different time-lags and bulk speeds and analyzed within the nonextensive theory, where a particular nonlinear dependence of the coupling parameter and variance on scale arises for the best fitting theoretical PDFs. Consequently, nonlocality in fluctuations, related to both turbulence and its large scale driving, should be linked to long-range interactions in the context of nonextensive entropy generalization, providing the physical background of the observed scale dependence of fluctuations in intermittent space plasmas.
Rugel, Emily J; Henderson, Sarah B; Carpiano, Richard M; Brauer, Michael
2017-11-01
Natural spaces can provide psychological benefits to individuals, but population-level epidemiologic studies have produced conflicting results. Refining current exposure-assessment methods is necessary to advance our understanding of population health and to guide the design of health-promoting urban forms. The aim of this study was to develop a comprehensive Natural Space Index that robustly models potential exposure based on the presence, form, accessibility, and quality of multiple forms of greenspace (e.g., parks and street trees) and bluespace (e.g., oceans and lakes). The index was developed for greater Vancouver, Canada. Greenness presence was derived from remote sensing (NDVI/EVI); forms were extracted from municipal and private databases; and accessibility was based on restrictions such as private ownership. Quality appraisals were conducted for 200 randomly sampled parks using the Public Open Space Desktop Appraisal Tool (POSDAT). Integrating these measures in GIS, exposure was assessed for 60,242 postal codes using 100- to 1,600-m buffers based on hypothesized pathways to mental health. A single index was then derived using principal component analysis (PCA). Comparing NDVI with alternate approaches for assessing natural space resulted in widely divergent results, with quintile rankings shifting for 22-88% of postal codes, depending on the measure. Overall park quality was fairly low (mean of 15 on a scale of 0-45), with no significant difference seen by neighborhood-level household income. The final PCA identified three main sets of variables, with the first two components explaining 68% of the total variance. The first component was dominated by the percentages of public and private greenspace and bluespace and public greenspace within 250m, while the second component was driven by lack of access to bluespace within 1 km. Many current approaches to modeling natural space may misclassify exposures and have limited specificity. The Natural Space Index represents a novel approach at a regional scale with application to urban planning and policy-making. Copyright © 2017 Elsevier Inc. All rights reserved.
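A brief sketch of the final index-construction step, deriving one composite score from several standardized exposure measures via PCA. The feature matrix and column meanings are hypothetical stand-ins for the postal-code metrics described above.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    X = rng.normal(size=(60242, 5))   # stand-ins: %public green, %private
                                      # green, %bluespace, green within 250 m,
                                      # bluespace access within 1 km

    Xs = StandardScaler().fit_transform(X)    # put measures on one scale
    pca = PCA(n_components=2).fit(Xs)
    print(pca.explained_variance_ratio_)      # share captured per component
    index = pca.transform(Xs)[:, 0]           # component 1 as the index score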
Energy Considerations of Hypothetical Space Drives
NASA Technical Reports Server (NTRS)
Millis, Marc G.
2007-01-01
The energy requirements of hypothetical, propellant-less space drives are compared to rockets. This serves to provide introductory estimates of potential benefits and to suggest analytical approaches for further study. A "space drive" is defined as an idealized form of propulsion that converts stored potential energy directly into kinetic energy using only the interactions between the spacecraft and its surrounding space. For Earth-to-orbit, the space drive uses 3.7 times less energy. For deep space travel, energy is proportional to the square of delta-v, whereas rocket energy scales exponentially. This has the effect of rendering a space drive 150 orders of magnitude better than a 17,000-s specific impulse rocket for sending a modest 5000 kg probe to traverse 5 ly in 50 years. Indefinite levitation, which is impossible for a rocket, could conceivably require 62 MJ/kg for a space drive. Assumption sensitivities and further analysis options are offered to guide further inquiries.
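A back-of-envelope check of the ~150 orders-of-magnitude claim, assuming an accelerate-then-decelerate profile (total delta-v of 2 x 0.1c for 5 ly in 50 years) and counting the rocket's propellant kinetic energy; this is a sanity-check sketch, not the paper's full derivation.

    import math

    m = 5000.0                 # probe mass, kg
    c = 3.0e8
    v = 0.1 * c                # cruise speed: 5 ly in 50 years
    ve = 17000 * 9.81          # exhaust velocity from the 17,000 s Isp

    E_drive = 2 * (0.5 * m * v**2)            # kinetic energy paid twice
    ratio = math.exp(2 * v / ve)              # Tsiolkovsky, delta-v = 2v
    E_rocket = 0.5 * m * (ratio - 1) * ve**2  # propellant kinetic energy

    print(math.log10(E_rocket / E_drive))     # ~151 orders of magnitude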
Tree detection in orchards from VHR satellite images using scale-space theory
NASA Astrophysics Data System (ADS)
Mahour, Milad; Tolpekin, Valentyn; Stein, Alfred
2016-10-01
This study focused on extracting reliable and detailed information from Very High Resolution (VHR) satellite images for the detection of individual trees in orchards. The images contain detailed information on the spectral and geometrical properties of trees. Their scale level, however, is insufficient for the spectral properties of individual trees, because adjacent tree canopies interlock. We modeled trees using a bell-shaped spectral profile. Identifying the brightest peak was challenging due to sun illumination effects caused by differences in the positions of the sun and the satellite sensor. Crown boundary detection was solved by using the NDVI from the same image. We used Gaussian scale-space methods that search for extrema in the scale-space domain. The procedures were tested on two orchards in Iran with different tree types, tree sizes and tree observation patterns. Validation was done using reference data derived from an UltraCam digital aerial photo. Local extrema of the determinant of the Hessian corresponded well to the geographical coordinates and the size of individual trees. False detections arising from a slight asymmetry of trees were distinguished from multiple detections of the same tree with different extents. Uncertainty assessment was carried out on the presence and spatial extents of individual trees. The study demonstrated how the suggested approach can be used for image segmentation in orchards with different types of trees. We concluded that Gaussian scale-space theory can be applied to extract information from VHR satellite images for individual tree detection. This may lead to improved decision making for irrigation and crop water requirement purposes in future studies.
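The core scale-space operation the method builds on can be sketched as a scale-normalized determinant-of-Hessian blob detector; sun-angle handling and the NDVI crown step are omitted, and the synthetic orchard below stands in for a VHR image.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def detect_blobs(img, sigmas=(2, 3, 4, 6, 8), thresh=0.01):
        # scale-normalized determinant of the Hessian at each scale
        stack = []
        for s in sigmas:
            L = gaussian_filter(img.astype(float), s)
            Lxx = np.gradient(np.gradient(L, axis=1), axis=1)
            Lyy = np.gradient(np.gradient(L, axis=0), axis=0)
            Lxy = np.gradient(np.gradient(L, axis=1), axis=0)
            stack.append(s**4 * (Lxx * Lyy - Lxy**2))
        D = np.stack(stack)
        # local maxima over (scale, row, col), above a small threshold
        peaks = (D == maximum_filter(D, size=3)) & (D > thresh)
        return np.argwhere(peaks)      # rows: (scale index, row, col)

    # synthetic orchard: bell-shaped crowns on a regular grid
    yy, xx = np.mgrid[0:128, 0:128]
    img = sum(np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * 4.0 ** 2))
              for y in range(16, 128, 24) for x in range(16, 128, 24))
    print(detect_blobs(img)[:5])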
Management applications of discontinuity theory
Angeler, David G.; Allen, Craig R.; Barichievy, Chris; Eason, Tarsha; Garmestani, Ahjond S.; Graham, Nicholas A.J.; Granholm, Dean; Gunderson, Lance H.; Knutson, Melinda; Nash, Kirsty L.; Nelson, R. John; Nystrom, Magnus; Spanbauer, Trisha; Stow, Craig A.; Sundstrom, Shana M.
2015-01-01
Human impacts on the environment are multifaceted and can occur across distinct spatiotemporal scales. Ecological responses to environmental change are therefore difficult to predict, and entail large degrees of uncertainty. Such uncertainty requires robust tools for management to sustain ecosystem goods and services and maintain resilient ecosystems. We propose an approach based on discontinuity theory that accounts for patterns and processes at distinct spatial and temporal scales, an inherent property of ecological systems. Discontinuity theory has not been applied in natural resource management and could therefore improve ecosystem management because it explicitly accounts for ecological complexity. Synthesis and applications. We highlight the application of discontinuity approaches for meeting management goals. Specifically, discontinuity approaches have significant potential to measure and thus understand the resilience of ecosystems, to objectively identify critical scales of space and time in ecological systems at which human impact might be most severe, to provide warning indicators of regime change, to help predict and understand biological invasions and extinctions, and to focus monitoring efforts. Discontinuity theory can complement current approaches, providing a broader paradigm for ecological management and conservation.
Noncommutative FRW Apparent Horizon and Hawking Radiation
NASA Astrophysics Data System (ADS)
Bouhallouf, H.; Mebarki, N.; Aissaoui, H.
2017-11-01
In the context of noncommutative (NCG) gauge gravity, and using a cosmic-time power law formula for the scale factor, a Friedmann-Robertson-Walker (FRW) like metric is obtained. Within the fermion tunneling effect approach, and depending on the various intervals of the power parameter, expressions for the apparent horizon are also derived. It is shown that in some regions of the parameter space a pure NCG trapped horizon does exist, leading to a new interpretation of the role played by the noncommutativity of space-time.
2004-04-15
This is an artist's concept of the completely operational International Space Station being approached by an X-33 Reusable Launch Vehicle (RLV). The X-33 program was designed to pave the way to a full-scale, commercially developed RLV as the flagship technology demonstrator for technologies that would lower the cost of access to space. It is unpiloted, taking off vertically like a rocket, reaching an altitude of up to 60 miles and speeds between Mach 13 and 15, and landing horizontally like an airplane. The X-33 program was cancelled in 2001.
Casimir force in the Gödel space-time and its possible induced cosmological inhomogeneity
NASA Astrophysics Data System (ADS)
Khodabakhshi, Sh.; Shojai, A.
2017-07-01
The Casimir force between two parallel plates in the Gödel universe is computed for a scalar field at finite temperature. It is observed that when the plates' separation is comparable with the scale given by the rotation of the space-time, the force becomes repulsive and then approaches zero. Since it has been shown previously that the universe may experience a Gödel phase for a small period of time, the induced inhomogeneities from the Casimir force are also studied.
Free-space microwave-power transmission
NASA Technical Reports Server (NTRS)
Brown, W. C.
1976-01-01
Laboratory-scale wireless transmission of microwave power approaches fifty-four percent efficiency. DC is converted to a 2.45-GHz signal and transmitted through a horn antenna array; the microwave signal is received at a rectenna, where it is simultaneously collected and rectified back to dc at the receiving sites; the dc is then processed for wired distribution.
ERIC Educational Resources Information Center
Silvester, June P.; And Others
This report describes a new automated process that pioneers full-scale operational use of subject switching by the NASA (National Aeronautics and Space Administration) Scientific and Technical Information (STI) Facility. The subject switching process routinely translates machine-readable subject terms from one controlled vocabulary into the…
Semantic Search of Web Services
ERIC Educational Resources Information Center
Hao, Ke
2013-01-01
This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…
Spatially explicit animal response to composition of habitat
Benjamin P. Pauli; Nicholas P. McCann; Patrick A. Zollner; Robert Cummings; Jonathan H. Gilbert; Eric J. Gustafson
2013-01-01
Complex decisions dramatically affect animal dispersal and space use. Dispersing individuals respond to a combination of fine-scale environmental stimuli and internal attributes. Individual-based modeling offers a valuable approach for the investigation of such interactions because it combines the heterogeneity of animal behaviors with spatial detail. Most individual-...
Asteroid Redirect Mission Concept: A Bold Approach for Utilizing Space Resources
NASA Technical Reports Server (NTRS)
Mazanek, Daniel D.; Merrill, Raymond G.; Brophy, John R.; Mueller, Robert P.
2014-01-01
The utilization of natural resources from asteroids is an idea that is older than the Space Age. The technologies are now available to transform this endeavour from an idea into reality. The Asteroid Redirect Mission (ARM) is a mission concept which includes the goal of robotically returning a small Near-Earth Asteroid (NEA) or a multi-ton boulder from a large NEA to cislunar space in the mid 2020's using an advanced Solar Electric Propulsion (SEP) vehicle and currently available technologies. The paradigm shift enabled by the ARM concept would allow in-situ resource utilization (ISRU) to be used at the human mission departure location (i.e., cislunar space) versus exclusively at the deep-space mission destination. This approach drastically reduces the barriers associated with utilizing ISRU for human deep-space missions. The successful testing of ISRU techniques and associated equipment could enable large-scale commercial ISRU operations to become a reality and enable a future space-based economy utilizing processed asteroidal materials. This paper provides an overview of the ARM concept and discusses the mission objectives, key technologies, and capabilities associated with the mission, as well as how the ARM and associated operations would benefit humanity's quest for the exploration and settlement of space.
Multiscale unfolding of real networks by geometric renormalization
NASA Astrophysics Data System (ADS)
García-Pérez, Guillermo; Boguñá, Marián; Serrano, M. Ángeles
2018-06-01
Symmetries in physical theories denote invariance under some transformation, such as self-similarity under a change of scale. The renormalization group provides a powerful framework to study these symmetries, leading to a better understanding of the universal properties of phase transitions. However, the small-world property of complex networks complicates application of the renormalization group by introducing correlations between coexisting scales. Here, we provide a framework for the investigation of complex networks at different resolutions. The approach is based on geometric representations, which have been shown to sustain network navigability and to reveal the mechanisms that govern network structure and evolution. We define a geometric renormalization group for networks by embedding them into an underlying hidden metric space. We find that real scale-free networks show geometric scaling under this renormalization group transformation. We unfold the networks in a self-similar multilayer shell that distinguishes the coexisting scales and their interactions. This in turn offers a basis for exploring critical phenomena and universality in complex networks. It also affords us immediate practical applications, including high-fidelity smaller-scale replicas of large networks and a multiscale navigation protocol in hyperbolic space that outperforms navigation on the single-layer representation.
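A toy version of the renormalization step described above: nodes carrying an angular ("similarity") coordinate in the hidden metric space are grouped into non-overlapping blocks of r consecutive nodes, and two supernodes are linked if any of their members were. The graph and coordinates are synthetic stand-ins.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(5)
    n, r = 200, 2
    theta = rng.uniform(0, 2 * np.pi, n)      # angular coordinates
    G = nx.watts_strogatz_graph(n, 6, 0.1)    # placeholder network

    def renormalize(G, order, r):
        # block r consecutive nodes (by angular order) into one supernode
        block = {v: i // r for i, v in enumerate(order)}
        Gp = nx.Graph()
        Gp.add_nodes_from(set(block.values()))
        for u, v in G.edges():
            if block[u] != block[v]:          # keep any inter-block link
                Gp.add_edge(block[u], block[v])
        return Gp

    G1 = renormalize(G, list(np.argsort(theta)), r)
    print(G.number_of_nodes(), "->", G1.number_of_nodes())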
Monitoring Atmospheric CO2 From Space: Challenge & Approach
NASA Technical Reports Server (NTRS)
Lin, Bing; Harrison, F. Wallace; Nehrir, Amin; Browell, Edward; Dobler, Jeremy; Campbell, Joel; Meadows, Byron; Obland, Michael; Kooi, Susan; Fan, Tai-Fang;
2015-01-01
Atmospheric CO2 is the key radiative forcing for the Earth's climate and may have contributed a major part of the Earth's warming during the past 150 years. Advanced knowledge of CO2 distributions and changes can lead to considerable model improvements in predictions of the Earth's future climate. Large uncertainties in the predictions have persisted for decades owing to limited CO2 observations. To obtain precise measurements of atmospheric CO2, certain challenges have to be overcome. For example, global annual means of CO2 are rather stable but have a very small increasing trend that is significant for multi-decadal long-term climate. At short time scales (a second to a few hours), regional and subcontinental gradients in the CO2 concentration are very small, only on the order of a few parts per million (ppm) compared to the mean atmospheric CO2 concentration of about 400 ppm, which requires atmospheric CO2 space monitoring systems with extremely high accuracy and precision (about 0.5 ppm or 0.125%) at spatiotemporal scales of around 75 km and 10 s. It also requires decadal-scale system stability. Furthermore, rapid changes in high latitude environments such as melting ice, snow and frozen soil, persistent thin cirrus clouds in the Amazon and other tropical areas, and harsh weather conditions over the Southern Ocean all increase the difficulty of satellite atmospheric CO2 observations. Space lidar approaches using the Integrated Path Differential Absorption (IPDA) technique are considered capable of obtaining precise CO2 measurements and have thus been proposed by various studies, including the 2007 Decadal Survey (DS) of the U.S. National Research Council. This study considers using Intensity-Modulated Continuous-Wave (IM-CW) lidar to monitor global atmospheric CO2 distribution and variability from space. Development and demonstration of space lidar for atmospheric CO2 measurements have been carried out jointly by NASA Langley Research Center and Exelis, Inc. As prototypes of space IPDA lidars, airborne laser absorption lidar systems operating in the 1.57 micron CO2 absorption band have been developed and tested through lab, ground-based range, and flight campaigns. Very encouraging results have been obtained. The signal-to-noise ratio (SNR) for clear sky IPDA measurements of CO2 differential absorption optical depth (DAOD) for a 10-s integration over vegetated areas at about 10 km range was found to be as high as 1300, resulting in an error of 0.077%, or an equivalent CO2 mixing ratio (XCO2) column precision of 0.3 ppm. Precise range measurements using the IM-CW lidar approach were also achieved, with uncertainties shown to be at the sub-meter level. Based on the airborne lidar development, space lidar atmospheric CO2 observations are simulated. The simulations show that with the IM-CW approach, accurate atmospheric CO2 measurements can be achieved from space, and a space mission such as that proposed by the DS will meet the science goals of atmospheric CO2 monitoring.
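The two headline numbers above follow from the basic IPDA relations: the differential absorption optical depth from on/off-line returns (the factor 1/2 accounts for the round trip), and, treating the relative DAOD error as roughly 1/SNR, the quoted 0.077% and ~0.3 ppm at SNR ~ 1300. The received powers below are made up.

    import numpy as np

    P_on, P_off = 0.62, 1.00           # received powers (arbitrary units)
    daod = 0.5 * np.log(P_off / P_on)  # differential absorption optical depth

    snr = 1300.0
    frac_err = 1.0 / snr               # ~ relative DAOD error, 10 s average
    xco2 = 400.0                       # ppm; assume error maps proportionally
    print(daod, 100 * frac_err, xco2 * frac_err)   # -> ~0.077 %, ~0.31 ppm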
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okumura, Teppei; Seljak, Uroš; Desjacques, Vincent, E-mail: teppei@ewha.ac.kr, E-mail: useljak@berkeley.edu, E-mail: dvince@physik.uzh.ch
It was recently shown that the power spectrum in redshift space can be written as a sum of cross-power spectra between number weighted velocity moments, of which the lowest are density and momentum density. We investigate numerically the properties of these power spectra for simulated galaxies and dark matter halos and compare them to the dark matter power spectra, generalizing the concept of the bias in density-density power spectra. Because all of the quantities are number weighted, this approach is well defined even for sparse systems such as massive halos. This contrasts with previous approaches to RSD, where velocity correlations have been explored, but velocity field is a poorly defined concept for sparse systems. We find that the number density weighting leads to a strong scale dependence of the bias terms for momentum density auto-correlation and cross-correlation with density. This trend becomes more significant for the more biased halos and leads to an enhancement of RSD power relative to the linear theory. Fingers-of-god effects, which in this formalism come from the correlations of the higher order moments beyond the momentum density, lead to smoothing of the power spectrum and can reduce this enhancement of power from the scale dependent bias, but are relatively small for halos with no small scale velocity dispersion. In comparison, for a more realistic galaxy sample with satellites, the small scale velocity dispersion generated by satellite motions inside the halos leads to a larger power suppression on small scales, but this depends on the satellite fraction and on the details of how the satellites are distributed inside the halo. We investigate several statistics such as the two-dimensional power spectrum P(k,μ), where μ is the cosine of the angle between the Fourier mode and the line of sight, its multipole moments, its powers of μ^2, and configuration space statistics. Overall we find that the nonlinear effects in realistic galaxy samples such as luminous red galaxies affect the redshift space clustering on very large scales: for example, the quadrupole moment is affected by 10% for k < 0.1 h Mpc^-1, which means that these effects need to be understood if we want to extract cosmological information from the redshift space distortions.
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces; however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to illustrate the method more clearly. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components relative to the electromagnetic wavelength. A first-order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near-normal-incidence geometrical optics and wide-angle Bragg scattering results.
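As a sketch of the two-scale idea (our own one-dimensional toy construction, not the paper's formulation), a Gaussian random surface can be split spectrally at a wavenumber tied to the electromagnetic wavelength:

```python
import numpy as np

# Toy two-scale split of a 1-D Gaussian random surface: Fourier components
# below k_split form the "large scale" surface treated by geometrical optics;
# the remainder is the "small scale" roughness handled perturbatively (Bragg).
rng = np.random.default_rng(0)
n, dx = 4096, 0.01                              # samples, spacing (arbitrary units)
k = np.fft.rfftfreq(n, dx) * 2 * np.pi          # angular wavenumbers
spectrum = np.zeros_like(k)
spectrum[1:] = k[1:] ** -3.5                    # assumed power-law height spectrum
amp = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
H = amp * np.sqrt(spectrum)                     # Gaussian random spectral amplitudes

wavelength_em = 0.3                             # "electromagnetic wavelength"
k_split = 2 * np.pi / wavelength_em             # split scale tied to the EM wavelength
h_large = np.fft.irfft(np.where(k < k_split, H, 0), n)
h_small = np.fft.irfft(np.where(k >= k_split, H, 0), n)
# h_large + h_small reconstructs the full surface by construction.
```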
NASA Astrophysics Data System (ADS)
Simón-Moral, Andres; Santiago, Jose Luis; Krayenhoff, E. Scott; Martilli, Alberto
2014-06-01
A Reynolds-averaged Navier-Stokes model is used to investigate the evolution of the sectional drag coefficient and turbulent length scales with the layouts of aligned arrays of cubes. Results show that the sectional drag coefficient is determined by the non-dimensional streamwise distance (sheltering parameter) and the non-dimensional spanwise distance (channelling parameter) between obstacles. This is different from previous approaches that consider only the plan area density. On the other hand, turbulent length scales behave similarly to the staggered case (e.g. they are functions of the plan area density only). Analytical formulae are proposed for the length scales and for the sectional drag coefficient as a function of the sheltering and channelling parameters, and implemented in a column model. This approach demonstrates good skill in the prediction of vertical profiles of the spatially-averaged horizontal wind speed.
Management applications of discontinuity theory
1. Human impacts on the environment are multifaceted and can occur across distinct spatiotemporal scales. Ecological responses to environmental change are therefore difficult to predict, and entail large degrees of uncertainty. Such uncertainty requires robust tools for management to sustain ecosystem goods and services and maintain resilient ecosystems. 2. We propose an approach based on discontinuity theory that accounts for patterns and processes at distinct spatial and temporal scales, an inherent property of ecological systems. Discontinuity theory has not been applied in natural resource management and could therefore improve ecosystem management because it explicitly accounts for ecological complexity. 3. Synthesis and applications. We highlight the application of discontinuity approaches for meeting management goals. Specifically, discontinuity approaches have significant potential to measure and thus understand the resilience of ecosystems, to objectively identify critical scales of space and time in ecological systems at which human impact might be most severe, to provide warning indicators of regime change, to help predict and understand biological invasions and extinctions, and to focus monitoring efforts. Discontinuity theory can complement current approaches, providing a broader paradigm for ecological management and conservation. This manuscript provides insight on using discontinuity approaches to aid in managing complex ecological systems.
A Process Algebra Approach to Quantum Electrodynamics
NASA Astrophysics Data System (ADS)
Sulis, William
2017-12-01
The process algebra program is directed towards developing a realist model of quantum mechanics free of paradoxes, divergences and conceptual confusions. From this perspective, fundamental phenomena are viewed as emerging from primitive informational elements generated by processes. The process algebra has been shown to successfully reproduce scalar non-relativistic quantum mechanics (NRQM) without the usual paradoxes and dualities. NRQM appears as an effective theory which emerges under specific asymptotic limits. Space-time, scalar particle wave functions and the Born rule are all emergent in this framework. In this paper, the process algebra model is reviewed, extended to the relativistic setting, and then applied to the problem of electrodynamics. A semiclassical version is presented in which a Minkowski-like space-time emerges, as well as a vector potential that is discrete and photon-like at small scales and near-continuous and wave-like at large scales. QED is viewed as an effective theory at small scales while Maxwell theory becomes an effective theory at large scales. The process algebra version of quantum electrodynamics is intuitive and realist, free from divergences, and eliminates the distinction between particle, field and wave. Computations are carried out using the configuration space process covering map, although the connection to second quantization has not been fully explored.
An Analytical Thermal Model for Autonomous Soaring Research
NASA Technical Reports Server (NTRS)
Allen, Michael
2006-01-01
A viewgraph presentation describing an analytical thermal model used to enable research on autonomous soaring for a small UAV aircraft is given. The topics include: 1) Purpose; 2) Approach; 3) SURFRAD Data; 4) Convective Layer Thickness; 5) Surface Heat Budget; 6) Surface Virtual Potential Temperature Flux; 7) Convective Scaling Velocity; 8) Other Calculations; 9) Yearly trends; 10) Scale Factors; 11) Scale Factor Test Matrix; 12) Statistical Model; 13) Updraft Strength Calculation; 14) Updraft Diameter; 15) Updraft Shape; 16) Smoothed Updraft Shape; 17) Updraft Spacing; 18) Environment Sink; 19) Updraft Lifespan; 20) Autonomous Soaring Research; 21) Planned Flight Test; and 22) Mixing Ratio.
Controlled ecological life-support system - Use of plants for human life-support in space
NASA Technical Reports Server (NTRS)
Chamberland, D.; Knott, W. M.; Sager, J. C.; Wheeler, R.
1992-01-01
Scientists and engineers within NASA are conducting research which will lead to the development of advanced life-support systems that utilize higher plants in a unique approach to solving long-term life-support problems in space. This biological solution to life-support, the Controlled Ecological Life-Support System (CELSS), is a complex, extensively controlled, bioengineered system that relies on plants to provide the principal elements, from gas exchange and food production to potable water reclamation. Research at John F. Kennedy Space Center (KSC) is proceeding with a comprehensive investigation of the individual parts of the CELSS system at a one-person scale in an approach called the Breadboard Project. Concurrently, a relatively new NASA-sponsored research effort is investigating plant growth and metabolism in microgravity, innovative hydroponic nutrient delivery systems, and the use of highly efficient light-emitting diodes for artificial plant illumination.
An Efficient and Versatile Means for Assembling and Manufacturing Systems in Space
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Doggett, William R.; Hafley, Robert A.; Komendera, Erik; Correll, Nikolaus; King, Bruce
2012-01-01
Within NASA Space Science, Exploration and the Office of Chief Technologist, there are Grand Challenges and advanced future exploration, science and commercial mission applications that could benefit significantly from large-span and large-area structural systems. Of particular and persistent interest to the Space Science community is the desire for large (10-50 meter main aperture diameter) space telescopes that would revolutionize space astronomy. Achieving these systems will likely require on-orbit assembly, but previous approaches for assembling large-scale telescope truss structures and systems in space have been perceived as very costly because they require high precision and custom components. These components rely on a large number of mechanical connections and supporting infrastructure that are unique to each application. In this paper, a new assembly paradigm that mitigates these concerns is proposed and described. A new assembly approach, developed to implement the paradigm, incorporates: Intelligent Precision Jigging Robots, electron-beam welding, robotic handling/manipulation, operations assembly sequence and path planning, and low-precision weldable structural elements. Key advantages of the new assembly paradigm, as well as concept descriptions and ongoing research and technology development efforts for each of the major elements, are summarized.
NASA's Orbital Space Plane Risk Reduction Strategy
NASA Technical Reports Server (NTRS)
Dumbacher, Dan
2003-01-01
This paper documents the transformation of NASA's Space Launch Initiative (SLI) Second Generation Reusable Launch Vehicle Program under the revised Integrated Space Transportation Plan, announced November 2002. Outlining the technology development approach followed by the original SLI, this paper gives insight into the current risk-reduction strategy that will enable confident development of the Nation's first orbital space plane (OSP). The OSP will perform an astronaut and contingency cargo transportation function, with an early crew rescue capability, thus enabling increased crew size and enhanced science operations aboard the International Space Station. The OSP design chosen for full-scale development will take advantage of the latest innovations American industry has to offer. The OSP Program identifies critical technologies that must be advanced to field a safe, reliable, affordable space transportation system for U.S. access to the Station and low-Earth orbit. OSP flight demonstrators will test crew safety features, validate autonomous operations, and mature thermal protection systems. Additional enabling technologies may be identified during the OSP design process as part of an overall risk-management strategy. The OSP Program uses a comprehensive and evolutionary systems acquisition approach, while applying appropriate lessons learned.
Laplacian scale-space behavior of planar curve corners.
Zhang, Xiaohong; Qu, Ying; Yang, Dan; Wang, Hongxing; Kymer, Jeff
2015-11-01
Scale-space behavior of corners is important for developing an efficient corner detection algorithm. In this paper, we analyze the scale-space behavior of the Laplacian of Gaussian (LoG) operator on a planar curve, which constructs the Laplacian Scale Space (LSS). The analytical expression of a Laplacian Scale-Space map (LSS map) is obtained, demonstrating the Laplacian scale-space behavior of planar curve corners, based on a newly defined unified corner model. With this formula, several Laplacian scale-space behaviors are summarized. Although LSS demonstrates some similarities to Curvature Scale Space (CSS), there are still some differences. First, no new extreme points are generated in the LSS. Second, the behavior of the different cases of the corner model is consistent and simple, which makes it easy to trace a corner through scale space. Finally, the behavior of LSS is verified in an experiment on a digital curve.
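A minimal numerical sketch of building an LSS-style map, assuming the generic approach of convolving the curve coordinates with second-derivative-of-Gaussian kernels at increasing scales (the corner model below is our own simple example, not the paper's unified model):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# gaussian_filter1d(..., order=2) convolves with the second derivative of a
# Gaussian, i.e. a 1-D LoG along the curve parameter.
t = np.linspace(0, 1, 512)
x = np.where(t < 0.5, t, 0.5)            # simple corner model:
y = np.where(t < 0.5, 0.0, t - 0.5)      # two straight segments meeting at 90 deg

sigmas = np.linspace(0.5, 20, 60)        # scale axis of the LSS map
lss = np.empty((sigmas.size, t.size))
for i, s in enumerate(sigmas):
    # LoG response of the curve: sum of second-derivative responses in x and y
    lss[i] = gaussian_filter1d(x, s, order=2) + gaussian_filter1d(y, s, order=2)
# Extrema of each row of `lss` track the corner across scales.
```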
Gething, Peter W; Patil, Anand P; Hay, Simon I
2010-04-01
Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions, such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty (the fidelity of predictions at each mapped pixel) but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate, for the first time, the necessary joint simulation of prevalence values across the very large prediction spaces needed for global-scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum, allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
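Once joint pixel-level realisations are available, the aggregation step itself is straightforward; a schematic with entirely synthetic, independent draws (a real application would use the paper's joint-simulation algorithm to generate properly correlated ones):

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws, n_pixels = 1000, 5000
prevalence = rng.beta(2, 8, size=(n_draws, n_pixels))   # stand-in "joint" draws
population = rng.integers(100, 10_000, size=n_pixels)   # stand-in pixel populations

regional_mean = prevalence.mean(axis=1)                 # one value per joint draw
print("mean prevalence:", regional_mean.mean(),
      "95% CI:", np.percentile(regional_mean, [2.5, 97.5]))

for lo, hi in [(0.0, 0.05), (0.05, 0.4), (0.4, 1.0)]:   # illustrative risk bands
    mask = (prevalence >= lo) & (prevalence < hi)       # (n_draws, n_pixels)
    pop_at_risk = (mask * population).sum(axis=1)       # distribution over draws
    print(f"prevalence in [{lo}, {hi}): mean {pop_at_risk.mean():,.0f} people at risk")
```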
Coupling Fluid and Kinetic Effects in Space Weather: an interdisciplinary task
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; González-Herrero, Diego; Boella, Elisabetta; Siddi, Lorenzo; Cazzola, Emanuele
2017-04-01
Two agents are key to space weather: electromagnetic fields and energetic particles. Magnetic fields carried by plasmas in the solar wind interact with the Earth's magnetosphere, and solar energetic particles produced by solar events or in cosmic rays affect the space environment. Describing both is challenging. Magnetized plasmas are most effectively described by magneto-hydrodynamics (MHD), a fluid theory based on fields defined in space: the electromagnetic fields and the density, velocity and temperature of the plasma. High-energy particles instead need a more detailed approach, kinetic theory, where statistical distributions of particles are governed by the Boltzmann equation. While fluid models are based on ordinary space and time, kinetic models require a six-dimensional space, called phase space, besides time. The two methods are not separate: the processes leading to the production of energetic particles are the same ones that involve space plasmas and fields. Arriving at a single self-consistent model has been the goal of the Swiff project funded by the EC in FP7, and it is now a key goal of the ongoing DEEP-ER project. We present a new approach developed with the goal of extending the reach of kinetic models to the fluid scales. Kinetic models are a higher-order description and all fluid effects are included in them. However, the cost in terms of computing power is much higher, and it has so far been prohibitively expensive to treat space weather events fully kinetically. We have now designed a new method capable of reducing that cost by several orders of magnitude, making it possible for kinetic models to study space weather events [1,2]. We will report the new methodology and show its application to space weather modeling. [1] Giovanni Lapenta, Exactly Energy Conserving Semi-Implicit Particle in Cell Formulation, to appear, JCP, arXiv:1602.06326 [2] Giovanni Lapenta, Diego Gonzalez-Herrero, Elisabetta Boella, Multiple scale kinetic simulations with the energy conserving semi implicit particle in cell (PIC) method, submitted JPP, arXiv:1612.08289
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2017-07-01
Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
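A minimal sketch of the rank-space idea as we read it, with synthetic data and a hypothetical rank_bcsd helper (not the authors' code): rather than applying one time-mean anomaly field, the anomaly field is selected by rank matching against the historical coarse-scale distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n_time, ny, nx = 200, 8, 8
obs_fine = rng.gamma(2.0, 2.0, size=(n_time, ny, nx))       # fine-scale "truth"
disagg = obs_fine.mean(axis=(1, 2), keepdims=True) * np.ones((1, ny, nx))

anomaly = obs_fine / disagg              # historical multiplicative anomaly fields
coarse = disagg[:, 0, 0]                 # one coarse value per historical time step
order = np.argsort(coarse)               # historical fields sorted by coarse value

def rank_bcsd(new_coarse_value):
    # rank of the new value within the historical coarse distribution
    r = np.searchsorted(np.sort(coarse), new_coarse_value)
    r = min(r, n_time - 1)
    return new_coarse_value * anomaly[order[r]]   # rank-matched anomaly field

fine_field = rank_bcsd(5.3)              # downscaled field for one new time step
```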
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.
2012-05-01
In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.
Tensor scale: An analytic approach with efficient computation and applications
Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura
2015-01-01
Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as "tensor scale" using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach, and a precise analytic definition was missing; moreover, application of tensor scale in 3-D using the previous framework is impractical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. An efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented, and the accuracy of results is experimentally examined. A matrix representation of tensor scale is also derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented, and their performance is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert's structure tensor based diffusive filtering algorithms, and the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148
Bias correction of satellite-based rainfall data
NASA Astrophysics Data System (ADS)
Bhattacharya, Biswa; Solomatine, Dimitri
2015-04-01
Limitations in hydro-meteorological data availability in many catchments restrict the possibility of reliable hydrological analyses, especially for near-real-time predictions. However, the variety of satellite-based and meteorological-model rainfall products provides new opportunities. Often, the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many such methods correct the satellite-based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches may first identify a suitable time scale at which different data products are better comparable and then employ a bias correction at that time scale. More elegant methods use quantile-to-quantile bias correction, which, however, assumes that the available (often limited) sample size is sufficient for comparing probabilities of different rainfall products. Analysis of rainfall data and understanding of the process of its generation reveal that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering seasonality. In this research we adopt a bias correction approach that takes into account the variation of rainfall in space and time. A clustering-based approach is employed in which every new data point (e.g. from the Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product; then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is applied. The presented approach considers the space-time variation of rainfall, and as a result the corrected data are more realistic. Keywords: bias correction, rainfall, TRMM, satellite rainfall
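A minimal sketch of such a cluster-wise bias correction, using k-means as a stand-in clustering method and synthetic data (the abstract does not name a specific clustering algorithm):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
sat = rng.gamma(2.0, 3.0, size=(1000, 1))        # satellite rainfall (e.g. TRMM)
gauge = 0.8 * sat[:, 0] + rng.normal(0, 1, 1000) # co-located gauge rainfall
gauge = np.clip(gauge, 0, None)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(sat)
factors = np.array([gauge[km.labels_ == c].mean() / sat[km.labels_ == c].mean()
                    for c in range(4)])          # one multiplicative factor per cluster

new_sat = np.array([[2.5], [12.0]])              # new satellite observations
corrected = new_sat[:, 0] * factors[km.predict(new_sat)]
```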
Clark, M.R.; Gangopadhyay, S.; Hay, L.; Rajagopalan, B.; Wilby, R.
2004-01-01
A number of statistical methods that are used to provide local-scale ensemble forecasts of precipitation and temperature do not contain realistic spatial covariability between neighboring stations or realistic temporal persistence for subsequent forecast lead times. To demonstrate this point, output from a global-scale numerical weather prediction model is used in a stepwise multiple linear regression approach to downscale precipitation and temperature to individual stations located in and around four study basins in the United States. Output from the forecast model is downscaled for lead times up to 14 days. Residuals in the regression equation are modeled stochastically to provide 100 ensemble forecasts. The precipitation and temperature ensembles from this approach have a poor representation of the spatial variability and temporal persistence. The spatial correlations for downscaled output are considerably lower than observed spatial correlations at short forecast lead times (e.g., less than 5 days) when there is high accuracy in the forecasts. At longer forecast lead times, the downscaled spatial correlations are close to zero. Similarly, the observed temporal persistence is only partly present at short forecast lead times. A method is presented for reordering the ensemble output in order to recover the space-time variability in precipitation and temperature fields. In this approach, the ensemble members for a given forecast day are ranked and matched with the rank of precipitation and temperature data from days randomly selected from similar dates in the historical record. The ensembles are then reordered to correspond to the original order of the selection of historical data. Using this approach, the observed intersite correlations, intervariable correlations, and the observed temporal persistence are almost entirely recovered. This reordering methodology also has applications for recovering the space-time variability in modeled streamflow. © 2004 American Meteorological Society.
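A compact sketch of the rank-reordering step (often called the Schaake shuffle), with synthetic data: the ensemble values at each site are rearranged so their ranks match the ranks of historical trajectories, restoring realistic intersite correlations.

```python
import numpy as np

def reorder(ensemble, historical):
    """ensemble, historical: arrays of shape (n_members, n_sites[, n_leads]).

    Returns the ensemble values rearranged so that, at each site, their ranks
    match the ranks of the historical trajectories."""
    ens_sorted = np.sort(ensemble, axis=0)                        # member axis
    ranks = np.argsort(np.argsort(historical, axis=0), axis=0)    # rank of each obs
    return np.take_along_axis(ens_sorted, ranks, axis=0)

rng = np.random.default_rng(4)
ens = rng.normal(size=(100, 5))        # 100 members, 5 stations (uncorrelated)
hist = rng.multivariate_normal(np.zeros(5), 0.5 + 0.5 * np.eye(5), size=100)
shuffled = reorder(ens, hist)
# Intersite correlations of `shuffled` now approximate those of `hist`.
```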
A brain MRI bias field correction method created in the Gaussian multi-scale space
NASA Astrophysics Data System (ADS)
Chen, Mingsheng; Qin, Mingxin
2017-07-01
A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms such as segmentation. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias-field-corrected MR image is retrieved after a gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
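A sketch of the described pipeline under our reading of it, with illustrative weights and gamma (not the paper's values) and a hypothetical correct_bias helper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_bias(img, sigmas=(2, 4, 8, 16), weights=(0.4, 0.3, 0.2, 0.1), gamma=0.8):
    """Multi-scale detail extraction followed by a gamma correction."""
    img = img.astype(float)
    details = [img - gaussian_filter(img, s) for s in sigmas]   # per-scale details
    recombined = sum(w * d for w, d in zip(weights, details))   # weighted sum
    recombined -= recombined.min()                              # shift to >= 0
    recombined /= recombined.max() + 1e-12                      # normalise to [0, 1]
    return recombined ** gamma                                  # gamma correction

corrected = correct_bias(np.random.rand(128, 128))              # stand-in image
```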
Space-coiling fractal metamaterial with multi-bandgaps on subwavelength scale
NASA Astrophysics Data System (ADS)
Man, Xianfeng; Liu, Tingting; Xia, Baizhan; Luo, Zhen; Xie, Longxiang; Liu, Jian
2018-06-01
Acoustic metamaterials are remarkably different from conventional materials, as they can flexibly manipulate and control the propagation of sound waves. Unlike the locally resonant metamaterials introduced in earlier studies, we designed an ultraslow artificial structure with a sound speed much lower than that in air. In this paper, the space-coiling approach is proposed for achieving artificial metamaterials for extremely low-frequency airborne sound. In addition, the self-similar fractal technique is utilized for designing space-coiling Mie-resonance-based metamaterials (MRMMs) to obtain a band-dispersive spectrum. The band structures of two-dimensional (2D) acoustic metamaterials with different fractal levels are illustrated using the finite element method. The low-frequency bandgap can easily be formed, and multi-bandgap properties are observed in high-level fractals. Furthermore, the designed MRMMs with higher-order fractal space coiling show good robustness against irregular arrangement, and the proposed structure was found to modify and control the radiation field arbitrarily. Thus, this work provides useful guidelines for the design of acoustic filtering devices and acoustic wavefront shaping applications on the subwavelength scale.
Lapshin, Rostislav V
2009-06-01
Prospects for a feature-oriented scanning (FOS) approach to investigations of sample surfaces, at the micrometer and nanometer scales, with the use of scanning probe microscopy under space laboratory or planet exploration rover conditions, are examined. The problems discussed include decreasing the sensitivity of the onboard scanning probe microscope (SPM) to temperature variations, providing autonomous operation, and implementing capabilities for remote control, self-checking, self-adjustment, and self-calibration. A number of topical problems of SPM measurements in outer space or on board a planet exploration rover may be solved via the application of recently proposed FOS methods.
NASA Astrophysics Data System (ADS)
Blume, T.; Zehe, E.; Bronstert, A.
2009-07-01
Spatial patterns as well as temporal dynamics of soil moisture have a major influence on runoff generation. The investigation of these dynamics and patterns can thus yield valuable information on hydrological processes, especially in data-scarce or previously ungauged catchments. The combination of spatially scarce but temporally high-resolution soil moisture profiles with episodic and thus temporally scarce moisture profiles at additional locations provides information on spatial as well as temporal patterns of soil moisture at the hillslope transect scale. This approach is better suited to difficult terrain (dense forest, steep slopes) than geophysical techniques and at the same time less cost-intensive than a high-resolution grid of continuously measuring sensors. Rainfall simulation experiments with dye tracers, combined with continuous monitoring of the soil moisture response, allow for visualization of flow processes in the unsaturated zone at these locations. Data were analyzed at different spatio-temporal scales using various graphical methods, such as space-time colour maps (for the event and plot scale) and binary indicator maps (for the long-term and hillslope scale). Annual dynamics of soil moisture and decimeter-scale variability were also investigated. The proposed approach proved to be successful in the investigation of flow processes in the unsaturated zone and showed the importance of preferential flow in the Malalcahuello Catchment, a data-scarce catchment in the Andes of Southern Chile. Fast response times of stream flow indicate that preferential flow observed at the plot scale might also be of importance at the hillslope or catchment scale. Flow patterns were highly variable in space but persistent in time. The most likely explanation for preferential flow in this catchment is a combination of hydrophobicity, small-scale heterogeneity in rainfall due to redistribution in the canopy, and strong gradients in unsaturated conductivities leading to self-reinforcing flow paths.
Mesoscale to Synoptic Scale Cloud Variability
NASA Technical Reports Server (NTRS)
Rossow, William B.
1998-01-01
The atmospheric circulation and its interaction with the oceanic circulation involve non-linear and non-local exchanges of energy and water over a very large range of space and time scales. These exchanges are revealed, in part, by the related variations of clouds, which occur on a similar range of scales as the atmospheric motions that produce them. Collection of comprehensive measurements of the properties of the atmosphere, clouds and surface allows for diagnosis of some of these exchanges. The multi-satellite-network approach of the International Satellite Cloud Climatology Project (ISCCP) comes closest to providing complete coverage of the relevant range of space and time scales over which the clouds, atmosphere and ocean vary. A nearly 15-yr dataset is now available that covers the range from 3 hr and 30 km to decadal and planetary scales. This paper considers three topics: (1) cloud variations at the smallest scales and how they may influence radiation-cloud interactions, (2) cloud variations at "moderate" scales and how they may cause natural climate variability, and (3) cloud variations at the largest scales and how they affect the climate. The emphasis in this discussion is on the more mature subject of cloud-radiation interactions. There is now a need to begin similar detailed diagnostic studies of water exchange processes.
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
2016-05-01
The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., the Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. Finally, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.
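The core trick can be illustrated schematically: a 1-loop-type Fourier-space convolution becomes a pointwise product of correlation functions in configuration space. The sketch below uses direct quadrature for the forward and inverse transforms (the paper's speed-up comes from evaluating exactly these 1-D integrals with FFTs); the input spectrum is a toy, not a realistic cosmology.

```python
import numpy as np
from scipy.special import spherical_jn

def trapz(y, x):
    """Trapezoidal rule along the last axis of y on a non-uniform grid x."""
    return 0.5 * ((y[..., 1:] + y[..., :-1]) * np.diff(x)).sum(axis=-1)

k = np.logspace(-3, 1, 400)
P_lin = k / (1.0 + (k / 0.2) ** 3)                  # toy linear power spectrum
r = np.logspace(-1, 3, 400)

def xi_from_P(P, k, r):                              # xi(r) = Int dk k^2/(2 pi^2) P(k) j0(kr)
    j0 = spherical_jn(0, np.outer(r, k))
    return trapz(k ** 2 * P * j0, k) / (2 * np.pi ** 2)

def P_from_xi(xi, r, k):                             # P(k) = 4 pi Int dr r^2 xi(r) j0(kr)
    j0 = spherical_jn(0, np.outer(k, r))
    return 4 * np.pi * trapz(r ** 2 * xi * j0, r)

xi_lin = xi_from_P(P_lin, k, r)
P_22_like = P_from_xi(xi_lin ** 2, r, k)   # product in r-space = convolution in k-space
```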
NASA Technical Reports Server (NTRS)
1971-01-01
Preliminary design and analysis of purge system concepts and purge subsystem approaches are defined and evaluated. Acceptable purge subsystem approaches were combined into four predesign layouts which are presented for comparison and evaluation. Two predesigns were selected for further detailed design and evaluation for eventual selection of the best design for a full scale test configuration. An operation plan is included as an appendix for reference to shuttle-oriented operational parameters.
Scale relativity: from quantum mechanics to chaotic dynamics.
NASA Astrophysics Data System (ADS)
Nottale, L.
Scale relativity is a new approach to the problem of the origin of fundamental scales and of scaling laws in physics, which consists in generalizing Einstein's principle of relativity to the case of scale transformations of resolutions. We recall here how it leads one to the concept of fractal space-time, and to the introduction of a new complex time derivative operator which allows one to recover the Schrödinger equation and then to generalize it. In high-energy quantum physics, it leads to the introduction of a Lorentzian renormalization group, in which the Planck length is reinterpreted as a lowest, impassable scale, invariant under dilatations. These methods are successively applied to two problems: in quantum mechanics, that of the mass spectrum of elementary particles; in chaotic dynamics, that of the distribution of planets in the Solar System.
An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals to sensors and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high-dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high-dimensional systems, and this work leverages these results and applies this approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope. Since the number of SOs in space is relatively high, each sensor will have a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high-dimensional data: DRL methods have been applied to image processing for the autonomous car application, where a 256x256 RGB image has 196,608 values (256*256*3), which is very high dimensional, and deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high-dimensional problem. This work has the potential to, for the first time, solve the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.
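As a toy illustration of policy-gradient sensor tasking (a single-worker REINFORCE-with-baseline stand-in of our own; A3C adds a learned critic and asynchronous workers), assume a linear softmax policy over which object to observe:

```python
import numpy as np

rng = np.random.default_rng(5)
n_targets, n_feats, lr = 10, 3, 0.05
W = np.zeros((n_targets, n_feats))                 # policy weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for episode in range(2000):
    uncertainty = rng.uniform(1, 10, n_targets)    # synthetic per-object state
    feats = np.stack([uncertainty, np.log(uncertainty), np.ones(n_targets)], axis=1)
    probs = softmax((W * feats).sum(axis=1))
    a = rng.choice(n_targets, p=probs)
    reward = uncertainty[a]                        # reward: uncertainty removed
    baseline = uncertainty.mean()                  # crude variance-reduction baseline
    # REINFORCE gradient for a softmax policy: (1[k=a] - pi_k) * features_k
    grad = -probs[:, None] * feats
    grad[a] += feats[a]
    W += lr * (reward - baseline) * grad
# After training, the policy concentrates probability on high-uncertainty objects.
```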
NASA Technical Reports Server (NTRS)
Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.
1975-01-01
Perceived orderliness in the ground tracks of five aircraft (A/C) during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTs to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds, as opposed to the 5 minutes of simulated flight, using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.
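For readers unfamiliar with the technique, nonmetric multidimensional scaling recovers a low-dimensional configuration from pairwise dissimilarities; a small sketch with synthetic judgments (not the study's data):

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
true_pos = rng.normal(size=(21, 2))                # 21 runs in a hidden 2-D space
diss = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=-1)
diss += rng.normal(0, 0.05, diss.shape)            # noisy "judgments"
diss = (diss + diss.T) / 2                         # enforce symmetry
np.fill_diagonal(diss, 0.0)

# metric=False gives nonmetric MDS (only the rank order of judgments is used)
mds = MDS(n_components=2, dissimilarity="precomputed", metric=False, random_state=0)
coords = mds.fit_transform(diss)                   # recovered perceptual space
print("stress:", mds.stress_)                      # goodness of fit
```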
Li, Zhifei; Qin, Dongliang; Yang, Feng
2014-01-01
In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), a huge design space, a literature review of design space exploration was first conducted. Then, for the design space exploration of an aerospace system of systems, a bilayer mapping method was put forward, based on existing experimental and operating data. Finally, the feasibility of the approach was demonstrated with an illustrative example. With the data mining techniques of rough set theory (RST) and self-organizing maps (SOM), alternatives for the aerospace system-of-systems architecture were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space). Ultimately, the performance space was mapped to the design space, completing the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation. PMID:24790572
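A minimal self-organizing map, written out in plain NumPy as an illustration of the P-space-to-C-space mapping step (toy data, grid size, and schedules; not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(7)
perf = rng.normal(size=(300, 4))                   # synthetic performance vectors
gx, gy = 6, 6
grid = np.array([(i, j) for i in range(gx) for j in range(gy)], dtype=float)
weights = rng.normal(size=(gx * gy, 4))

for t in range(3000):
    lr = 0.5 * (1 - t / 3000)                      # decaying learning rate
    radius = 3.0 * (1 - t / 3000) + 0.5            # decaying neighbourhood radius
    x = perf[rng.integers(len(perf))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2 * radius ** 2))            # neighbourhood function
    weights += lr * h[:, None] * (x - weights)

cells = np.argmin(((weights[None] - perf[:, None]) ** 2).sum(-1), axis=1)
# `cells` assigns every performance vector to a grid node (its C-space proxy).
```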
Samuel A. Cushman; Michael Chase; Curtice Griffin
2005-01-01
Autocorrelation in animal movements can be both a serious nuisance to analysis and a source of valuable information about the scale and patterns of animal behavior, depending on the question and the techniques employed. In this paper we present an approach to analyzing the patterns of autocorrelation in animal movements that provides a detailed picture of seasonal...
Curtis H. Flather; Kenneth R. Wilson; Susan A. Shriner
2009-01-01
Conservation science is concerned with understanding why distribution and abundance patterns of species vary in time and space. Although these patterns have strong signatures tied to the availability of energy and nutrients, variation in climate, physiographic heterogeneity, and differences in the structural complexity of natural vegetation, it is becoming more...
Greenbaum, Gili
2015-09-07
Evaluation of the time scale of the fixation of neutral mutations is crucial to the theoretical understanding of the role of neutral mutations in evolution. Diffusion approximations of the Wright-Fisher model are most often used to derive analytic formulations of genetic drift, as well as the time scales of the fixation of neutral mutations. These approximations require a set of assumptions, most notably that genetic drift is a stochastic process in a continuous allele-frequency space, an assumption appropriate for large populations. Here, equivalent approximations are derived using a coalescent theory approach, which relies on a different set of assumptions than the diffusion approach and adopts a discrete allele-frequency space. Solutions for the mean and variance of the time to fixation of a neutral mutation derived from the two approaches converge for large populations but differ slightly for small populations. A Markov chain analysis of the Wright-Fisher model for small populations is used to evaluate the solutions obtained, showing that both the mean and the variance are better approximated by the coalescent approach. The coalescent approximation represents a tighter upper bound for the mean time to fixation than the diffusion approximation, while the diffusion and coalescent approximations form an upper and lower bound, respectively, for the variance. The converging solutions and the small deviations of the two approaches strongly validate the use of diffusion approximations, but suggest that coalescent theory can provide more accurate approximations for small populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
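A direct Monte Carlo check of such approximations is easy to set up for a haploid Wright-Fisher population (our sketch; the roughly 2N-generation diffusion prediction quoted in the comment is the standard result for a new neutral mutant, stated here as an assumption to verify):

```python
import numpy as np

rng = np.random.default_rng(8)

def fixation_time(N):
    """Generations until a single-copy neutral mutant fixes, or None if lost."""
    count = 1
    for gen in range(100 * N):                    # generous upper bound
        count = rng.binomial(N, count / N)        # one generation of drift
        if count == 0:
            return None                           # lost
        if count == N:
            return gen + 1                        # fixed
    return None

N = 100
times = [t for t in (fixation_time(N) for _ in range(20000)) if t is not None]
print(f"fixations: {len(times)} (expected ~ {20000 // N}, i.e. probability 1/N)")
print(f"mean conditional fixation time: {np.mean(times):.0f} vs ~{2 * N} (diffusion)")
```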
NASA Astrophysics Data System (ADS)
Lebassi-Habtezion, Bereket; Diffenbaugh, Noah S.
2013-10-01
The potential importance of local-scale climate phenomena motivates development of approaches to enable computationally feasible nonhydrostatic climate simulations. To that end, we evaluate the potential viability of nested nonhydrostatic model approaches, using the summer climate of the western United States (WUSA) as a case study. We use the Weather Research and Forecast (WRF) model to carry out five simulations of summer 2010. This suite allows us to test differences between nonhydrostatic and hydrostatic resolutions, single and multiple nesting approaches, and high- and low-resolution reanalysis boundary conditions. WRF simulations were evaluated against station observations, gridded observations, and reanalysis data over domains that cover the 11 WUSA states at nonhydrostatic grid spacing of 4 km and hydrostatic grid spacing of 25 km and 50 km. Results show that the nonhydrostatic simulations more accurately resolve the heterogeneity of surface temperature, precipitation, and wind speed features associated with the topography and orography of the WUSA region. In addition, we find that the simulation in which the nonhydrostatic grid is nested directly within the regional reanalysis exhibits the greatest overall agreement with observational data. Results therefore indicate that further development of nonhydrostatic nesting approaches is likely to yield important insights into the response of local-scale climate phenomena to increases in global greenhouse gas concentrations. However, the biases in regional precipitation, atmospheric circulation, and moisture flux identified in a subset of the nonhydrostatic simulations suggest that alternative nonhydrostatic modeling approaches such as superparameterization and variable-resolution global nonhydrostatic modeling will provide important complements to the nested approaches tested here.
NASA Astrophysics Data System (ADS)
Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.
2017-08-01
It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error, and the employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that spans the ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge-trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled at their respective hierarchical scales by distinct simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer-aided material design of high-performance dielectrics.
Genet, Martin; Houmard, Manuel; Eslava, Salvador; Saiz, Eduardo; Tomsia, Antoni P.
2012-01-01
This paper introduces our approach to modeling the mechanical behavior of cellular ceramics, through the example of calcium phosphate scaffolds made by robocasting for bone-tissue engineering. The Weibull theory is used to deal with the statistical failure of the scaffolds' constitutive rods, and the Sanchez-Palencia theory of periodic homogenization is used to link the rod and scaffold scales. Uniaxial compression of scaffolds and three-point bending of rods were performed to calibrate and validate the model. While calibration based on rod-scale data leads to over-conservative predictions of the scaffold's properties (as the rods' successive failures are not taken into account), we show that, for a given rod diameter, calibration based on scaffold-scale data leads to very satisfactory predictions for a wide range of rod spacings, i.e. of scaffold porosity, as well as for different loading conditions. This work establishes the proposed model as a reliable tool for understanding and optimizing the mechanical properties of cellular ceramics. PMID:23439936
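For reference, the weakest-link Weibull form underlying such rod-failure statistics, in its textbook expression with illustrative parameters (not the paper's calibrated values):

```python
import numpy as np

def failure_probability(sigma, m=8.0, sigma0=50.0, V=1.0, V0=1.0):
    """Weibull weakest-link failure probability under stress sigma:
    P_f = 1 - exp(-(V/V0) * (sigma/sigma0)**m)."""
    return 1.0 - np.exp(-(V / V0) * (sigma / sigma0) ** m)

sigma = np.linspace(10, 90, 5)
print(failure_probability(sigma))          # rises steeply near sigma0

# Weakest-link size effect: doubling the stressed volume lowers the stress at
# a given failure probability by the factor 2**(-1/m).
print(2 ** (-1 / 8.0))
```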
NASA Astrophysics Data System (ADS)
Wells, James D.; Zhang, Zhengkang
2018-05-01
Dismissing traditional naturalness concerns while embracing the Higgs boson mass measurement and unification motivates careful analysis of trans-TeV supersymmetric theories. We take an effective field theory (EFT) approach, matching the Minimal Supersymmetric Standard Model (MSSM) onto the Standard Model (SM) EFT by integrating out heavy superpartners, and evolving MSSM and SMEFT parameters according to renormalization group equations in each regime. Our matching calculation is facilitated by the recent covariant diagrams formulation of functional matching techniques, with the full one-loop SUSY threshold corrections encoded in just 30 diagrams. Requiring consistent matching onto the SMEFT with its parameters (those in the Higgs potential in particular) measured at low energies, and in addition requiring unification of bottom and tau Yukawa couplings at the scale of gauge coupling unification, we detail the solution space of superpartner masses from the TeV scale to well above. We also provide detailed views of parameter space where Higgs coupling measurements have probing capability at future colliders beyond the reach of direct superpartner searches at the LHC.
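The renormalization group evolution step can be illustrated at one loop with the standard SM gauge beta coefficients (textbook values in GUT normalization; the paper's full analysis includes higher-loop running and threshold corrections, omitted here, and the couplings at MZ below are approximate):

```python
import numpy as np
from scipy.integrate import solve_ivp

b = np.array([41 / 10, -19 / 6, -7])   # one-loop SM beta coefficients (g1, g2, g3)

def rge(t, g):                         # t = ln(mu / MZ)
    return b * g ** 3 / (16 * np.pi ** 2)

g0 = np.array([0.46, 0.65, 1.22])      # approximate gauge couplings at MZ
sol = solve_ivp(rge, (0.0, np.log(1e14)), g0, dense_output=True)

for mu in (1e3, 1e10, 1e14):           # renormalization scale in GeV
    g1, g2, g3 = sol.sol(np.log(mu / 91.19))
    print(f"mu = {mu:.0e} GeV: g1={g1:.3f}, g2={g2:.3f}, g3={g3:.3f}")
```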
NASA Astrophysics Data System (ADS)
Weingartner, Nicholas; Pueblo, Chris; Nogueira, Flavio; Kelton, Kenneth; Nussinov, Zohar
A fundamental understanding of the phenomenology of the metastable supercooled liquid state remains elusive. Two of the most pressing questions in this field are how to describe the temperature dependence of the viscosity, and determine whether or not the dynamical behaviors are universal. To address these questions, we have devised a simple first-principles classical phase space description of supercooled liquids that (along with a complementary quantum approach) predicts a unique functional form for the viscosity which relies on only a single parameter. We tested this form for 45 liquids of all types and fragilities, and have demonstrated that it provides a statistically significant fit to all liquids. Additionally, by scaling the viscosity of all studied liquids using the single parameter, we have observed a complete collapse of the data of all 45 liquids to a single scaling curve over 16 decades, suggesting an underlying universality in the dynamics of supercooled liquids. In this talk I will outline the basic approach of our model, as well as demonstrate the quality of the model performance and collapse of the data.
NASA Astrophysics Data System (ADS)
Peigney, B. E.; Larroche, O.; Tikhonchuk, V.
2014-12-01
In this article, we study the hydrodynamics and burn of the thermonuclear fuel in inertial confinement fusion pellets at the ion kinetic level. The analysis is based on a two-velocity-scale Vlasov-Fokker-Planck kinetic model that is specially tailored to treat fusion products (suprathermal α-particles) in a self-consistent manner with the thermal bulk. The model assumes spherical symmetry in configuration space and axial symmetry in velocity space around the mean flow velocity. A typical hot-spot ignition design is considered. Compared with fluid simulations where a multi-group diffusion scheme is applied to model α transport, the full ion-kinetic approach reveals significant non-local effects on the transport of energetic α-particles. This has a direct impact on hydrodynamic spatial profiles during combustion: the hot spot reactivity is reduced, while the inner dense fuel layers are pre-heated by the escaping α-suprathermal particles, which are transported farther out of the hot spot. We show how the kinetic transport enhancement of fusion products leads to a significant reduction of the fusion yield.
Estimation of critical behavior from the density of states in classical statistical models
NASA Astrophysics Data System (ADS)
Malakis, A.; Peratzakis, A.; Fytas, N. G.
2004-12-01
We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) in large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and by using this restricted part we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully for the estimation of the scaling behavior of specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.
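A bare-bones Wang-Landau iteration for a small 2-D Ising lattice, to make the sampling being extended concrete (toy flatness test and modification-factor schedule; our sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(9)
L = 8
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    """Nearest-neighbour Ising energy with periodic boundaries."""
    return -int((s * np.roll(s, 1, 0)).sum() + (s * np.roll(s, 1, 1)).sum())

E_levels = np.arange(-2 * L * L, 2 * L * L + 1, 4)    # allowed energies
index = {E: i for i, E in enumerate(E_levels)}
logg = np.zeros(E_levels.size)                        # running estimate of ln g(E)
hist = np.zeros(E_levels.size)

E, lnf = energy(spins), 1.0                           # lnf = ln(modification factor)
for sweep in range(20000):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        dE = 2 * spins[i, j] * (spins[(i+1) % L, j] + spins[(i-1) % L, j]
                                + spins[i, (j+1) % L] + spins[i, (j-1) % L])
        new = E + dE
        # accept with probability min(1, g(E_old) / g(E_new))
        if np.log(rng.random()) < logg[index[E]] - logg[index[new]]:
            spins[i, j] *= -1
            E = new
        logg[index[E]] += lnf
        hist[index[E]] += 1
    if sweep % 1000 == 999 and hist[hist > 0].min() > 0.8 * hist[hist > 0].mean():
        lnf, hist = lnf / 2, np.zeros_like(hist)      # histogram flat: refine lnf
```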
Pore space analysis of NAPL distribution in sand-clay media
Matmon, D.; Hayden, N.J.
2003-01-01
This paper introduces a conceptual model of clays and non-aqueous phase liquids (NAPLs) at the pore scale that has been developed from a mathematical unit cell model and direct micromodel observation and measurement of clay-containing porous media. The mathematical model uses a unit cell concept with uniform spherical grains for simulating the sand in the sand-clay matrix (≤10% clay). Micromodels made with glass slides and including different clay-containing porous media were used to investigate the two clays (kaolinite and montmorillonite) and the NAPL distribution within the pore space. The results were used to understand the distribution of NAPL advancing into initially saturated sand and sand-clay media, and provided a detailed analysis of the pore-scale geometry, pore size distribution, NAPL entry pressures, and the effect of clay on this geometry. Interesting NAPL saturation profiles were observed as a result of the complexity of the pore space geometry with the different packing angles and the presence of clays. The unit cell approach has applications for enhancing the mechanistic understanding and conceptualization, both visually and mathematically, of pore-scale processes such as NAPL and clay distribution. © 2003 Elsevier Science Ltd. All rights reserved.
Visualizing and Quantifying Pore Scale Fluid Flow Processes With X-ray Microtomography
NASA Astrophysics Data System (ADS)
Wildenschild, D.; Hopmans, J. W.; Vaz, C. M.; Rivers, M. L.
2001-05-01
When using mathematical models based on Darcy's law it is often necessary to simplify geometry, physics, or both, and the capillary bundle-of-tubes approach neglects a fundamentally important characteristic of porous solids, namely the interconnectedness of the pore space. New approaches to pore-scale modeling that arrange capillary tubes in two- or three-dimensional pore space have been and are still under development: network models generally represent the pore bodies by spheres while the pore throats are usually represented by cylinders or conical shapes. Lattice Boltzmann approaches numerically solve the Navier-Stokes equations in a realistic microscopically disordered geometry, which offers the ability to study the microphysical basis of macroscopic flow without the need for a simplified geometry or physics. In addition to these developments in numerical modeling techniques, new theories have proposed that interfacial area should be considered as a primary variable in modeling of a multi-phase flow system. In the wake of this progress emerges an increasing need for new ways of evaluating pore-scale models, and for techniques that can resolve and quantify phase interfaces in porous media. The mechanisms operating at the pore scale cannot be measured with traditional experimental techniques; however, x-ray computerized microtomography (CMT) provides non-invasive observation of, for instance, changing fluid phase content and distribution on the pore scale. Interfacial areas have thus far been measured indirectly, but with the advances in high-resolution imaging using CMT it is possible to track interfacial area and curvature as a function of phase saturation or capillary pressure. We present results obtained at the synchrotron-based microtomography facility (GSECARS, sector 13) at the Advanced Photon Source at Argonne National Laboratory. Cylindrical sand samples of either 6 or 1.5 mm diameter were scanned at different stages of drainage and for varying boundary conditions. A significant difference in fluid saturation and phase distribution was observed for different drainage conditions, clearly showing preferential flow and a dependence on the applied flow rate. For the 1.5 mm sample individual pores and water/air interfaces could be resolved and quantified using image analysis techniques. Use of the Advanced Photon Source was supported by the U.S. Department of Energy, Basic Energy Sciences, Office of Science, under Contract No. W-31-109-Eng-38.
Novel trends in pair distribution function approaches on bulk systems with nanoscale heterogeneities
Emil S. Bozin; Billinge, Simon J. L.
2016-07-29
Novel materials for high performance applications increasingly exhibit structural order on the nanometer length scale, a domain where crystallography, the basis of Rietveld refinement, fails [1]. In such instances the total scattering approach, which treats Bragg and diffuse scattering on an equal basis, is a powerful approach. In recent years, the analysis of total scattering data has become an invaluable tool and the gold standard for studying nanocrystalline, nanoporous, and disordered crystalline materials. The data may be analyzed in reciprocal space directly, or Fourier transformed to the real-space atomic pair distribution function (PDF), and this intuitive function examined for local structural information. Here we give a number of illustrative examples, for convenience picked from our own work, of recent developments and applications of total scattering and PDF analysis to novel complex materials. There are many other wonderful examples from the work of others.
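A minimal sketch of what the real-space PDF measures: a pair-distance histogram normalized by the ideal-gas expectation, computed here on synthetic coordinates (box size, atom count, and bin width are assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    box = 20.0                              # cubic box edge (arbitrary units)
    xyz = rng.uniform(0, box, (500, 3))     # 500 synthetic "atoms"

    dr, rmax = 0.1, 8.0                     # rmax < box/2 keeps minimum image valid
    bins = np.arange(0.0, rmax + dr, dr)
    counts = np.zeros(len(bins) - 1)

    n = len(xyz)
    for i in range(n - 1):
        d = xyz[i+1:] - xyz[i]
        d -= box * np.round(d / box)        # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins)[0]

    # normalize each shell by the ideal-gas pair count to obtain g(r)
    rho = n / box**3
    shell = 4.0 / 3.0 * np.pi * (bins[1:]**3 - bins[:-1]**3)
    g = 2.0 * counts / (n * rho * shell)    # factor 2: each pair counted once

In practice the PDF is obtained by Fourier transforming measured total scattering data rather than from known coordinates; the sketch only shows the quantity being modeled.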
Report of the 90-day study on human exploration of the Moon and Mars
NASA Technical Reports Server (NTRS)
1989-01-01
The basic mission sequence to achieve the President's goal is clear: begin with Space Station Freedom in the 1990s, return to the Moon to stay early in the next century, and then journey to Mars. Five reference approaches are modeled, building on past programs and recent studies, to reflect wide-ranging strategies that incorporate varied program objectives, schedules, technologies, and resource availabilities. The reference approaches are (1) balance and speed; (2) the earliest possible landing on Mars; (3) reduced logistics from Earth; (4) schedule adapted to Space Station Freedom; and (5) reduced scales. The study and programmatic assessment have shown that the Human Exploration Initiative is indeed a feasible approach to achieving the President's goals. Several reasonable alternatives exist, but a long-range commitment and significant resources will be required. However, the value of the program and the benefits to the Nation are immeasurable.
Creating targeted initial populations for genetic product searches in heterogeneous markets
NASA Astrophysics Data System (ADS)
Foster, Garrett; Turner, Callaway; Ferguson, Scott; Donndelinger, Joseph
2014-12-01
Genetic searches often use randomly generated initial populations to maximize diversity and enable a thorough sampling of the design space. While many of these initial configurations perform poorly, the trade-off between population diversity and solution quality is typically acceptable for small-scale problems. Navigating complex design spaces, however, often requires computationally intelligent approaches that improve solution quality. This article draws on research advances in market-based product design and heuristic optimization to strategically construct 'targeted' initial populations. Targeted initial designs are created using respondent-level part-worths estimated from discrete choice models. These designs are then integrated into a traditional genetic search. Two case study problems of differing complexity are presented to illustrate the benefits of this approach. In both problems, targeted populations lead to computational savings and product configurations with improved market share of preferences. Future research efforts to tailor this approach and extend it towards multiple objectives are also discussed.
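A sketch of the seeding step under stated assumptions (part-worth array shape, mutation rate, and all names are hypothetical): each targeted design is a sampled respondent's utility-maximizing configuration, lightly mutated to retain diversity in the initial population.

    import numpy as np

    rng = np.random.default_rng(2)
    n_resp, n_attr, n_lvl = 200, 6, 4
    # respondent-level part-worth utilities per attribute level, e.g. estimated
    # from a hierarchical Bayes discrete choice model (synthetic here)
    partworths = rng.normal(size=(n_resp, n_attr, n_lvl))

    def targeted_population(pop_size, mutate_p=0.2):
        """Build 'targeted' initial designs from respondent part-worths."""
        pop = []
        for _ in range(pop_size):
            r = rng.integers(n_resp)
            design = partworths[r].argmax(axis=1)      # best level per attribute
            mask = rng.random(n_attr) < mutate_p       # small random perturbation
            design[mask] = rng.integers(n_lvl, size=mask.sum())
            pop.append(design)
        return np.array(pop)

    seed_pop = targeted_population(50)                 # feed this to the GA

The mutation step is the diversity/quality trade-off discussed in the abstract: with mutate_p = 0 every seed is some respondent's ideal product; with mutate_p = 1 the population is fully random.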
The color-vision approach to emotional space: cortical evoked potential data.
Boucsein, W; Schaefer, F; Sokolov, E N; Schröder, C; Furedy, J J
2001-01-01
A framework for accounting for emotional phenomena proposed by Sokolov and Boucsein (2000) employs conceptual dimensions that parallel those of hue, brightness, and saturation in color vision. The approach, which employs the concepts of emotional quality, intensity, and saturation, has been supported by psychophysical emotional scaling data gathered from a few trained observers. We report cortical evoked potential data obtained during the change between different emotions expressed in schematic faces. Twenty-five subjects (13 male, 12 female) were presented with a positive, a negative, and a neutral computer-generated face with random interstimulus intervals in a within-subjects design, together with four meaningful and four meaningless control stimuli made up from the same elements. Frontal, central, parietal, and temporal ERPs were recorded from each hemisphere. Statistically significant outcomes in the P300 and N200 range support the potential fruitfulness of the proposed color-vision-model-based approach to human emotional space.
Assessing sufficiency of thermal riverscapes for resilient ...
Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance, and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small- and large-scale thermal features to salmon populations has been challenged both by the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions and by the difficulty of integrating thermal regimes into population models. We attempt to address these challenges by using newly available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
Phase Space Dissimilarity Measures for Structural Health Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bubacz, Jacob A; Chmielewski, Hana T; Pape, Alexander E
A novel method for structural health monitoring (SHM), known as the Phase Space Dissimilarity Measures (PSDM) approach, is proposed and developed. The patented PSDM approach has already been developed and demonstrated for a variety of equipment and biomedical applications. Here, we investigate SHM of bridges via analysis of time serial accelerometer measurements. This work has four aspects. The first is algorithm scalability, which was found to scale linearly from one processing core to four cores. Second, the same data are analyzed to determine how the use of the PSDM approach affects sensor placement. We found that a relatively low-density placement sufficiently captures the dynamics of the structure. Third, the same data are analyzed by unique combinations of accelerometer axes (vertical, longitudinal, and lateral with respect to the bridge) to determine how the choice of axes affects the analysis. The vertical axis is found to provide satisfactory SHM data. Fourth, statistical methods were investigated to validate the PSDM approach for this application, yielding statistically significant results.
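A hedged sketch of a phase-space dissimilarity measure of this general family, not the patented PSDM algorithm itself: delay-embed each accelerometer record, coarse-grain the embedded points into symbols, and compare visit distributions (an L1 distance here; embedding and binning parameters are assumptions):

    import numpy as np

    def embed(x, dim=3, lag=5):
        # time-delay embedding into dim-dimensional phase space
        n = len(x) - (dim - 1) * lag
        return np.column_stack([x[i*lag : i*lag + n] for i in range(dim)])

    def state_distribution(x, dim=3, lag=5, nbins=6):
        pts = embed(x, dim, lag)
        # discretize each coordinate into nbins roughly equiprobable symbols
        edges = np.quantile(x, np.linspace(0, 1, nbins + 1)[1:-1])
        sym = np.digitize(pts, edges)                   # ints in 0..nbins-1
        codes = (sym * nbins ** np.arange(dim)).sum(1)  # one integer per state
        p = np.bincount(codes, minlength=nbins**dim).astype(float)
        return p / p.sum()

    def dissimilarity(x, y, **kw):
        p, q = state_distribution(x, **kw), state_distribution(y, **kw)
        return 0.5 * np.abs(p - q).sum()

    rng = np.random.default_rng(3)
    t = np.linspace(0, 200, 5000)
    baseline = np.sin(t) + 0.1 * rng.normal(size=t.size)        # healthy record
    damaged = np.sin(1.07 * t) + 0.1 * rng.normal(size=t.size)  # shifted dynamics
    print(dissimilarity(baseline, damaged))

A rising dissimilarity relative to a baseline record is the condition indicator; in practice the bin edges would be fixed from the baseline so that records are compared on a common partition.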
Xu, Hongwei; Logan, John R.; Short, Susan E.
2014-01-01
Research on neighborhoods and health increasingly acknowledges the need to conceptualize, measure, and model spatial features of social and physical environments. In ignoring underlying spatial dynamics, we run the risk of biased statistical inference and misleading results. In this paper, we propose an integrated multilevel-spatial approach for Poisson models of discrete responses. In an empirical example of child mortality in 1880 Newark, New Jersey, we compare this multilevel-spatial approach with the more typical aspatial multilevel approach. Results indicate that spatially-defined egocentric neighborhoods, or distance-based measures, outperform administrative areal units, such as census units. In addition, although results did not vary by specific definitions of egocentric neighborhoods, they were sensitive to geographic scale and modeling strategy. Overall, our findings confirm that adopting a spatial-multilevel approach enhances our ability to disentangle the effect of space from that of place, and point to the need for more careful spatial thinking in population research on neighborhoods and health. PMID:24763980
Controlling nitrogen migration through micro-nano networks
NASA Astrophysics Data System (ADS)
Cai, Dongqing; Wu, Zhengyan; Jiang, Jiang; Wu, Yuejin; Feng, Huiyun; Brown, Ian G.; Chu, Paul K.; Yu, Zengliang
2014-01-01
Nitrogen fertilizer unabsorbed by crops eventually discharges into the environment through runoff, leaching and volatilization, resulting in three-dimensional (3D) pollution spanning from underground into space. Here we describe an approach for controlling nitrogen loss, developed using loss control fertilizer (LCF) prepared by adding modified natural nanoclay (attapulgite) to traditional fertilizer. In the aqueous phase, LCF self-assembles to form 3D micro/nano networks via hydrogen bonds and other weak interactions, obtaining a higher nitrogen spatial scale so that it is retained by a soil filtering layer. Thus nitrogen loss is reduced and sufficient nutrition for crops is supplied, while the pollution risk of the fertilizer is substantially lowered. As such, self-fabrication of nano-material was used to manipulate the nitrogen spatial scale, which provides a novel and promising approach for the research and control of the migration of other micro-scaled pollutants in environmental medium.
Maximum one-shot dissipated work from Rényi divergences
NASA Astrophysics Data System (ADS)
Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko
2018-05-01
Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
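A small sketch of the order-infinity Rényi divergence the abstract refers to, D∞(P‖Q) = log max_i(P_i/Q_i), with illustrative distributions over work values (the numbers are toy inputs, not the paper's examples):

    import numpy as np

    def renyi_divergence(p, q, alpha):
        """Rényi divergence D_alpha(P||Q); natural log convention assumed."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        if np.isinf(alpha):
            return np.log(np.max(p / q))               # order-infinity limit
        return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

    p = np.array([0.5, 0.3, 0.2])   # e.g. a finite-time work distribution
    q = np.array([0.4, 0.4, 0.2])   # e.g. a quasistatic reference
    print(renyi_divergence(p, q, np.inf))   # proportional to max dissipated work
    print(renyi_divergence(p, q, 2.0))      # a finite order, for comparison

The order-∞ divergence is governed entirely by the worst-case ratio P_i/Q_i, which is why it bounds the maximum, rather than average, dissipated work.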
Transition Manifolds of Complex Metastable Systems
NASA Astrophysics Data System (ADS)
Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof
2018-04-01
We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting their effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.
Nanoelectronics: Opportunities for future space applications
NASA Technical Reports Server (NTRS)
Frazier, Gary
1995-01-01
Further improvements in the performance of integrated electronics will eventually halt due to practical fundamental limits on our ability to downsize transistors and interconnect wiring. Avoiding these limits requires a revolutionary approach to switching device technology and computing architecture. Nanoelectronics, the technology of exploiting physics on the nanometer scale for computation and communication, attempts to avoid conventional limits by developing new approaches to switching, circuitry, and system integration. This presentation overviews the basic principles that operate on the nanometer scale and how they can be assembled into practical devices and circuits. Quantum resonant tunneling (RT) is used as the centerpiece of the overview since RT devices already operate at high temperature (120 degrees C) and can be scaled, in principle, to a few nanometers in semiconductors. Near- and long-term applications of GaAs and silicon quantum devices are suggested for signal and information processing, memory, optoelectronics, and radio frequency (RF) communication.
MAPPING GROWTH AND GRAVITY WITH ROBUST REDSHIFT SPACE DISTORTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwan, Juliana; Lewis, Geraint F.; Linder, Eric V.
2012-04-01
Redshift space distortions (RSDs) caused by galaxy peculiar velocities provide a window onto the growth rate of large-scale structure and a method for testing general relativity. We investigate, through a comparison of N-body simulations to various extensions of perturbation theory beyond the linear regime, the robustness of cosmological parameter extraction, including the gravitational growth index γ. We find that the Kaiser formula and some perturbation theory approaches bias the growth rate by 1σ or more relative to the fiducial at scales as large as k > 0.07 h Mpc⁻¹. This bias propagates to estimates of the gravitational growth index as well as Ωm and the equation-of-state parameter, and presents a significant challenge to modeling RSDs. We also determine an accurate fitting function for a combination of line-of-sight damping and higher order angular dependence that allows robust modeling of the redshift space power spectrum to substantially higher k.
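For reference, a sketch of the Kaiser formula being tested, with the growth rate parameterized through the growth index γ as f = Ωm(z)^γ (the power spectrum and parameter values below are toy inputs, not the paper's):

    import numpy as np

    def growth_rate(omega_m_z, gamma=0.55):
        # gamma is approximately 0.55 for general relativity
        return omega_m_z ** gamma

    def kaiser_pk(pk_real, mu, f, b=1.0):
        """Linear redshift-space spectrum P_s(k, mu) = (b + f mu^2)^2 P_m(k).
        Valid only on large scales; the paper finds it already biases the
        growth rate for k > 0.07 h/Mpc."""
        return (b + f * mu**2) ** 2 * pk_real

    k = np.logspace(-2, 0, 50)                # wavenumbers in h/Mpc
    pk = 1e4 * (k / 0.05) ** -1.5             # toy real-space spectrum
    print(kaiser_pk(pk, mu=0.8, f=growth_rate(0.3))[:3])

The degeneracy between b and f in this expression is part of why nonlinear corrections and fitting functions are needed before the growth index can be extracted robustly.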
Large-scale kinetic energy spectra from Eulerian analysis of EOLE wind data
NASA Technical Reports Server (NTRS)
Desbois, M.
1975-01-01
A data set of 56,000 winds determined from the horizontal displacements of EOLE balloons at the 200 mb level in the Southern Hemisphere during the period October 1971-February 1972 is utilized for the computation of planetary- and synoptic-scale kinetic energy space spectra. However, the random distribution of measurements in space and time presents some problems for the spectral analysis. Two different approaches are used, i.e., a harmonic analysis of daily wind values at equidistant points obtained by space-time interpolation of the data, and a correlation method using the direct measurements. Both methods give similar results for small wavenumbers, but the second is more accurate for higher wavenumbers (k ≥ 10). The spectra show a maximum at wavenumbers 5 and 6 due to baroclinic instability and then decrease for high wavenumbers up to wavenumber 35 (which is the limit of the analysis), according to the inverse power law k^(-p), with p close to 3.
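A sketch of the final fitting step, estimating p in E(k) ∝ k^(-p) by a log-log least-squares fit over the high-wavenumber range quoted above (the spectrum itself is synthetic):

    import numpy as np

    rng = np.random.default_rng(4)
    k = np.arange(1.0, 36.0)                      # zonal wavenumbers 1..35
    E = 50.0 * k**-3.0 * (1.0 + 0.1 * rng.normal(size=k.size))

    sel = (k >= 10) & (k <= 35)                   # fit beyond the spectral peak
    slope, _ = np.polyfit(np.log(k[sel]), np.log(E[sel]), 1)
    print(f"estimated p = {-slope:.2f}")          # close to 3 for these data

Restricting the fit to k ≥ 10 matters because the baroclinic peak at wavenumbers 5-6 would otherwise flatten the estimated slope.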
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-02
We present the Clenshaw–Curtis Spectral Quadrature (SQ) method for real-space O(N) Density Functional Theory (DFT) calculations. In this approach, all quantities of interest are expressed as bilinear forms or sums over bilinear forms, which are then approximated by spatially localized Clenshaw–Curtis quadrature rules. This technique is identically applicable to both insulating and metallic systems, and in conjunction with local reformulation of the electrostatics, enables the O(N) evaluation of the electronic density, energy, and atomic forces. The SQ approach also permits infinite-cell calculations without recourse to Brillouin zone integration or large supercells. We employ a finite difference representation in order to exploit the locality of electronic interactions in real space, enable systematic convergence, and facilitate large-scale parallel implementation. In particular, we derive expressions for the electronic density, total energy, and atomic forces that can be evaluated in O(N) operations. We demonstrate the systematic convergence of energies and forces with respect to quadrature order as well as truncation radius to the exact diagonalization result. In addition, we show convergence with respect to mesh size to established O(N³) planewave results. In conclusion, we establish the efficiency of the proposed approach for high temperature calculations and discuss its particular suitability for large-scale parallel computation.
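The computational core of such methods is evaluating bilinear forms v^T f(H) v without diagonalization. Below is a sketch using a Chebyshev expansion of the Fermi-Dirac function (Clenshaw-Curtis quadrature is built on the same Chebyshev nodes); the dense toy Hamiltonian, expansion order, and temperature are assumptions, and real implementations use sparse operators and cheap spectral bounds:

    import numpy as np

    def chebyshev_bilinear(H, v, f, order=80):
        # scale H so its spectrum lies in [-1, 1]
        lmin, lmax = np.linalg.eigvalsh(H)[[0, -1]]   # exact bounds for the toy
        c, d = (lmax + lmin) / 2.0, (lmax - lmin) / 2.0
        Hs = (H - c * np.eye(len(H))) / d
        # Chebyshev coefficients of f on [-1, 1] from the cosine-node rule
        j = np.arange(order + 1)
        x = np.cos(np.pi * (j + 0.5) / (order + 1))
        fx = f(c + d * x)
        coef = np.array([2.0 / (order + 1) *
                         np.sum(fx * np.cos(np.pi * m * (j + 0.5) / (order + 1)))
                         for m in range(order + 1)])
        coef[0] *= 0.5
        # three-term recurrence: T0 v = v, T1 v = Hs v, T_{m+1} = 2 Hs T_m - T_{m-1}
        t0, t1 = v, Hs @ v
        acc = coef[0] * t0 + coef[1] * t1
        for m in range(2, order + 1):
            t0, t1 = t1, 2 * (Hs @ t1) - t0
            acc += coef[m] * t1
        return v @ acc

    rng = np.random.default_rng(5)
    A = rng.normal(size=(50, 50)); H = (A + A.T) / 2      # toy Hamiltonian
    fermi = lambda e, mu=0.0, beta=2.0: 1.0 / (1.0 + np.exp(beta * (e - mu)))
    v = np.zeros(50); v[0] = 1.0                          # density at "site" 0
    print(chebyshev_bilinear(H, v, fermi))

Only matrix-vector products appear, which is what makes the evaluation local and O(N) once interactions are truncated; higher temperature smooths the Fermi function and lowers the order needed, consistent with the efficiency claim for high-temperature calculations.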
NASA Astrophysics Data System (ADS)
Matthaeus, W. H.; Yang, Y.; Servidio, S.; Parashar, T.; Chasapis, A.; Roytershteyn, V.
2017-12-01
Turbulence cascades transfer energy from large scales to small scales, but what happens once kinetic scales are reached? In a collisional medium, viscosity and resistivity remove fluctuation energy in favor of heat. In the weakly collisional solar wind (or corona, magnetosheath, etc.), the sequence of events must be different. Heating occurs, but through what mechanisms? In standard approaches, dissipation occurs through linear wave modes or instabilities and one seeks to identify them. A complementary view is that the cascade leads to several channels of energy conversion, interchange, and spatial rearrangement that collectively lead to the production of internal energy. Channels may be described using compressible MHD and multispecies Vlasov-Maxwell formulations. Key steps are: conservative rearrangement of energy in space; parallel incompressible and compressible cascades, i.e., conservative rearrangement in scale; electromagnetic work on particles that drives flows, both macroscopic and microscopic; and pressure-stress interactions, both compressive and shear-like, that produce internal energy. Examples are given from MHD and PIC simulations and from MMS observations. A more subtle issue is how entropy is related to this degeneration (or "dissipation") of macroscopic, fluid-scale fluctuations. We discuss this in terms of Boltzmann and thermodynamic entropies, and velocity-space effects of collisions.
Effective pore-scale dispersion upscaling with a correlated continuous time random walk approach
NASA Astrophysics Data System (ADS)
Le Borgne, T.; Bolster, D.; Dentz, M.; de Anna, P.; Tartakovsky, A.
2011-12-01
We investigate the upscaling of dispersion from a pore-scale analysis of Lagrangian velocities. A key challenge in the upscaling procedure is to relate the temporal evolution of spreading to the pore-scale velocity field properties. We test the hypothesis that one can represent Lagrangian velocities at the pore scale as a Markov process in space. The resulting effective transport model is a continuous time random walk (CTRW) characterized by a correlated random time increment, here denoted as correlated CTRW. We consider a simplified sinusoidal wavy channel model as well as a more complex heterogeneous pore space. For both systems, the predictions of the correlated CTRW model, with parameters defined from the velocity field properties (both distribution and correlation), are found to be in good agreement with results from direct pore-scale simulations over preasymptotic and asymptotic times. In this framework, the nontrivial dependence of dispersion on the pore boundary fluctuations is shown to be related to the competition between distribution and correlation effects. In particular, explicit inclusion of spatial velocity correlation in the effective CTRW model is found to be important to represent incomplete mixing in the pore throats.
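A minimal sketch of the correlated CTRW idea: velocities form a Markov chain over fixed space increments, and the waiting time for each increment is dx/v, so persistent slow velocities generate broad arrival-time distributions (the velocity classes and transition matrix below are illustrative, not calibrated to any pore geometry):

    import numpy as np

    rng = np.random.default_rng(6)
    n_part, n_steps, dx = 2000, 400, 1.0
    v_states = np.array([0.05, 0.2, 1.0, 3.0])     # velocity classes (arbitrary)
    # strong diagonal -> velocities correlated over successive space steps
    P = np.full((4, 4), 0.05) + np.diag([0.85] * 4)
    P /= P.sum(axis=1, keepdims=True)
    cum = P.cumsum(axis=1)

    state = rng.integers(4, size=n_part)
    t = np.zeros(n_part)
    arrival = []
    for _ in range(n_steps):
        t += dx / v_states[state]                  # waiting time coupled to velocity
        arrival.append(t.copy())
        # sample next velocity class from the Markov transition matrix
        state = (rng.random((n_part, 1)) > cum[state]).sum(axis=1)

    arrival = np.array(arrival)                    # (n_steps, n_part)
    # spread of arrival times at each travel distance is the dispersion signal
    print(arrival.std(axis=1)[::100])

Setting the diagonal of P to 1/4 removes the spatial velocity correlation and recovers an uncorrelated CTRW, which is the comparison that exposes the incomplete-mixing effect discussed in the abstract.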
Alcohol expectancy multiaxial assessment: a memory network-based approach.
Goldman, Mark S; Darkes, Jack
2004-03-01
Despite several decades of activity, alcohol expectancy research has yet to merge measurement approaches with developing memory theory. This article offers an expectancy assessment approach built on a conceptualization of expectancy as an information processing network. The authors began with multidimensional scaling models of expectancy space, which served as heuristics to suggest confirmatory factor analytic dimensional models for entry into covariance structure predictive models. It is argued that this approach permits a relatively thorough assessment of the broad range of potential expectancy dimensions in a format that is very flexible in terms of instrument length and specificity versus breadth of focus. (© 2004 APA, all rights reserved)
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm, σ8, and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
Garber, Paul A; Porter, Leila M
2014-05-01
Recent studies of spatial memory in wild nonhuman primates indicate that foragers may rely on a combination of navigational strategies to locate nearby and distant feeding sites. When traveling in large-scale space, tamarins are reported to encode spatial information in the form of a route-based map. However, little is known concerning how wild tamarins navigate in small-scale space (between feeding sites located at a distance of ≤60 m). Therefore, we collected data on range use, diet, and the angle and distance traveled to visit sequential feeding sites in the same group of habituated Bolivian saddleback tamarins (Saguinus fuscicollis weddelli) in 2009 and 2011. For 7-8 hr a day for 54 observation days, we recorded the location of the study group at 10 min intervals using a GPS unit. We then used GIS software to map and analyze the monkeys' movements and travel paths taken between feeding sites. Our results indicate that in small-scale space the tamarins relied on multiple spatial strategies. In 31% of cases travel was route-based. In the remaining 69% of cases, however, the tamarins appeared to attend to the spatial positions of one or more near-to-site landmarks to relocate feeding sites. In doing so they approached the same feeding site from a mean of 4.5 different directions, frequently utilized different arboreal pathways, and traveled approximately 30% longer than the straight-line distance. In addition, the monkeys' use of non-direct travel paths allowed them to monitor insect and fruit availability in areas within close proximity of currently used food patches. We conclude that the use of an integrated spatial strategy (route-based travel and attention to near-to-goal landmarks) provides tamarins with the opportunity to relocate productive feeding sites as well as monitor the availability of nearby resources in small-scale space. © 2013 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Konkel, Carl R.; Powers, Allen K.; Dewitt, J. Russell
1991-01-01
The first interactive Space Station Freedom (SSF) lab robot exhibit was installed at the Space and Rocket Center in Huntsville, AL, and has been running daily since. The IntraVehicular Activity (IVA) robot is mounted in a full-scale U.S. Lab (USL) mockup to educate the public on possible automation and robotic applications aboard the SSF. Responding to audio and video instructions at the Command Console, exhibit patrons may prompt IVA to perform a housekeeping task or give a speaking tour of the module. Other exemplary space station tasks are simulated and the public can even challenge IVA to a game of tic-tac-toe. In anticipation of such a system being built for the Space Station, a discussion is provided of the approach taken, along with suggestions for applicability to the Space Station environment.
Selection and Manufacturing of Membrane Materials for Solar Sails
NASA Technical Reports Server (NTRS)
Bryant, Robert G.; Seaman, Shane T.; Wilkie, W. Keats; Miyaucchi, Masahiko; Working, Dennis C.
2013-01-01
Commercial metallized polyimide or polyester films and hand-assembly techniques are acceptable for small solar sail technology demonstrations, although scaling this approach to large sail areas is impractical. Opportunities now exist to use new polymeric materials specifically designed for solar sailing applications, and take advantage of integrated sail manufacturing to enable large-scale solar sail construction. This approach has, in part, been demonstrated on the JAXA IKAROS solar sail demonstrator, and NASA Langley Research Center is now developing capabilities to produce ultrathin membranes for solar sails by integrating resin synthesis with film forming and sail manufacturing processes. This paper will discuss the selection and development of polymer material systems for space, and these new processes for producing ultrathin high-performance solar sail membrane films.
NASA Astrophysics Data System (ADS)
Blume, T.; Zehe, E.; Bronstert, A.
2007-08-01
Spatial patterns as well as temporal dynamics of soil moisture have a major influence on runoff generation. The investigation of these dynamics and patterns can thus yield valuable information on hydrological processes, especially in data scarce or previously ungauged catchments. The combination of spatially scarce but temporally high resolution soil moisture profiles with episodic and thus temporally scarce moisture profiles at additional locations provides information on spatial as well as temporal patterns of soil moisture at the hillslope transect scale. This approach is better suited to difficult terrain (dense forest, steep slopes) than geophysical techniques and at the same time less cost-intensive than a high resolution grid of continuously measuring sensors. Rainfall simulation experiments with dye tracers while continuously monitoring soil moisture response allow for visualization of flow processes in the unsaturated zone at these locations. Data was analyzed at different spatio-temporal scales using various graphical methods, such as space-time colour maps (for the event and plot scale) and indicator maps (for the long-term and hillslope scale). Annual dynamics of soil moisture and decimeter-scale variability were also investigated. The proposed approach proved to be successful in the investigation of flow processes in the unsaturated zone and showed the importance of preferential flow in the Malalcahuello Catchment, a data-scarce catchment in the Andes of Southern Chile. Fast response times of stream flow indicate that preferential flow observed at the plot scale might also be of importance at the hillslope or catchment scale. Flow patterns were highly variable in space but persistent in time. The most likely explanation for preferential flow in this catchment is a combination of hydrophobicity, small scale heterogeneity in rainfall due to redistribution in the canopy and strong gradients in unsaturated conductivities leading to self-reinforcing flow paths.
Nonlinear dynamic theory for photorefractive phase hologram formation
NASA Technical Reports Server (NTRS)
Kim, D. M.; Shah, R. R.; Rabson, T. A.; Tittle, F. K.
1976-01-01
A nonlinear dynamic theory is developed for the formation of photorefractive volume phase holograms. A feedback mechanism existing between the photogenerated field and free-electron density, treated explicitly, yields the growth and saturation of the space-charge field in a time scale characterized by the coupling strength between them. The expression for the field reduces in the short-time limit to previous theories and approaches in the long-time limit the internal or photovoltaic field. Additionally, the phase of the space charge field is shown to be time-dependent.
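A generic saturable-growth form consistent with the two limits stated above (a hedged illustration, not the paper's derived expression), with τ set by the field/electron-density coupling strength and E_sat by the internal or photovoltaic field:

    E_{sc}(t) = E_{sat}\left(1 - e^{-t/\tau}\right), \qquad
    E_{sc}(t) \approx E_{sat}\, t/\tau \ \ (t \ll \tau), \qquad
    E_{sc}(t) \to E_{sat} \ \ (t \gg \tau).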
The Orbiting Carbon Observatory Mission: Watching the Earth Breathe Mapping CO2 from Space
NASA Technical Reports Server (NTRS)
Boain, Ron
2007-01-01
Approach: Collect spatially resolved, high resolution spectroscopic observations of CO2 and O2 absorption in reflected sunlight. Use these data to resolve spatial and temporal variations in the column-averaged CO2 dry air mole fraction, XCO2, over the sunlit hemisphere. Employ independent calibration and validation approaches to produce XCO2 estimates with random errors and biases no larger than 1-2 ppm (0.3-0.5%) on regional scales at monthly intervals.
Tactile display landing safety and precision improvements for the Space Shuttle
NASA Astrophysics Data System (ADS)
Olson, John M.
A tactile display belt using 24 electro-mechanical tactile transducers (tactors) was used to determine if a modified tactile display system, known as the Tactile Situation Awareness System (TSAS) improved the safety and precision of a complex spacecraft (i.e. the Space Shuttle Orbiter) in guided precision approaches and landings. The goal was to determine if tactile cues enhance safety and mission performance through reduced workload, increased situational awareness (SA), and an improved operational capability by increasing secondary cognitive workload capacity and human-machine interface efficiency and effectiveness. Using both qualitative and quantitative measures such as NASA's Justiz Numerical Measure and Synwork1 scores, an Overall Workload (OW) measure, the Cooper-Harper rating scale, and the China Lake Situational Awareness scale, plus Pre- and Post-Flight Surveys, the data show that tactile displays decrease OW, improve SA, counteract fatigue, and provide superior warning and monitoring capacity for dynamic, off-nominal, high concurrent workload scenarios involving complex, cognitive, and multi-sensory critical scenarios. Use of TSAS for maintaining guided precision approaches and landings was generally intuitive, reduced training times, and improved task learning effects. Ultimately, the use of a homogeneous, experienced, and statistically robust population of test pilots demonstrated that the use of tactile displays for Space Shuttle approaches and landings with degraded vehicle systems, weather, and environmental conditions produced substantial improvements in safety, consistency, reliability, and ease of operations under demanding conditions. Recommendations for further analysis and study are provided in order to leverage the results from this research and further explore the potential to reduce the risk of spaceflight and aerospace operations in general.
In-Space Chemical Propulsion System Model
NASA Technical Reports Server (NTRS)
Byers, David C.; Woodcock, Gordon; Benfield, Michael P. J.
2004-01-01
Multiple, new technologies for chemical systems are becoming available and include high temperature rockets, very light propellant tanks and structures, new bipropellant and monopropellant options, lower mass propellant control components, and zero boil off subsystems. Such technologies offer promise of increasing the performance of in-space chemical propulsion for energetic space missions. A mass model for pressure-fed, Earth and space-storable, advanced chemical propulsion systems (ACPS) was developed in support of the NASA MSFC In-Space Propulsion Program. Data from flight systems and studies defined baseline system architectures and subsystems, and analyses were formulated for parametric scaling relationships for all ACPS subsystems. The paper will first provide summary descriptions of the approaches used for the systems and the subsystems and then present selected analyses to illustrate use of the model for missions with characteristics of current interest.
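A sketch of what such a parametric mass model looks like: propellant from the rocket equation plus inert mass from subsystem scaling fractions. Every coefficient below is a hypothetical placeholder, not a value from the MSFC model:

    import math

    def acps_mass(dv, m_payload, isp=320.0, tank_frac=0.10, press_frac=0.04,
                  struct_frac=0.06, m_fixed=25.0):
        """dv in m/s, masses in kg; fractions scale inert mass with propellant."""
        g0 = 9.80665
        mr = math.exp(dv / (isp * g0))        # rocket-equation mass ratio
        # fixed-point iteration: inert mass grows with the propellant load
        m_prop, m_inert = 0.0, m_fixed
        for _ in range(50):
            m_final = m_payload + m_inert
            m_prop = m_final * (mr - 1.0)
            m_inert = m_fixed + (tank_frac + press_frac + struct_frac) * m_prop
        return {"propellant": m_prop, "inert": m_inert,
                "stage total": m_prop + m_inert}

    print(acps_mass(dv=1500.0, m_payload=500.0))   # e.g. an orbit-insertion burn

Technology options like lighter tanks or zero boil off enter such a model simply as changes to the scaling fractions, which is what makes parametric trades across missions fast.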
Mapping the Energy Cascade in the North Atlantic Ocean: The Coarse-graining Approach
Aluie, Hussein; Hecht, Matthew; Vallis, Geoffrey K.
2017-11-14
A coarse-graining framework is implemented to analyze nonlinear processes, measure energy transfer rates and map out the energy pathways from simulated global ocean data. Traditional tools to measure the energy cascade from turbulence theory, such as spectral flux or spectral transfer, rely on the assumption of statistical homogeneity, or at least a large separation between the scales of motion and the scales of statistical inhomogeneity. The coarse-graining framework allows for probing the fully nonlinear dynamics simultaneously in scale and in space, and is not restricted by those assumptions. This study describes how the framework can be applied to ocean flows.
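A sketch of the coarse-graining diagnostic on a synthetic 2D periodic velocity field: low-pass filter at a chosen scale, form the subfilter stress, and contract it with the filtered strain to get a pointwise energy flux (the sharp spectral filter, unit grid, and random field are assumptions of this toy, not the study's setup):

    import numpy as np

    def lowpass(field, kmax):
        # sharp spectral filter keeping integer modes with |k| <= kmax
        fh = np.fft.fft2(field)
        kx = np.fft.fftfreq(field.shape[0]) * field.shape[0]
        ky = np.fft.fftfreq(field.shape[1]) * field.shape[1]
        mask = np.sqrt(kx[:, None]**2 + ky[None, :]**2) <= kmax
        return np.real(np.fft.ifft2(fh * mask))

    def ddx(f, axis):
        # spectral derivative on a unit-spaced periodic grid
        n = f.shape[axis]
        k = 2j * np.pi * np.fft.fftfreq(n)
        shape = [1, 1]; shape[axis] = n
        return np.real(np.fft.ifft(np.fft.fft(f, axis=axis) * k.reshape(shape),
                                   axis=axis))

    def energy_flux(u, v, kmax):
        ub, vb = lowpass(u, kmax), lowpass(v, kmax)
        # subfilter stress tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j)
        t11 = lowpass(u*u, kmax) - ub*ub
        t12 = lowpass(u*v, kmax) - ub*vb
        t22 = lowpass(v*v, kmax) - vb*vb
        # strain of the filtered field (axis 0 = x, axis 1 = y)
        s11, s22 = ddx(ub, 0), ddx(vb, 1)
        s12 = 0.5 * (ddx(ub, 1) + ddx(vb, 0))
        # Pi(x) > 0 where energy moves from resolved to subfilter scales
        return -(t11*s11 + 2*t12*s12 + t22*s22)

    rng = np.random.default_rng(7)
    u, v = rng.normal(size=(128, 128)), rng.normal(size=(128, 128))
    print(energy_flux(u, v, kmax=16).mean())

Because Pi(x) is a field rather than a single spectral number, it can be mapped geographically, which is exactly the property the abstract emphasizes for inhomogeneous ocean flows.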
Metrologically useful states of spin-1 Bose condensates with macroscopic magnetization
NASA Astrophysics Data System (ADS)
Kajtoch, Dariusz; Pawłowski, Krzysztof; Witkowska, Emilia
2018-02-01
We study theoretically the usefulness of spin-1 Bose condensates with macroscopic magnetization in a homogeneous magnetic field for quantum metrology. We demonstrate Heisenberg scaling of the quantum Fisher information for states in thermal equilibrium. The scaling applies to both antiferromagnetic and ferromagnetic interactions. The effect persists as long as fluctuations of magnetization are sufficiently small. Scaling of the quantum Fisher information with the total particle number is derived within the mean-field approach in the zero-temperature limit and exactly in the high-magnetic-field limit for any temperature. The precision gain is intuitively explained owing to subtle features of the quasidistribution function in the phase space.
A program for handling map projections of small-scale geospatial raster data
Finn, Michael P.; Steinwand, Daniel R.; Trent, Jason R.; Buehler, Robert A.; Mattli, David M.; Yamamoto, Kristina H.
2012-01-01
Scientists routinely accomplish small-scale geospatial modeling using raster datasets of global extent. Such use often requires the projection of global raster datasets onto a map or the reprojection from a given map projection associated with a dataset. The distortion characteristics of these projection transformations can have significant effects on modeling results. Distortions associated with the reprojection of global data are generally greater than distortions associated with reprojections of larger-scale, localized areas. The accuracy of areas in projected raster datasets of global extent is dependent on spatial resolution. To address these problems of projection and the associated resampling that accompanies it, methods for framing the transformation space, direct point-to-point transformations rather than gridded transformation spaces, a solution to the wrap-around problem, and an approach to alternative resampling methods are presented. The implementations of these methods are provided in an open-source software package called MapImage (or mapIMG, for short), which is designed to function on a variety of computer architectures.
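A sketch of a direct point-to-point forward transformation of the kind described, here the spherical sinusoidal (equal-area) projection with an explicit longitude wrap, one way to handle the wrap-around problem (mapIMG's actual projection set and resampling logic are not reproduced):

    import numpy as np

    def sinusoidal_forward(lon_deg, lat_deg, lon0_deg=0.0, R=6371007.0):
        """Forward sinusoidal projection on a sphere of radius R (metres)."""
        lon, lat = np.radians(lon_deg), np.radians(lat_deg)
        lon0 = np.radians(lon0_deg)
        # wrap longitude differences into (-pi, pi] to avoid wrap-around
        dlon = (lon - lon0 + np.pi) % (2 * np.pi) - np.pi
        x = R * dlon * np.cos(lat)
        y = R * lat
        return x, y

    # project the cell centres of a coarse global raster, point by point
    lat1d = np.arange(-87.5, 90.0, 5.0)
    lon1d = np.arange(-177.5, 180.0, 5.0)
    lats, lons = np.meshgrid(lat1d, lon1d, indexing="ij")
    x, y = sinusoidal_forward(lons, lats)
    print(x.shape, float(x.max()), float(y.max()))

Transforming each cell centre directly, rather than interpolating within a gridded transformation space, trades speed for the per-point accuracy the paper argues matters for global datasets.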
2011-12-11
CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model which was on display at the NASA Kennedy Space Center Visitor Complex in Florida approaches the 525-foot-tall Vehicle Assembly Building as it makes its way to Kennedy's Launch Complex 39 turn basin. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis
Exploring Replica-Exchange Wang-Landau sampling in higher-dimensional parameter space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentim, Alexandra; Rocha, Julio C. S.; Tsai, Shan-Ho
We considered a higher-dimensional extension of the replica-exchange Wang-Landau algorithm to perform a random walk in the energy and magnetization space of the two-dimensional Ising model. This hybrid scheme combines the advantages of the Wang-Landau and replica-exchange algorithms, and the one-dimensional version of this approach has been shown to be very efficient and to scale well, up to several thousands of computing cores. This approach allows us to split the parameter space of the system to be simulated into several pieces and still perform a random walk over the entire parameter range, ensuring the ergodicity of the simulation. Previous work, in which a similar scheme of parallel simulation was implemented without using replica exchange and with a different way to combine the results from the pieces, led to discontinuities in the final density of states over the entire range of parameters. From our simulations, it appears that the replica-exchange Wang-Landau algorithm is able to overcome this difficulty, allowing exploration of a higher-dimensional parameter space by keeping track of the joint density of states.
NASA Astrophysics Data System (ADS)
Christen, A.; Crawford, B.; Ketler, R.; Lee, J. K.; McKendry, I. G.; Nesic, Z.; Caitlin, S.
2015-12-01
Measurements of long-lived greenhouse gases in the urban atmosphere are potentially useful to constrain and validate urban emission inventories, or space-borne remote-sensing products. We summarize and compare three different approaches, operating at different scales, that directly or indirectly identify, attribute and quantify emissions (and uptake) of carbon dioxide (CO2) in urban environments. All three approaches are illustrated using in-situ measurements in the atmosphere in and over Vancouver, Canada. Mobile sensing may be a promising way to quantify and map CO2 mixing ratios at fine scales across heterogeneous and complex urban environments. We developed a system for monitoring CO2 mixing ratios at street level using a network of mobile CO2 sensors deployable on vehicles and bikes. A total of 5 prototype sensors were built and simultaneously used in a measurement campaign across a range of urban land use types and densities within a short time frame (3 hours). The dataset is used to aid in fine scale emission mapping in combination with simultaneous tower-based flux measurements. Overall, calculated CO2 emissions are realistic when compared against a spatially disaggregated emission inventory. The second approach is based on mass flux measurements of CO2 using a tower-based eddy covariance (EC) system. We present a continuous 7-year long dataset of CO2 fluxes measured by EC at the 28 m tall flux tower 'Vancouver-Sunset'. We show how this dataset can be combined with turbulent source area models to quantify and partition different emission processes at the neighborhood scale. The long-term EC measurements are within 10% of a spatially disaggregated emission inventory. Thirdly, at the urban scale, we present a dataset of CO2 mixing ratios measured using a tethered balloon system in the urban boundary layer above Vancouver. Using a simple box model, net city-scale CO2 emissions can be determined using the measured rate of change of CO2 mixing ratios and estimated CO2 advection and entrainment fluxes. Daily city-scale emission totals predicted by the model are within 32% of a spatially scaled municipal greenhouse gas inventory. In summary, combining information from different approaches and scales is a promising approach to establish long-term emission monitoring networks in cities.
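A sketch of the single-box budget mentioned for the city scale: surface emissions balance the storage change in the boundary layer, advection through the box, and entrainment at its top (all numbers below are illustrative, not the Vancouver values):

    def box_model_emission(dc_dt, h, c_in, c_out, u, length, c_above, dh_dt):
        """Surface CO2 flux from a well-mixed boundary-layer box.
        dc_dt  : rate of change of mean box concentration (g C m^-3 s^-1)
        h      : boundary-layer height (m)
        c_in, c_out : upwind / downwind concentrations (g C m^-3)
        u      : mean wind speed (m s^-1); length: along-wind city extent (m)
        c_above: concentration above the layer; dh_dt: layer growth (m s^-1)
        Returns the implied surface emission flux (g C m^-2 s^-1)."""
        storage = h * dc_dt
        advection = u * h * (c_out - c_in) / length      # net export downwind
        entrainment = (c_above - c_out) * dh_dt          # dilution as h grows
        return storage + advection - entrainment

    E = box_model_emission(dc_dt=2e-6, h=500.0, c_in=0.220, c_out=0.225,
                           u=3.0, length=20000.0, c_above=0.215, dh_dt=0.05)
    print(f"{E * 1e3:.2f} mg C m^-2 s^-1")

The downwind concentration stands in for the box mean in the entrainment term here, a common simplification when only edge profiles are available.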
A new unified approach to determine geocentre motion using space geodetic and GRACE gravity data
NASA Astrophysics Data System (ADS)
Wu, Xiaoping; Kusche, Jürgen; Landerer, Felix W.
2017-06-01
Geocentre motion between the centre-of-mass of the Earth system and the centre-of-figure of the solid Earth surface is a critical signature of the degree-1 components of the global surface mass transport process that includes sea level rise, ice mass imbalance and continental-scale hydrological change. To complement GRACE data for complete-spectrum mass transport monitoring, geocentre motion needs to be measured accurately. However, current methods, namely the geodetic translational approach and global inversions of various combinations of geodetic deformation, simulated ocean bottom pressure and GRACE data, contain substantial biases and systematic errors. Here, we demonstrate a new and more reliable unified approach to geocentre motion determination using a recently formed satellite laser ranging based geocentric displacement time-series of an expanded geodetic network of all four space geodetic techniques and GRACE gravity data. The unified approach exploits both translational and deformational signatures of the displacement data, while the addition of GRACE's near global coverage significantly reduces biases found in the translational approach and spectral aliasing errors in the inversion.
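The translational signature alone reduces to estimating a common offset of the whole network, which is also why sparse, uneven networks bias it; a least-squares sketch with synthetic displacements (the paper's joint use of deformational signatures and GRACE data is not reproduced here):

    import numpy as np

    rng = np.random.default_rng(8)
    true_T = np.array([2.0, -1.0, 0.5])             # mm, synthetic geocentre offset
    # 30 stations: common translation plus site-level noise / local deformation
    d = true_T + 0.8 * rng.normal(size=(30, 3))
    T_hat = d.mean(axis=0)                          # least-squares translation fit
    print(T_hat)

With real data the residual d - T_hat is not noise but degree-1 deformation, and folding that signal in (together with GRACE's higher-degree coverage) is what the unified approach exploits.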
NASA Astrophysics Data System (ADS)
Cheon, M.; Chang, I.
1999-04-01
The scaling behavior for a binary fragmentation of critical percolation clusters is investigated by a large-cell Monte Carlo real-space renormalization group method in two and three dimensions. We obtain accurate values of the critical exponents λ and φ describing the scaling of the fragmentation rate and the distribution of fragments' masses produced by a binary fragmentation. Our results for λ and φ show that the fragmentation rate is proportional to the size of the mother cluster, and the scaling relation σ = 1 + λ − φ conjectured by Edwards et al. to be valid for all dimensions is satisfied in two and three dimensions, where σ is the crossover exponent of the average cluster number in percolation theory, which excludes the other scaling relations.
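In the usual notation for binary fragmentation (the rate form is an assumption consistent with the abstract's finding that the rate is proportional to the mother-cluster size, i.e. λ = 1):

    a(s) \propto s^{\lambda}, \qquad \sigma = 1 + \lambda - \phi,

where a(s) is the fragmentation rate of a cluster of mass s, φ governs the scaling of the fragment-mass distribution, and σ is the percolation crossover exponent against which the relation is checked.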
Cross scale interactions, nonlinearities, and forecasting catastrophic events
Peters, Debra P.C.; Pielke, Roger A.; Bestelmeyer, Brandon T.; Allen, Craig D.; Munson-McGee, Stuart; Havstad, Kris M.
2004-01-01
Catastrophic events share characteristic nonlinear behaviors that are often generated by cross-scale interactions and feedbacks among system elements. These events result in surprises that cannot easily be predicted based on information obtained at a single scale. Progress on catastrophic events has focused on one of the following two areas: nonlinear dynamics through time without an explicit consideration of spatial connectivity [Holling, C. S. (1992) Ecol. Monogr. 62, 447–502] or spatial connectivity and the spread of contagious processes without a consideration of cross-scale interactions and feedbacks [Zeng, N., Neeling, J. D., Lau, L. M. & Tucker, C. J. (1999) Science 286, 1537–1540]. These approaches rarely have ventured beyond traditional disciplinary boundaries. We provide an interdisciplinary, conceptual, and general mathematical framework for understanding and forecasting nonlinear dynamics through time and across space. We illustrate the generality and usefulness of our approach by using new data and recasting published data from ecology (wildfires and desertification), epidemiology (infectious diseases), and engineering (structural failures). We show that decisions that minimize the likelihood of catastrophic events must be based on cross-scale interactions, and such decisions will often be counterintuitive. Given the continuing challenges associated with global change, approaches that cross disciplinary boundaries to include interactions and feedbacks at multiple scales are needed to increase our ability to predict catastrophic events and develop strategies for minimizing their occurrence and impacts. Our framework is an important step in developing predictive tools and designing experiments to examine cross-scale interactions.
Modular Software Interfaces for Revolutionary Flexibility in Space Operations
NASA Technical Reports Server (NTRS)
Glass, Brian; Braham, Stephen; Pollack, Jay
2005-01-01
To make revolutionary improvements in exploration, space systems need to be flexible, real-time reconfigurable, and able to trade data transparently among themselves and mission operations. Onboard operations systems, space assembly coordination and EVA systems in exploration and construction all require real-time modular reconfigurability and data sharing. But NASA's current exploration systems are still largely legacies from hastily developed, one-off Apollo-era practices. Today's rovers, vehicles, spacesuits, space stations, and instruments are not able to plug-and-play, Lego-like, into different combinations. Point-to-point integration dominates: individual suit to individual vehicle, individual instrument to rover. All are locally optimized, all are unique, and each data interface has been recoded for each possible combination. This will be an operations and maintenance nightmare in the much larger Project Constellation system of systems. This legacy approach does not scale to the hundreds of networked space components needed for space construction and for new, space-based approaches to Earth-Moon operations. By comparison, battlefield information management systems, which are considered critical to military force projection, have long since abandoned a point-to-point approach to systems integration. From a system-of-systems viewpoint, a clean-sheet redesign of the interfaces of all exploration systems is a necessary prerequisite before designing the interfaces of the individual exploration systems. Existing communications, Global Information Grid, and middleware technologies are probably sufficient for command and control and information interfaces, with some hardware and time-delay modifications for space environments. NASA's future advanced space operations must also be information and data compatible with aerospace operations and surveillance systems being developed by other US Government agencies such as the Department of Homeland Security, Federal Aviation Administration and Department of Defense. This paper discusses fundamental system-of-systems infrastructure: approaches and architectures for modular plug-and-play software interfaces for revolutionary improvements in flexibility, modularity, robustness, ease of maintenance, reconfigurability, safety and productivity. Starting with middleware, databases, and mobile communications technologies, our technical challenges will be to apply these ideas to the requirements of constellations of space systems and to implement them initially on prototype space hardware. This is necessary to demonstrate an integrated information sharing architecture and services. It is a bottom-up approach, one that solves the problem of space operations data integration. Exploration demands uniform software mechanisms for application information interchange, and the corresponding uniformly available software services to enhance these mechanisms. We will examine the issues in plug-and-play, real-time-configurable systems, including common definition, management, and tracking of data and information among many different space systems. Different field test approaches are discussed, including the use of the International Space Station and terrestrial analog mission operations at field sites.
Li, Qiang; Mannall, Gareth J; Ali, Shaukat; Hoare, Mike
2013-08-01
Escherichia coli is frequently used as a microbial host to express recombinant proteins, but it lacks the ability to secrete proteins into the medium. One option for protein release is to use high-pressure homogenization followed by a centrifugation step to remove cell debris. While this does not give selective release of proteins in the periplasmic space, it does provide a robust process. An ultra scale-down (USD) approach based on focused acoustics is described to study recombinant E. coli cell disruption by high-pressure homogenization for the recovery of an antibody fragment (Fab') and the impact of fermentation harvest time. This approach is followed by microwell-based USD centrifugation to study the removal of the resultant cell debris. Successful verification of this USD approach is achieved using pilot-scale high-pressure homogenization and pilot-scale, continuous-flow, disc-stack centrifugation, comparing performance parameters such as the fraction of Fab' released, the cell debris size distribution and the carryover of fine cell debris particles in the supernatant. The integration of the fermentation and primary recovery stages is examined using USD monitoring of different phases of cell growth. Increasing susceptibility of the cells to disruption is observed with time following induction. For a given recovery process, this results in a higher fraction of product release and a greater proportion of fine cell debris particles that are difficult to remove by centrifugation. Such observations are confirmed at pilot scale. Copyright © 2013 Wiley Periodicals, Inc.
A Heuristic Approach to Global Landslide Susceptibility Mapping
NASA Technical Reports Server (NTRS)
Stanley, Thomas; Kirschbaum, Dalia B.
2017-01-01
Landslides can have significant and pervasive impacts on life and property around the world. Several attempts have been made to predict the geographic distribution of landslide activity at continental and global scales. These efforts have shared common traits such as resolution, modeling approach, and explanatory variables. The lessons learned from prior research have been applied to build a new global susceptibility map from existing and previously unavailable data. Data on slope, faults, geology, forest loss, and road networks were combined using a heuristic fuzzy approach. The map was evaluated with a Global Landslide Catalog developed at the National Aeronautics and Space Administration, as well as several local landslide inventories. Comparisons to similar susceptibility maps suggest that the subjective methods commonly used at this scale are, for the most part, reproducible. However, comparisons of landslide susceptibility across spatial scales must take into account the susceptibility of the local subset relative to the larger study area. The new global landslide susceptibility map is intended for use in disaster planning, situational awareness, and for incorporation into global decision support systems.
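As a concrete, hedged illustration of a heuristic fuzzy overlay of the kind described above, the sketch below combines toy raster layers through assumed membership functions and weights; none of the layer choices, ramps, or weights come from the paper.

    import numpy as np

    # Minimal fuzzy-overlay sketch with hypothetical membership ramps and
    # weights; real applications calibrate these against inventories.
    def fuzzy_membership(x, lo, hi):
        """Linear ramp: 0 below lo, 1 above hi."""
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

    rng = np.random.default_rng(0)
    slope = rng.uniform(0, 45, size=(100, 100))       # degrees (toy raster)
    dist_fault = rng.uniform(0, 50, size=(100, 100))  # km (toy raster)
    forest_loss = rng.uniform(0, 1, size=(100, 100))  # fraction (toy raster)

    m_slope = fuzzy_membership(slope, 5, 35)           # steeper -> more susceptible
    m_fault = 1 - fuzzy_membership(dist_fault, 1, 20)  # closer -> more susceptible
    m_loss = forest_loss                               # already in [0, 1]

    weights = np.array([0.5, 0.3, 0.2])                # assumed weights
    susceptibility = weights[0]*m_slope + weights[1]*m_fault + weights[2]*m_loss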
Sarah C. Elmendorf; Gregory H.R. Henry; Robert D. Hollister; Robert G. Björk; Anne D. Bjorkman; Terry V. Callaghan; William Gould; Joel Mercado; [and others]
2012-01-01
Understanding the sensitivity of tundra vegetation to climate warming is critical to forecasting future biodiversity and vegetation feedbacks to climate. In situ warming experiments accelerate climate change on a small scale to forecast responses of local plant communities. Limitations of this approach include the apparent site-specificity of results and uncertainty...
Planetary boundaries: exploring the safe operating space for humanity
Johan Rockström; Will Steffen; Kevin Noone; Asa Persson; F. Stuart Chapin; Eric Lambin; Timothy M. Lenton; Marten Scheffer; Carl Folke; Hans Joachim Schellnhuber; Björn Nykvist; Cynthia A. de Wit; Terry Hughes; Sander van der Leeuw; Henning Rodhe; Sverker Sörlin; Peter K. Snyder; Robert Costanza; Uno Svedin; Malin Falkenmark; Louise Karlberg; Robert W. Corell; Victoria J. Fabry; James Hansen; Brian Walker; Diana Liverman; Katherine Richardson; Paul Crutzen; Jonathan Foley
2009-01-01
Anthropogenic pressures on the Earth System have reached a scale where abrupt global environmental change can no longer be excluded. We propose a new approach to global sustainability in which we define planetary boundaries within which we expect that humanity can operate safely. Transgressing one or more planetary boundaries may be deleterious or even catastrophic due...
Guide for preparing active solar heating systems operation and maintenance manuals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-01-01
This book presents a systematic and standardized approach to the preparation of operation and maintenance manuals for active solar heating systems. It provides an industry consensus on the best operating and maintenance procedures for large commercial-scale solar service water and space heating systems. A sample O&M manual is included, and the guide comes in a 3-ring binder.
77 FR 66837 - Workshop To Define Approaches To Assess the Effectiveness of Policies To Reduce PM2.5
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-07
... composition of air pollution in urban areas that will occur over both time and space. The purposes of this... implementation of these large-scale changes in levels of air pollution. Consistent with the recent North American... verify the relationship between reductions in air pollution emissions, ambient concentrations, human...
Understanding the role of sediment waves and channel conditions over time and space
Thomas E. Lisle
1997-01-01
Abstract - Dynamic equilibrium in stream channels has traditionally been applied on the reach scale, where fluxes of water and sediment into a reach result in rapid but minor adjustments of channel dimensions, hydraulics or roughness (equilibrium), or aggradation and degradation (disequilibrium). Such an essentially one-dimensional spatial approach to sediment-channel...
Solar power satellite: System definition study. Part 1, volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1977-01-01
A study of the solar power satellite system, which represents a means of tapping baseload electric utility power from the sun on a large scale, was summarized. Study objectives, approach, and planning are presented along with an energy conversion evaluation. Basic requirements were considered in regard to space transportation, construction, and maintainability.
NASA Astrophysics Data System (ADS)
Min, Junhong; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul
2015-09-01
Localization microscopy techniques such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescent molecules. It has been shown that imaging densely activated molecules can improve temporal resolution, which has been considered the major limitation of localization microscopy. However, this higher-density imaging requires advanced localization algorithms to deal with overlapping point spread functions (PSFs). To address this technical challenge, we previously developed a localization algorithm called FALCON [1, 2] that uses a quasi-continuous localization model with a sparsity prior on image space, and demonstrated it in both 2D and 3D live-cell imaging. However, it leaves several aspects to be improved. Here, we propose a new localization algorithm using an annihilating-filter-based low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, the new algorithm can perform data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using a spectral estimation technique. Furthermore, all of these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live-cell imaging experiment. The results confirm higher localization performance in both experiments in terms of accuracy and detection rate.
Phase Transitions and Scaling in Systems Far from Equilibrium
NASA Astrophysics Data System (ADS)
Täuber, Uwe C.
2017-03-01
Scaling ideas and renormalization group approaches proved crucial for a deep understanding and classification of critical phenomena in thermal equilibrium. Over the past decades, these powerful conceptual and mathematical tools were extended to continuous phase transitions separating distinct nonequilibrium stationary states in driven classical and quantum systems. In concordance with detailed numerical simulations and laboratory experiments, several prominent dynamical universality classes have emerged that govern large-scale, long-time scaling properties both near and far from thermal equilibrium. These pertain to genuine specific critical points as well as entire parameter space regions for steady states that display generic scale invariance. The exploration of nonstationary relaxation properties and associated physical aging scaling constitutes a complementary potent means to characterize cooperative dynamics in complex out-of-equilibrium systems. This review describes dynamic scaling features through paradigmatic examples that include near-equilibrium critical dynamics, driven lattice gases and growing interfaces, correlation-dominated reaction-diffusion systems, and basic epidemic models.
NASA Technical Reports Server (NTRS)
1971-01-01
Technical models and analytical approaches used to develop the weight data for vehicle system concepts using advanced technology are reported. Weight data are supplied for the following major system elements: engine, pressurization, propellant containers, structural shells and secondary structure, and environmental protection shields for the meteoroid and thermal design requirements. Two sets of scaling laws, an improved set and a simplified set, are developed from the system weight data. The laws consider the implications of the major design parameters and mission requirements on the stage inert mass.
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
NASA Astrophysics Data System (ADS)
Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-11-01
Motivated by the recently proposed parallel orbital-updating approach in the real-space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, that is, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.
Passive Plasma Contact Mechanisms for Small-Scale Spacecraft
NASA Astrophysics Data System (ADS)
McTernan, Jesse K.
Small-scale spacecraft represent a paradigm shift in how entities such as academia, industry, engineering firms, and the scientific community operate in space. Although this paradigm shift produces unique opportunities to build satellites in new ways for novel missions, there are also significant challenges that must be addressed. This research addresses two of the challenges associated with small-scale spacecraft: 1) the miniaturization of spacecraft and associated instrumentation and 2) the need to transport charge across the spacecraft-environment boundary. As spacecraft decrease in size, constraints on the size, weight, and power of on-board instrumentation increase, potentially limiting the instrument's functionality or ability to integrate with the spacecraft. These constraints drive research into mechanisms or techniques that use little or no power and efficiently utilize existing resources. One limited resource on small-scale spacecraft is outer surface area, which is often covered with solar panels to meet tight power budgets. This same surface area could also be needed for passive neutralization of spacecraft charging. This research explores the use of a transparent, conductive layer on the solar cell coverglass that is electrically connected to spacecraft ground potential. This dual-purpose material facilitates the use of outer surfaces both for energy harvesting of solar photons and for passive ion collection. Mission capabilities such as in-situ plasma measurements that were previously infeasible on small-scale platforms become feasible with the use of indium tin oxide-coated solar panel coverglass. We developed test facilities that simulate the space environment in low Earth orbit to test the dual-purpose material and the various applications of this approach. In particular, this research supports two upcoming missions: OSIRIS-3U, by Penn State's Student Space Programs Lab, and MiTEE, by the University of Michigan. The purpose of OSIRIS-3U is to investigate the effects of space weather on the ionosphere. The spacecraft will use a pulsed Langmuir probe, an instrument now enabled on small-scale spacecraft through the techniques outlined in this research.
NASA Astrophysics Data System (ADS)
Mahmud, M. R.
2014-02-01
This paper presents a simplified, operational approach to mapping water yield in a tropical watershed using space-based multi-sensor remote sensing data. Two critical hydrological variables, namely rainfall and evapotranspiration, are estimated from satellite measurements and used to drive the well-known Thornthwaite and Mather water balance model. The satellite rainfall and ET estimates were able to represent actual ground values with reasonable accuracy under most conditions. The satellite-derived water yield showed good agreement with observed streamflow. High-bias measurements may result from: i) the influence of satellite rainfall estimates during heavy storms, and ii) large uncertainties and standard deviations in the MODIS temperature data product. The output of this study improves regional-scale hydrological assessment in Peninsular Malaysia.
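For context, a minimal bucket-style sketch of a Thornthwaite and Mather type monthly water balance is given below; the storage capacity, the input series, and the simplified handling of soil-moisture draw-down are all assumptions for illustration, not the paper's configuration.

    import numpy as np

    # Simplified monthly water-balance bucket driven by rainfall (P) and
    # potential evapotranspiration (PET); surplus above the assumed soil
    # storage capacity becomes water yield. All numbers are hypothetical.
    P = np.array([210, 180, 150, 120, 90, 60, 70, 80, 140, 200, 240, 230.])   # mm
    PET = np.array([90, 95, 100, 105, 110, 115, 115, 110, 105, 100, 95, 90.]) # mm
    capacity = 150.0   # mm, assumed soil moisture storage capacity

    storage, yield_mm = capacity, []
    for p, pet in zip(P, PET):
        storage += p - pet                      # recharge or draw-down
        storage = max(storage, 0.0)             # storage cannot go negative
        surplus = max(storage - capacity, 0.0)  # excess becomes water yield
        storage = min(storage, capacity)
        yield_mm.append(surplus)

    print("annual water yield (mm):", sum(yield_mm))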
NASA Astrophysics Data System (ADS)
Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F.; Neese, Frank
2016-03-01
Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early; in linear chain type compounds at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.
Djuris, Jelena; Djuric, Zorica
2017-11-30
Mathematical models can be used as an integral part of the quality by design (QbD) concept throughout the product lifecycle for a variety of purposes, including establishment of the design space and control strategy, continual improvement, and risk assessment. Examples of different mathematical modeling techniques (mechanistic, empirical and hybrid) in pharmaceutical development and process monitoring or control are provided in the present review. In the QbD context, mathematical models are predominantly used to support design spaces and/or control strategies. Considering their impact on final product quality, models can be divided into the following categories: high-, medium- and low-impact models. Although there are regulatory guidelines on the topic of modeling applications, a review of QbD-based submissions containing modeling elements revealed concerns regarding the scale-dependency of design spaces and the verification of model predictions at the commercial scale of manufacturing, especially for real-time release (RTR) models. The authors provide a critical overview of good modeling practices and introduce the concepts of multiple-unit, adaptive and dynamic design spaces, multivariate specifications, and methods for process uncertainty analysis. RTR specifications with mathematical models and different approaches to multivariate statistical process control supporting process analytical technologies are also presented. Copyright © 2017 Elsevier B.V. All rights reserved.
Generalized probabilistic scale space for image restoration.
Wong, Alexander; Mishra, Akshaya K
2010-10-01
A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.
A downscaling method for the assessment of local climate change
NASA Astrophysics Data System (ADS)
Bruno, E.; Portoghese, I.; Vurro, M.
2009-04-01
The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their spatial resolution (hundreds of kilometres) is too coarse to describe the variability of extreme events at the basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between climate scenarios and the usual scale of the inputs for hydrological prediction models is a fundamental requisite for evaluating climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit climate observations exactly. Identifying local climate scenarios for impact analysis implies the definition of more detailed local scenarios by downscaling GCM or RCM results. Among output correction methods we consider the statistical approach of Déqué (2007), reported as a 'variable correction method', in which the correction of model outputs is obtained by a function built from the observation dataset that operates a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed based on a stochastic description of the arrival-duration-intensity processes, in coherence with the Poissonian Rectangular Pulse (PRP) scheme (Eagleson, 1972). In this proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. The corrected PRP parameters are then used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistency of daily observations for the reference period. The PRP parameters are then forced with the GCM scenarios to generate local-scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity and duration needed to apply the PRP scheme are considered among the STARDEX collection of extreme indices.
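A hedged sketch of the underlying Q-Q transform is given below; note that the abstract applies it to PRP variables rather than raw daily rainfall, and the distributions and quantile grid here are illustrative assumptions only.

    import numpy as np

    # Quantile-quantile correction sketch: map scenario model values through
    # the historical model -> observation quantile transfer function.
    def qq_correct(model_hist, obs, model_future):
        quantiles = np.linspace(0.01, 0.99, 99)
        mq = np.quantile(model_hist, quantiles)   # model quantiles (calibration)
        oq = np.quantile(obs, quantiles)          # observed quantiles (calibration)
        # Piecewise-linear transfer function applied to the scenario series.
        return np.interp(model_future, mq, oq)

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 5.0, 5000)           # stand-in daily rainfall observations
    model_hist = rng.gamma(2.5, 4.0, 5000)    # biased model, control run
    model_future = rng.gamma(2.5, 4.5, 5000)  # model, scenario run
    corrected = qq_correct(model_hist, obs, model_future)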
A methodology for rapid vehicle scaling and configuration space exploration
NASA Astrophysics Data System (ADS)
Balaba, Davis
2009-12-01
The Configuration-space Exploration and Scaling Methodology (CESM) entails the representation of component or sub-system geometries as matrices of points in 3D space. These typically large matrices are reduced using minimal convex sets or convex hulls. This reduction leads to significant gains in collision detection speed at minimal approximation expense. (The Gilbert-Johnson-Keerthi algorithm [79] is used for collision detection purposes in this methodology.) Once the components are laid out, their collective convex hull (from here on referred to as the super-hull) is used to approximate the inner mold line of the minimum enclosing envelope of the vehicle concept. A sectional slicing algorithm is used to extract the sectional dimensions of this envelope. An offset is added to these dimensions in order to arrive at the sectional fuselage dimensions. Once the lift and control surfaces are added, vehicle-level objective functions can be evaluated and compared to other designs. The size of the design space, coupled with the fact that some key constraints, such as the number of collisions, are discontinuous, dictates that a domain-spanning optimization routine be used. Also, as this is a conceptual design tool, the goal is to provide the designer with a diverse baseline geometry space from which to choose. For these reasons, a domain-spanning algorithm with counter-measures against speciation and genetic drift is the recommended optimization approach. The Non-dominated Sorting Genetic Algorithm (NSGA-II) [60] is shown to work well for the proof-of-concept study. There are two major reasons why the need to evaluate higher-fidelity, custom geometric scaling laws became a part of this body of work. First, historical-data-based regressions become implicitly unreliable when the vehicle concept in question is designed around a disruptive technology. Second, it was shown that simpler approaches such as photographic scaling can result in highly suboptimal concepts even for very small scaling factors. Yet good scaling information is critical to the success of any conceptual design process. In the CESM methodology, it is assumed that the new technology has matured enough to permit the prediction of the scaling behavior of the various subsystems in response to requirement changes. Updated subsystem geometry data is generated by applying the new requirement settings to the affected subsystems. All collisions are then eliminated using the NSGA-II algorithm. This is done while minimizing the adverse impact on the vehicle packing density. Once all collisions are eliminated, the vehicle geometry is reconstructed and system-level data such as fuselage volume can be harvested. This process is repeated for all requirement settings. Dimensional analysis and regression can be carried out using this data and all other pertinent metrics in the manner described by Mendez [124] and Segel [173]. The dominant parameters for each response show up in the dimensionally consistent groups that form the independent variables. More importantly, the impact of changes in any of these variables on system-level dependent variables can be easily and rapidly evaluated. In this way, the conceptual design process can be accelerated without sacrificing analysis accuracy. Scaling laws for take-off gross weight and fuselage volume as functions of fuel cell specific power and power density for a notional General Aviation vehicle are derived for the proof of concept.
CESM enables the designer to maintain design freedom by portably carrying multiple designs deeper into the design process. Also, since CESM is a bottom-up approach, all proposed baseline concepts are implicitly volumetrically feasible. System-level geometry parameters become fall-outs as opposed to inputs. This is a critical attribute as, without the benefit of experience, a designer would be hard pressed to set appropriate ranges for such parameters for a vehicle built around a disruptive technology. Furthermore, scaling laws generated from custom data for each concept are subject to less design noise than, say, regression-based approaches. Through these laws, key physics-based characteristics of vehicle subsystems such as energy density can be mapped onto key system-level metrics such as fuselage volume or take-off gross weight. These laws can then substitute for some historical-data-based analyses, thereby improving the fidelity of the analyses and reducing design time. (Abstract shortened by UMI.)
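As a rough illustration of the hull-based layout machinery described above, the sketch below reduces two hypothetical component point clouds to convex hulls and applies a cheap vertex-containment overlap test; the methodology itself uses the Gilbert-Johnson-Keerthi algorithm for collision detection, so this stands in only as a simplified proxy.

    import numpy as np
    from scipy.spatial import ConvexHull, Delaunay

    # Reduce component point clouds to convex hull vertices, then use a
    # vertex-in-hull proxy for overlap (GJK would give an exact separation
    # test; this proxy can miss overlaps with no contained vertices).
    def hulls_overlap(points_a, points_b):
        hull_a = points_a[ConvexHull(points_a).vertices]
        hull_b = points_b[ConvexHull(points_b).vertices]
        in_b = Delaunay(hull_b).find_simplex(hull_a) >= 0
        in_a = Delaunay(hull_a).find_simplex(hull_b) >= 0
        return in_b.any() or in_a.any()

    rng = np.random.default_rng(0)
    comp1 = rng.random((200, 3))          # component geometry as 3D points
    comp2 = rng.random((200, 3)) + 0.5    # partially overlapping component
    print("collision:", hulls_overlap(comp1, comp2))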
An abstract approach to music.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.; Tipei, S.
1999-04-19
In this article we have outlined a formal framework for an abstract approach to music and music composition. The model is formulated in terms of objects that have attributes, obey relationships, and are subject to certain well-defined operations. The motivation for this approach uses traditional terms and concepts of music theory, but the approach itself is formal and uses the language of mathematics. The universal object is an audio wave; partials, sounds, and compositions are special objects, which are placed in a hierarchical order based on time scales. The objects have both static and dynamic attributes. When we realize a composition, we assign values to each of its attributes: a (scalar) value to a static attribute, an envelope and a size to a dynamic attribute. A composition is then a trajectory in the space of aural events, and the complex audio wave is its formal representation. Sounds are fibers in the space of aural events, from which the composer weaves the trajectory of a composition. Each sound object in turn is made up of partials, which are the elementary building blocks of any music composition. The partials evolve on the fastest time scale in the hierarchy of partials, sounds, and compositions. The ideas outlined in this article are being implemented in a digital instrument for additive sound synthesis and in software for music composition. A demonstration of some preliminary results has been submitted by the authors for presentation at the conference.
Global Interior Robot Localisation by a Colour Content Image Retrieval System
NASA Astrophysics Data System (ADS)
Chaari, A.; Lelandais, S.; Montagne, C.; Ahmed, M. Ben
2007-12-01
We propose a new global localisation approach to determine the coarse position of a mobile robot in a structured indoor space using colour-based image retrieval techniques. We use an original method of colour quantisation based on the baker's transformation to extract a two-dimensional colour pallet combining both spatial and vicinity-related information and the colourimetric aspect of the original image. We conceive several retrieval approaches leading to a specific similarity measure integrating the spatial organisation of colours in the pallet. The baker's transformation provides a quantisation of the image into a space where colours that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image, while the distance measure provides partial invariance to translation, small viewpoint changes, and scale factor. In addition to this study, we developed a hierarchical search module based on the logical classification of images by room. This hierarchical module reduces the indoor search space and improves our system's performance. Results are then compared with those obtained using colour histograms with several similarity measures. In this paper, we focus on colour-based features to describe indoor images. A finalised system must obviously integrate other types of signature, such as shape and texture.
Enabling the 2nd Generation in Space: Building Blocks for Large Scale Space Endeavours
NASA Astrophysics Data System (ADS)
Barnhardt, D.; Garretson, P.; Will, P.
Today the world operates within a "first generation" space industrial enterprise, i.e. all industry is on Earth, all value from space comes from bits (essentially data), and the focus is Earth-centric, with very limited parts of our population and industry participating in space. We are limited in access, manoeuvring, on-orbit servicing, in-space power, and in-space manufacturing and assembly. The transition to a "Starship culture" requires the Earth to progress to a "second generation" space industrial base, which implies the need to expand the economic sphere of activity of mankind outside of an Earth-centric zone and into cis-lunar space and beyond, with an equal ability to tap the indigenous resources in space (energy, location, materials) that will contribute to an expanding space economy. Right now, there is no comfortable place for space applications that are not discovery science, exploration, military, or established Earth-bound services. For the most part, space applications leave out, or at least leave nebulous, unconsolidated, and without a critical mass, programs and development efforts for infrastructure, industrialization, space resources (survey and process maturation), non-traditional and persistent security situational awareness, and global utilities, all of which, to a far greater extent than a discovery and exploration program, may help determine the elements of a 2nd generation space capability. We propose a focus to seed the pre-competitive research that will enable global industry to develop the competencies we currently lack to build large-scale space structures on-orbit, which in turn would lay the foundation for long-duration spacecraft travel (i.e. key technologies in access, manoeuvrability, etc.). This paper will posit a vision-to-reality for a stepwise approach to the types of activities the US and global space providers could embark upon to lay the foundation for the 2nd generation of Earth in space.
Multigrid methods with space–time concurrency
Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...
2017-10-06
Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
Incoherent coincidence imaging of space objects
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Gu, Guohua
2016-10-01
Incoherent Coincidence Imaging (ICI), which is based on the second- or higher-order correlation of a fluctuating light field, offers great potential compared with standard conventional imaging. However, the need for a reference arm limits its practical application to the detection of space objects. In this article, an optical aperture synthesis with electronically connected single-pixel photo-detectors is proposed to remove the reference arm. The correlation in our proposed method is the second-order correlation between the intensity fluctuations observed by any two detectors. With appropriate locations of the single-pixel detectors, this second-order correlation simplifies to the squared magnitude of the Fourier transform of the source and the unknown object. We demonstrate image recovery with Gerchberg-Saxton-like algorithms and investigate the reconstruction quality of our approach. Numerical experiments show that both binary and gray-scale objects can be recovered. The proposed method provides an effective approach to the detection of space objects and perhaps even exoplanets.
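Since the measurement reduces to a Fourier magnitude, a Gerchberg-Saxton-style phase-retrieval loop of the following form can recover the object; this sketch is a generic error-reduction variant with an assumed support constraint and toy data, not the authors' exact algorithm.

    import numpy as np

    # Error-reduction phase retrieval: alternate between enforcing the
    # measured Fourier magnitude and object-space constraints (support,
    # non-negativity). Object, support, and iteration count are assumptions.
    def gs_retrieve(fourier_mag, support, n_iter=200):
        rng = np.random.default_rng(0)
        G = fourier_mag * np.exp(2j * np.pi * rng.random(fourier_mag.shape))
        for _ in range(n_iter):
            g = np.fft.ifft2(G).real                          # to object space
            g = np.where(support, np.clip(g, 0, None), 0.0)   # object constraints
            G = np.fft.fft2(g)
            G = fourier_mag * np.exp(1j * np.angle(G))        # enforce magnitude
        return g

    obj = np.zeros((64, 64)); obj[24:40, 28:36] = 1.0         # toy binary object
    mag = np.abs(np.fft.fft2(obj))                            # "measured" magnitude
    support = np.zeros_like(obj, bool); support[16:48, 16:48] = True
    recon = gs_retrieve(mag, support)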
Source imaging of potential fields through a matrix space-domain algorithm
NASA Astrophysics Data System (ADS)
Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio
2017-01-01
Imaging of potential fields yields a fast 3D representation of the source distribution of potential fields. Imaging methods are all multiscale methods, allowing the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore has a key role for this class of methods. Here we describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformations, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, only the first row of each matrix needs to be computed, dramatically decreasing the computational cost. Our approach allows a simple procedure, with the advantage of not requiring the large data extension or tapering needed for Fourier-domain computation. It also allows level-to-drape upward continuation and stable differentiation at high frequencies; finally, the upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multiscale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme points) method, because border errors, which tend to propagate strongly at the largest scales, are radically reduced. The application of our algorithm to synthetic and real gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
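A minimal 1D analogue of the matrix construction is sketched below: the upward-continuation operator is a symmetric Toeplitz (convolution) matrix, fully determined by its first row; the kernel form, grid spacing, continuation height, and neglect of edge truncation are illustrative assumptions.

    import numpy as np
    from scipy.linalg import toeplitz

    # 1D profile upward continuation: the standard half-plane kernel is
    # h(x) = (dz/pi) / (x^2 + dz^2); discretizing it gives the first row of
    # a symmetric Toeplitz operator, so only one row must be computed.
    n, dx, dz = 256, 1.0, 5.0
    x = np.arange(n) * dx
    first_row = (dz / np.pi) / (x**2 + dz**2) * dx   # discretized kernel row
    U = toeplitz(first_row)                          # symmetric operator

    field = np.exp(-((x - 128.0)**2) / 50.0)         # toy anomaly at z = 0
    field_up = U @ field                             # field continued to z = dz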
A Multi-Scale, Integrated Approach to Representing Watershed Systems
NASA Astrophysics Data System (ADS)
Ivanov, Valeriy; Kim, Jongho; Fatichi, Simone; Katopodes, Nikolaos
2014-05-01
Understanding and predicting process dynamics across a range of scales are fundamental challenges for basic hydrologic research and practical applications. This is particularly true when larger-spatial-scale processes, such as surface-subsurface flow and precipitation, need to be translated to the fine space-time dynamics of processes, such as channel hydraulics and sediment transport, that are often of primary interest. Inferring characteristics of fine-scale processes from uncertain coarse-scale climate projection information poses additional challenges. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion, and sediment transport, tRIBS+VEGGIE-FEaST. The model aims to take advantage of the current wealth of data representing watershed topography, vegetation, soil, and land use, and to explore the hydrological effects of physical factors and their feedback mechanisms over a range of scales. We illustrate how the modeling system connects the precipitation-hydrologic runoff partition process to the dynamics of flow, erosion, and sedimentation, and how the soil's substrate condition can impact the latter processes, resulting in a non-unique response. We further illustrate an approach to using downscaled climate change information with a process-based model to infer the moments of hydrologic variables under future climate conditions and explore the impact of climate information uncertainty.
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
NASA Astrophysics Data System (ADS)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an ongoing problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°×2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to the simple Gaussian expressions at the monthly 2.5°×2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes of the scatter diagram (one of the values is exactly or nearly zero, while the other is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, depend only on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes, it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, and convective/stratiform type, drive variations that must be accounted for explicitly.
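The qualitative effect described above can be reproduced with a toy experiment; the synthetic intermittent "truth" and the additive error model below are assumptions chosen only to show random error shrinking under aggregation.

    import numpy as np

    # Synthetic intermittent, skewed "rain" with noisy retrievals; averaging
    # over larger aggregation factors pulls scatter toward the 1:1 line.
    rng = np.random.default_rng(0)
    truth = rng.gamma(0.5, 4.0, 10000) * (rng.random(10000) < 0.3)
    retrieval = np.clip(truth + rng.normal(0, 2.0, truth.size), 0, None)

    for n in (1, 16, 256):  # aggregation factors (e.g., grid boxes per average)
        k = truth.size // n * n
        t = truth[:k].reshape(-1, n).mean(axis=1)
        r = retrieval[:k].reshape(-1, n).mean(axis=1)
        rmse = np.sqrt(np.mean((r - t) ** 2))
        print(f"aggregation x{n:4d}: RMSE = {rmse:.3f}")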
NASA Technical Reports Server (NTRS)
Hyers, Robert W.; Motakef, S.; Witt, A. F.; Wuensch, B.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Realization of the full potential of photorefractive materials in device technology is seriously impeded by our inability to achieve controlled formation of critical defects during single crystal growth and by difficulties in meeting the required degree of compositional uniformity on a micro-scale over macroscopic dimensions. The exact nature and origin of the critical defects which control photorefractivity could not as yet be identified because of gravitational interference. There exists, however, strong evidence that the density of defect formation and their spatial distribution are adversely affected by gravitational interference, which precludes the establishment of quantifiable and controllable heat and mass transfer conditions during crystal growth. The current NASA-sponsored research at MIT is directed at establishing a basis for the development of a comprehensive approach to the optimization of property control during melt growth of photorefractive materials, making use of the microgravity (µ-g) environment provided on the International Space Station. The objectives to be pursued in µ-g research on photorefractive BSO (Bi12SiO20) are: (a) identification of the x-level(s) responsible for photorefractivity in undoped BSO; (b) development of approaches leading to the control of x-level formation at uniform spatial distribution; (c) development of doping and processing procedures for optimization of the critical, application-specific parameters: spectral response, sensitivity, response time and matrix stability. The presentation will focus on: the rationale for the justification of the space experiment, ground-based development efforts, design considerations for the space experiments, the strategic plan of the space experiments, and approaches to the quantitative analysis of the space experiments.
Scale in Remote Sensing and GIS: An Advancement in Methods Towards a Science of Scale
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.
1998-01-01
The term "scale", both in space and time, is central to remote sensing and geographic information systems (GIS). The emergence and widespread use of GIS technologies, including remote sensing, has generated significant interest in addressing scale as a generic topic, and in the development and implementation of techniques for dealing explicitly with the vicissitudes of scale as a multidisciplinary issue. As science becomes more complex and utilizes databases that are capable of performing complex space-time data analyses, it becomes paramount that we develop the tools and techniques needed to operate at multiple scales, to work with data whose scales are not necessarily ideal, and to produce results that can be aggregated or disaggregated in ways that suit the decision-making process. Contemporary science is constantly coping with compromises, and the data available for a particular study rarely fit perfectly with the scales at which the processes being investigated operate, or the scales that policy-makers require to make sound, rational decisions. This presentation discusses some of the problems associated with scale as related to remote sensing and GIS, and describes some of the questions that need to be addressed in approaching the development of a multidisciplinary "science of scale". Techniques for dealing with multiple scaled data that have been developed or explored recently are described as a means for recognizing scale as a generic issue, along with associated theory and tools that can be of simultaneous value to a large number of disciplines. These can be used to seek answers to a host of interrelated questions in the interest of providing a formal structure for the management and manipulation of scale and its universality as a key concept from a multidisciplinary perspective.
Time-Domain Filtering for Spatial Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. David
1997-01-01
An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
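A common realization of time-domain filtering is the causal exponential filter, whose filtered signal obeys d(u_bar)/dt = (u - u_bar)/Delta; whether this matches the paper's exact filter is an assumption, and the sketch below is purely illustrative.

    import numpy as np

    # Causal exponential time filter integrated with forward Euler; the
    # filter width Delta and test signal are arbitrary choices. The filtered
    # series keeps the slow (5 Hz) content and damps the fast (60 Hz) content.
    dt, Delta = 1e-3, 2e-2
    t = np.arange(0, 1, dt)
    u = np.sin(2*np.pi*5*t) + 0.3*np.sin(2*np.pi*60*t)

    u_bar = np.empty_like(u)
    u_bar[0] = u[0]
    for i in range(1, u.size):
        u_bar[i] = u_bar[i-1] + dt * (u[i-1] - u_bar[i-1]) / Delta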
Multiscale spatial and temporal estimation of the b-value
NASA Astrophysics Data System (ADS)
García-Hernández, R.; D'Auria, L.; Barrancos, J.; Padilla, G.
2017-12-01
The estimation of the spatial and temporal variations of the Gutenberg-Richter b-value is of great importance in different seismological applications. One of the problems affecting its estimation is the heterogeneous distribution of seismicity, which makes the estimate strongly dependent upon the selected spatial and/or temporal scale. This is especially important in volcanoes, where dense clusters of earthquakes often overlap the background seismicity. Proposed solutions for estimating temporal variations of the b-value include considering equally spaced time intervals or variable intervals containing an equal number of earthquakes. Similar approaches have been proposed to image the spatial variations of this parameter as well. We propose a novel multiscale approach, based on the method of Ogata and Katsura (1993), allowing a consistent estimation of the b-value regardless of the considered spatial and/or temporal scales. Our method, named MUST-B (MUltiscale Spatial and Temporal characterization of the B-value), consists in computing estimates of the b-value at multiple temporal and spatial scales and extracting, for a given spatio-temporal point, a statistical estimator of the value as well as an indication of the characteristic spatio-temporal scale. The approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties in both b and Mc. We applied this method to example datasets for volcanic (Tenerife, El Hierro) and tectonic (Central Italy) areas, as well as to an example application at the global scale.
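For reference, the standard Aki-Utsu maximum-likelihood estimate that underlies most b-value mapping is sketched below for a single space-time cell; the synthetic catalog and the 0.1-unit binning width are assumptions for illustration.

    import numpy as np

    # Aki (1965) maximum-likelihood b-value with Utsu's half-bin correction
    # for binned magnitudes, plus Aki's first-order standard error.
    def b_value(mags, mc, dm=0.1):
        m = mags[mags >= mc]
        b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
        return b, b / np.sqrt(m.size)

    rng = np.random.default_rng(1)
    # Synthetic Gutenberg-Richter catalog with b = 1, rounded to 0.1 bins.
    mags = np.round(rng.exponential(1.0 / np.log(10), 50000), 1)
    print("b = %.2f +/- %.2f" % b_value(mags, mc=1.0))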
Coaching the exploration and exploitation in active learning for interactive video retrieval.
Wei, Xiao-Yong; Yang, Zhen-Qun
2013-03-01
Conventional active learning approaches for interactive video/image retrieval usually assume the query distribution is unknown, as it is difficult to estimate with only a limited number of labeled instances available. Thus, it is easy to put the system in a dilemma whether to explore the feature space in uncertain areas for a better understanding of the query distribution or to harvest in certain areas for more relevant instances. In this paper, we propose a novel approach called coached active learning that makes the query distribution predictable through training and, therefore, avoids the risk of searching on a completely unknown space. The estimated distribution, which provides a more global view of the feature space, can be used to schedule not only the timing but also the step sizes of the exploration and the exploitation in a principled way. The results of the experiments on a large-scale data set from TRECVID 2005-2009 validate the efficiency and effectiveness of our approach, which demonstrates an encouraging performance when facing domain-shift, outperforms eight conventional active learning methods, and shows superiority to six state-of-the-art interactive video retrieval systems.
Hamilton, Joshua J.; Dwivedi, Vivek; Reed, Jennifer L.
2013-01-01
Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. PMID:23870272
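The core thermodynamic constraint can be illustrated in a few lines: a reaction may carry forward flux only if its Gibbs energy, computed from the standard value and the metabolite concentrations, is negative. The stoichiometry, standard energy, and concentrations below are hypothetical placeholders, not values from the paper.

    import numpy as np

    # dG = dG0 + R*T*ln(Q); forward flux is thermodynamically feasible
    # only when dG < 0. Units: kJ/mol, concentrations in M.
    R, T = 8.314e-3, 298.15

    def reaction_dG(dG0, stoich, conc):
        """stoich maps metabolite -> signed stoichiometric coefficient."""
        lnQ = sum(s * np.log(conc[m]) for m, s in stoich.items())
        return dG0 + R * T * lnQ

    stoich = {"glucose": -1, "g6p": +1, "atp": -1, "adp": +1}  # hexokinase-like
    conc = {"glucose": 5e-3, "g6p": 1e-4, "atp": 3e-3, "adp": 1e-3}
    dG = reaction_dG(-17.0, stoich, conc)   # assumed standard dG0
    print(f"dG = {dG:.1f} kJ/mol -> forward flux {'allowed' if dG < 0 else 'forbidden'}")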
Scaling Relations and Self-Similarity of 3-Dimensional Reynolds-Averaged Navier-Stokes Equations.
Ercan, Ali; Kavvas, M Levent
2017-07-25
Scaling conditions to achieve self-similar solutions of 3-Dimensional (3D) Reynolds-Averaged Navier-Stokes Equations, as an initial and boundary value problem, are obtained by utilizing the Lie Group of Point Scaling Transformations. By means of an open-source Navier-Stokes solver and the derived self-similarity conditions, we demonstrated self-similarity within the time variation of flow dynamics for a rigid-lid cavity problem under both up-scaled and down-scaled domains. The strength of the proposed approach lies in its ability to consider the underlying flow dynamics not only through the governing equations under consideration but also through the initial and boundary conditions, hence allowing perfect self-similarity to be obtained across different time and space scales. The proposed methodology can be a valuable tool for obtaining self-similar flow dynamics at a preferred level of detail, which can be represented by initial and boundary value problems under specific assumptions.
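To fix ideas with a hedged, simplified illustration (the paper's full condition set, including those arising from initial and boundary data, is richer): under one-parameter point scalings

$$\tilde{x} = \lambda^{\alpha_x} x, \qquad \tilde{t} = \lambda^{\alpha_t} t, \qquad \tilde{u} = \lambda^{\alpha_u} u, \qquad \tilde{\nu} = \lambda^{\alpha_\nu} \nu,$$

requiring every term of the momentum equation to scale by a common factor forces, for example,

$$\alpha_u = \alpha_x - \alpha_t, \qquad \alpha_\nu = 2\alpha_x - \alpha_t,$$

so that once the space and time exponents are chosen, the velocity and viscosity scalings are no longer free; self-similarity between up-scaled and down-scaled runs holds only when all such relations are honored simultaneously.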
Stoy, Paul C.; Quaife, Tristan
2015-01-01
Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from those created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes. PMID:26067835
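A minimal sketch of the deterministic core of such a regularized downscaling step, assuming a block-averaging observation operator and a discrete Laplacian smoothness penalty (grid sizes, γ, and function names are illustrative; the paper additionally generates stochastic subgrid realizations):

```python
import numpy as np

def block_average_operator(n, k):
    """Map an n x n fine grid to (n//k) x (n//k) block means."""
    m = n // k
    A = np.zeros((m * m, n * n))
    for bi in range(m):
        for bj in range(m):
            for i in range(k):
                for j in range(k):
                    A[bi * m + bj, (bi * k + i) * n + (bj * k + j)] = 1.0 / k**2
    return A

def laplacian_2d(n):
    """Discrete 2D Laplacian built from Kronecker sums."""
    I = np.eye(n)
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return np.kron(D, I) + np.kron(I, D)

def tikhonov_downscale(b_coarse, n, k, gamma):
    """Solve min ||A x - b||^2 + gamma * ||L x||^2 for the fine-scale field x."""
    A = block_average_operator(n, k)
    L = laplacian_2d(n)
    lhs = A.T @ A + gamma * (L.T @ L)
    rhs = A.T @ b_coarse.ravel()
    return np.linalg.solve(lhs, rhs).reshape(n, n)

# Example: downscale an 8 x 8 coarse field to 32 x 32; per the abstract,
# gamma should be tuned toward the range of the experimental semivariogram.
fine = tikhonov_downscale(np.random.rand(8, 8), n=32, k=4, gamma=0.1)
```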
Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting.
Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M
2014-06-01
Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind "noise," which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical "downscaling" of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme are tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key points: solar wind models must be downscaled in order to drive magnetospheric models; ensemble downscaling is more effective than deterministic downscaling; and the magnetosphere responds nonlinearly to small-scale solar wind fluctuations.
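A toy version of the described scheme, with noise parameterized only by the empirical distribution of the removed small-scale residuals (matching the paper's simple PDF-based parameterization in spirit; the hourly cadence, window, and names are assumptions):

```python
import numpy as np

def downscale_ensemble(v_obs, dt_hours=1.0, window_hours=8.0, n_members=10, seed=None):
    """Smooth observations with an 8 h running mean (a stand-in for solar wind
    model output), then re-add bootstrap-sampled residuals to build an ensemble."""
    rng = np.random.default_rng(seed)
    w = max(1, int(window_hours / dt_hours))
    v_smooth = np.convolve(v_obs, np.ones(w) / w, mode="same")
    residuals = v_obs - v_smooth
    members = np.stack([v_smooth + rng.choice(residuals, size=v_obs.size)
                        for _ in range(n_members)])
    return v_smooth, members
```

Each member then drives the magnetospheric model separately, and the ensemble spread supplies the forecast uncertainty. Note that simple bootstrap noise ignores the observed spectral correlations, which a fuller parameterization would respect.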
NASA Technical Reports Server (NTRS)
White, Mark; Huang, Bing; Qin, Jin; Gur, Zvi; Talmor, Michael; Chen, Yuan; Heidecker, Jason; Nguyen, Duc; Bernstein, Joseph
2005-01-01
As microelectronics are scaled into the deep sub-micron regime, users of advanced technology CMOS, particularly in high-reliability applications, should reassess how scaling effects impact long-term reliability. An experiment-based reliability study of industrial-grade SRAMs, consisting of three different technology nodes, is proposed to substantiate current acceleration models for temperature and voltage life-stress relationships. This reliability study utilizes step-stress techniques to evaluate memory technologies (0.25 µm, 0.15 µm, and 0.13 µm) embedded in many of today's high-reliability space/aerospace applications. Two acceleration modeling approaches are presented to relate experimental FIT calculations to the manufacturers' qualification data.
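The temperature and voltage life-stress relationships being substantiated are commonly written as an Arrhenius factor multiplied by an exponential voltage factor. A hedged sketch (the activation energy and voltage-acceleration constant below are placeholders, not the study's fitted values):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_use_c, t_stress_c, v_use, v_stress, ea_ev=0.7, gamma=3.0):
    """Combined Arrhenius (temperature) and exponential (voltage) acceleration."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    af_temp = math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))
    af_volt = math.exp(gamma * (v_stress - v_use))
    return af_temp * af_volt

# Failures observed under step-stress are referred back to use conditions
# via FIT_use = FIT_stress / AF.
af = acceleration_factor(t_use_c=55, t_stress_c=125, v_use=1.2, v_stress=1.5)
```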
Comparative ruralism and 'opening new windows' on gentrification.
Phillips, Martin; Smith, Darren P
2018-03-01
In response to the five commentaries on our paper 'Comparative approaches to gentrification: lessons from the rural', we open up more 'windows' on rural gentrification and its urban counterpart. First, we highlight the issues of metrocentricity and urbanormativity within gentrification studies, highlighting their employment by our commentators. Second, we consider the issue of displacement and its operation within rural space, as well as gentrification as a coping strategy for neoliberal existence and connections to more-than-human natures. Finally, we consider questions of scale, highlighting the need to avoid naturalistic conceptions of scale and arguing that attention could be paid to the role of material practices, symbolizations and lived experiences in producing scaled geographies of rural and urban gentrification.
Quantification of fibre polymerization through Fourier space image analysis
Nekouzadeh, Ali; Genin, Guy M.
2011-01-01
Quantification of changes in the total length of randomly oriented and possibly curved lines appearing in an image is a necessity in a wide variety of biological applications. Here, we present an automated approach based upon Fourier space analysis. Scaled, band-pass filtered power spectral densities of greyscale images are integrated to provide a quantitative measurement of the total length of lines of a particular range of thicknesses appearing in an image. A procedure is presented to correct for changes in image intensity. The method is most accurate for two-dimensional processes with fibres that do not occlude one another. PMID:24959096
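A simplified sketch of the measurement, assuming a square greyscale image and an annular frequency band standing in for the paper's scaled band-pass filter (the calibration to absolute fibre length and the full intensity correction are omitted):

```python
import numpy as np

def bandpass_psd_measure(image, r_lo, r_hi):
    """Integrate the power spectral density over an annulus of spatial
    frequencies (r_lo..r_hi, in cycles per image side), as a proxy for the
    total length of lines in the matching thickness range."""
    img = image.astype(float)
    img -= img.mean()                      # crude intensity offset removal
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = img.shape
    y, x = np.ogrid[:ny, :nx]
    r = np.hypot(y - ny / 2.0, x - nx / 2.0)
    band = (r >= r_lo) & (r < r_hi)
    return psd[band].sum()
```

Since thin fibres concentrate spectral power at higher spatial frequencies than thick ones, choosing the annulus selects the thickness range of interest; comparing the band power across frames tracks polymerization over time.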
Structural dynamics payload loads estimates
NASA Technical Reports Server (NTRS)
Engels, R. C.
1982-01-01
Methods for the prediction of loads on large space structures are discussed. Existing approaches to the problem of loads calculation are surveyed. A full-scale version of an alternate numerical integration technique to solve the response part of a load cycle is presented, and a set of short-cut versions of the algorithm is developed. The implementation of these techniques using the developed software package is discussed.
Extensions and evaluations of a general quantitative theory of forest structure and dynamics
Enquist, Brian J.; West, Geoffrey B.; Brown, James H.
2009-01-01
Here, we present the second part of a quantitative theory for the structure and dynamics of forests under demographic and resource steady state. The theory is based on individual-level allometric scaling relations for how trees use resources, fill space, and grow. These scale up to determine emergent properties of diverse forests, including size–frequency distributions, spacing relations, canopy configurations, mortality rates, population dynamics, successional dynamics, and resource flux rates. The theory uniquely makes quantitative predictions for both stand-level scaling exponents and normalizations. We evaluate these predictions by compiling and analyzing macroecological datasets from several tropical forests. The close match between theoretical predictions and data suggests that forests are organized by a set of very general scaling rules. Our mechanistic theory is based on allometric scaling relations, is complementary to “demographic theory,” but is fundamentally different in approach. It provides a quantitative baseline for understanding deviations from predictions due to other factors, including disturbance, variation in branching architecture, asymmetric competition, resource limitation, and other sources of mortality, which are not included in the deliberately simplified theory. The theory should apply to a wide range of forests despite large differences in abiotic environment, species diversity, and taxonomic and functional composition. PMID:19363161
Fink, Reinhold F
2010-11-07
A rigorous perturbation theory is proposed, which has the same second-order energy as the spin-component-scaled Møller-Plesset second-order (SCS-MP2) method of Grimme [J. Chem. Phys. 118, 9095 (2003)]. This upgrades SCS-MP2 to a systematically improvable, true wave-function-based method. The perturbation theory is defined by an unperturbed Hamiltonian, Ĥ(0), that contains the ordinary Fock operator and spin operators Ŝ² that act either on the occupied or the virtual orbital spaces. Two choices for Ĥ(0) are discussed, and the importance of a spin-pure Ĥ(0) is underlined. Like the SCS-MP2 approach, the theory contains two parameters (c_os and c_ss) that scale the opposite-spin and the same-spin contributions to the second-order perturbation energy. It is shown that these parameters can be determined from theoretical considerations by a Feenberg scaling approach or a fit of the wave functions from the perturbation theory to the exact one from a full configuration interaction calculation. The parameters c_os = 1.15 and c_ss = 0.75 are found to be optimal for a reasonable test set of molecules. The meaning of these parameters and the consequences following from a well-defined improved MP method are discussed.
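In standard notation, the scaled second-order energy quoted here has the SCS-MP2 form

$$E^{(2)}_{\mathrm{SCS}} = c_{\mathrm{os}}\, E^{(2)}_{\mathrm{os}} + c_{\mathrm{ss}}\, E^{(2)}_{\mathrm{ss}},$$

with the optimal values reported in the abstract, $c_{\mathrm{os}} = 1.15$ and $c_{\mathrm{ss}} = 0.75$. For orientation, ordinary MP2 corresponds to $c_{\mathrm{os}} = c_{\mathrm{ss}} = 1$, while Grimme's original SCS-MP2 uses $c_{\mathrm{os}} = 1.2$ and $c_{\mathrm{ss}} = 1/3$.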
Comparing a discrete and continuum model of the intestinal crypt
Murray, Philip J.; Walter, Alex; Fletcher, Alex G.; Edwards, Carina M.; Tindall, Marcus J.; Maini, Philip K.
2011-01-01
The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalisations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts. PMID:21411869
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Wenhu; Key Laboratory for Thermal Science and Power Engineering of Ministry of Education, Department of Thermal Engineering, Tsinghua University, Beijing 100084; Gao, Yang, E-mail: gaoyang-00@mails.tsinghua.edu.cn
The globally planar detonation in free space is numerically simulated, with particular interest in understanding and quantifying the emergence and evolution of the one-dimensional pulsating instability and the two-dimensional cellular structure, which is inherently also affected by pulsating instability. It is found that the pulsation includes three stages: rapid decay of the overdrive, approach to the Chapman-Jouguet state and emergence of weak pulsations, and the formation of strong pulsations; the evolution of the cellular structure also exhibits distinct behavior at these three stages: no cell formation, formation of small-scale, irregular cells, and formation of regular cells of a larger scale. Furthermore, the average shock pressure in the detonation front consists of fine-scale oscillations reflecting the collision dynamics of the triple-shock structure and large-scale oscillations affected by the global pulsation. The common stages of evolution between the cellular structure and the pulsating behavior, as well as the existence of shock-front pressure oscillation, suggest highly correlated mechanisms between them. Detonations with period doubling, period quadrupling, and chaotic amplitudes were also observed and studied for progressively increasing activation energies.
Brown, Jessi L.; Bedrosian, Bryan; Bell, Douglas A.; Braham, Melissa A.; Cooper, Jeff; Crandall, Ross H.; DiDonato, Joe; Domenech, Robert; Duerr, Adam E.; Katzner, Todd; Lanzone, Michael J.; LaPlante, David W.; McIntyre, Carol L.; Miller, Tricia A.; Murphy, Robert K.; Shreading, Adam; Slater, Steven J.; Smith, Jeff P.; Smith, Brian W.; Watson, James W.; Woodbridge, Brian
2017-01-01
Conserving wide-ranging animals requires knowledge about their year-round movements and resource use. Golden Eagles (Aquila chrysaetos) exhibit a wide range of movement patterns across North America. We combined tracking data from 571 Golden Eagles from multiple independent satellite-telemetry projects from North America to provide a comprehensive look at the magnitude and extent of these movements on a continental scale. We compared patterns of use relative to four alternative administrative and ecological mapping systems, namely Bird Conservation Regions (BCRs), U.S. administrative migratory bird flyways, Migratory Bird Joint Ventures, and Landscape Conservation Cooperatives. Our analyses suggested that eagles initially captured in eastern North America used space differently than those captured in western North America. Other groups of eagles that exhibited distinct patterns in space use included long-distance migrants from northern latitudes, and southwestern and Californian desert residents. There were also several groupings of eagles in the Intermountain West. Using this collaborative approach, we have identified large-scale movement patterns that may not have been possible with individual studies. These results will support landscape-scale conservation measures for Golden Eagles across North America.
Jackson, Nathan; Muthuswamy, Jit
2009-01-01
We report here a novel approach called MEMS microflex interconnect (MMFI) technology for packaging a new generation of Bio-MEMS devices that involve movable microelectrodes implanted in brain tissue. MMFI addresses the need for (i) operating space for movable parts and (ii) flexible interconnects for mechanical isolation. We fabricated a thin polyimide substrate with embedded bond-pads, vias, and conducting traces for the interconnect with a backside dry etch, so that the flexible substrate can act as a thin-film cap for the MEMS package. A double gold stud bump rivet bonding mechanism was used to form electrical connections to the chip and also to provide a spacing of approximately 15–20 µm for the movable parts. The MMFI approach achieved a chip scale package (CSP) that is lightweight, biocompatible, having flexible interconnects, without an underfill. Reliability tests demonstrated minimal increases of 0.35 mΩ, 0.23 mΩ and 0.15 mΩ in mean contact resistances under high humidity, thermal cycling, and thermal shock conditions respectively. High temperature tests resulted in an increase in resistance of > 90 mΩ when aluminum bond pads were used, but an increase of ~ 4.2 mΩ with gold bond pads. The mean-time-to-failure (MTTF) was estimated to be at least one year under physiological conditions. We conclude that MMFI technology is a feasible and reliable approach for packaging and interconnecting Bio-MEMS devices. PMID:20160981
Multiscale modeling of porous ceramics using movable cellular automaton method
NASA Astrophysics Data System (ADS)
Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.
2017-10-01
The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, a particle method in modern computational solid mechanics. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
Multiscale Cloud System Modeling
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
Urban greenspace for resilient city in the future: Case study of Yogyakarta City
NASA Astrophysics Data System (ADS)
Ni'mah, N. M.; Lenonb, S.
2017-06-01
The capacity for adaptation is an essential element of urban resilience. One adaptation that can be made is to consider the provision of open space and public space in the city. Development in Yogyakarta City, which has focused on built-up areas and neglected open space, has blurred the characteristics of the city. Increasing the availability of public space is one of the seven priorities of the programs included in the environmental sector and the utilization of space in Yogyakarta City. An understanding of the provision of public green open spaces in Yogyakarta is important because the products and processes that take place in a development will determine the successful implementation of the development plan. The objectives of this study are as follows: (1) to identify the provision of green space in Yogyakarta City from the aspects of product and procedure; and (2) to identify the role of green space in building a resilient city. This study uses a descriptive qualitative approach, with in-depth interviews, literature review, and triangulation as the methods for data collection. Yogyakarta has an instrument for public green open space provision called Masterplan Ruang Terbuka Hijau (RTH) Up-Scaling Yogyakarta 2013-2032, which governs the typologies and criteria for green open space development in the city. The public green open space development mechanism can be grouped into a planning phase, a utilization phase, and a control phase, each consisting of legal and regulatory aspects, institutional aspects, financial aspects, and technical aspects. The mechanism of green open space provision should take into account the need for advocacy for "urban green commons" (UGCs) development as a systematic collective-participatory approach to urban land management.
Implementation of a finite-amplitude method in a relativistic meson-exchange model
NASA Astrophysics Data System (ADS)
Sun, Xuwei; Lu, Dinghui
2017-08-01
The finite-amplitude method is a feasible numerical approach to large-scale random phase approximation (RPA) calculations. It avoids the storage and calculation of residual-interaction matrix elements as well as the diagonalization of the RPA matrix, which become prohibitive when the configuration space is huge. In this work we complete the implementation of the finite-amplitude method in a relativistic meson-exchange mean-field model with axial symmetry. The direct variation approach makes our FAM scheme capable of being extended to the multipole excitation case.
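The computational trick that makes this feasible is that the finite-amplitude method obtains the induced single-particle fields by a numerical finite difference of the mean-field Hamiltonian rather than by explicit residual-interaction matrix elements; schematically (the generic FAM prescription, not necessarily this paper's exact discretization),

$$\delta h(\omega) \simeq \frac{h[\rho_0 + \eta\, \delta\rho(\omega)] - h[\rho_0]}{\eta},$$

for a small parameter $\eta$, which is why neither the residual-interaction elements nor the full RPA matrix ever needs to be stored or diagonalized.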
Laser Amplifier Development for the Remote Sensing of CO2 from Space
NASA Technical Reports Server (NTRS)
Yu, Anthony W.; Abshire, James B.; Storm, Mark; Betin, Alexander
2015-01-01
Accurate global measurements of tropospheric CO2 mixing ratios are needed to study CO2 emissions and CO2 exchange with the land and oceans. NASA Goddard Space Flight Center (GSFC) is developing a pulsed lidar approach for an integrated path differential absorption (IPDA) lidar to allow global measurements of atmospheric CO2 column densities from space. Our group has developed, and successfully flown, an airborne pulsed lidar instrument that uses two tunable pulsed laser transmitters allowing simultaneous measurement of a single CO2 absorption line in the 1570 nm band, absorption of an O2 line pair in the oxygen A-band (765 nm), range, and atmospheric backscatter profiles in the same path. Both lasers are pulsed at 10 kHz, and the two absorption line regions are sampled at typically a 300 Hz rate. A space-based version of this lidar must have a much larger lidar power-area product due to the approximately 40x longer range and faster along-track velocity compared to the airborne instrument. Initial link budget analysis indicated that for a 400 km orbit, a 1.5 m diameter telescope, and a 10 second integration time, approximately 2 mJ of laser energy is required to attain the precision needed for each measurement. To meet this energy requirement, we have pursued parallel power scaling efforts to enable space-based lidar measurement of CO2 concentrations. These included a multiple-aperture approach consisting of multi-element large mode area fiber amplifiers and a single-aperture approach consisting of a multi-pass Er:Yb:phosphate glass based planar waveguide amplifier (PWA). In this paper we present our laser amplifier design approaches and preliminary results.
NASA Technical Reports Server (NTRS)
Engin, Doruk; Mathason, Brian; Stephen, Mark; Yu, Anthony; Cao, He; Fouron, Jean-Luc; Storm, Mark
2016-01-01
Accurate global measurements of tropospheric CO2 mixing ratios are needed to study CO2 emissions and CO2 exchange with the land and oceans. NASA Goddard Space Flight Center (GSFC) is developing a pulsed lidar approach for an integrated path differential absorption (IPDA) lidar to allow global measurements of atmospheric CO2 column densities from space. Our group has developed, and successfully flown, an airborne pulsed lidar instrument that uses two tunable pulsed laser transmitters allowing simultaneous measurement of a single CO2 absorption line in the 1570 nm band, absorption of an O2 line pair in the oxygen A-band (765 nm), range, and atmospheric backscatter profiles in the same path. Both lasers are pulsed at 10 kHz, and the two absorption line regions are sampled at typically a 300 Hz rate. A space-based version of this lidar must have a much larger lidar power-area product due to the 40x longer range and faster along-track velocity compared to the airborne instrument. Initial link budget analysis indicated that for a 400 km orbit, a 1.5 m diameter telescope, and a 10 second integration time, a laser energy of 2 mJ is required to attain the precision needed for each measurement. To meet this energy requirement, we have pursued parallel power scaling efforts to enable space-based lidar measurement of CO2 concentrations. These included a multiple-aperture approach consisting of multi-element large mode area fiber amplifiers and a single-aperture approach consisting of a multi-pass Er:Yb:phosphate glass based planar waveguide amplifier (PWA). In this paper we present our laser amplifier design approaches and preliminary results.
Schubert, Nicole; Axer, Markus; Schober, Martin; Huynh, Anh-Minh; Huysegoms, Marcel; Palomero-Gallagher, Nicola; Bjaalie, Jan G.; Leergaard, Trygve B.; Kirlangic, Mehmet E.; Amunts, Katrin; Zilles, Karl
2016-01-01
High-resolution multiscale and multimodal 3D models of the brain are essential tools to understand its complex structural and functional organization. Neuroimaging techniques addressing different aspects of brain organization should be integrated in a reference space to enable topographically correct alignment and subsequent analysis of the various datasets and their modalities. The Waxholm Space (http://software.incf.org/software/waxholm-space) is a publicly available 3D coordinate-based standard reference space for the mapping and registration of neuroanatomical data in rodent brains. This paper provides a newly developed pipeline combining imaging and reconstruction steps with a novel registration strategy to integrate new neuroimaging modalities into the Waxholm Space atlas. As a proof of principle, we incorporated large scale high-resolution cyto-, muscarinic M2 receptor, and fiber architectonic images of rat brains into the 3D digital MRI based atlas of the Sprague Dawley rat in Waxholm Space. We describe the whole workflow, from image acquisition to reconstruction and registration of these three modalities into the Waxholm Space rat atlas. The registration of the brain sections into the atlas is performed by using both linear and non-linear transformations. The validity of the procedure is qualitatively demonstrated by visual inspection, and a quantitative evaluation is performed by measurement of the concordance between representative atlas-delineated regions and the same regions based on receptor or fiber architectonic data. This novel approach enables for the first time the generation of 3D reconstructed volumes of nerve fibers and fiber tracts, or of muscarinic M2 receptor density distributions, in an entire rat brain. Additionally, our pipeline facilitates the inclusion of further neuroimaging datasets, e.g., 3D reconstructed volumes of histochemical stainings or of the regional distributions of multiple other receptor types, into the Waxholm Space. Thereby, a multiscale and multimodal rat brain model was created in the Waxholm Space atlas of the rat brain. Since the registration of these multimodal high-resolution datasets into the same coordinate system is an indispensable requisite for multi-parameter analyses, this approach enables combined studies on receptor and cell distributions as well as fiber densities in the same anatomical structures at microscopic scales for the first time. PMID:27199682
Novel Space-based Solar Power Technologies and Architectures for Earth and Beyond
NASA Technical Reports Server (NTRS)
Howell, Joe T.; Fikes, John C.; O'Neill, Mark J.
2005-01-01
Research, development and studies of novel space-based solar power systems, technologies and architectures for Earth and beyond are needed to reduce the cost of clean electrical power for terrestrial use and to provide a stepping stone for providing an abundance of power in space, i.e., manufacturing facilities, tourist facilities, delivery of power between objects in space, and between space and surface sites. The architectures, technologies and systems needed for space-to-Earth applications may also be used for in-space applications. Advances in key technologies, i.e., power generation, power management and distribution, power beaming and conversion of beamed power, are needed to achieve the objectives of both terrestrial and extraterrestrial applications. Power beaming or wireless power transmission (WPT) can involve lasers or microwaves along with the associated power interfaces. Microwave and laser transmission techniques have been studied, with several promising approaches to safe and efficient WPT identified. These investigations have included microwave phased array transmitters, as well as laser transmission and associated optics. There is a need to produce "proof-of-concept" validation of critical WPT technologies for both near-term and far-term applications. Investments may be harvested in near-term, beam-safe demonstrations of commercial WPT applications. Receiving sites (users) include ground-based stations for terrestrial electrical power, orbital sites to provide power for satellites and other platforms, future space elevator systems, space vehicle propulsion, and space-to-surface sites. This paper briefly discusses a promising approach to solar power generation and beamed power conversion. The approach is based on a unique high-power solar concentrator array called the Stretched Lens Array (SLA) for both solar power generation and beamed power conversion. Since both versions (solar and laser) of SLA use many identical components (only the photovoltaic cells need to be different), economies of manufacturing and scale may be realized by using SLA on both ends of the laser power beaming system in a space solar power application. Near-term uses of this SLA-laser-SLA system may include terrestrial and space exploration in near-Earth space. Later uses may include beamed power for bases or vehicles on Mars.
NASA Astrophysics Data System (ADS)
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain
2015-08-01
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. We focus on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
Beyond information and utility: Transforming public spaces with media facades.
Fischer, Patrick Tobias; Zöllner, Christian; Hoffmann, Thilo; Piatza, Sebastian; Hornecker, Eva
2013-01-01
Media facades (often characterized as a building's digital skin) are public displays that substitute dynamic details and information for usually static structures. SMSlingshot is a media facade system at the confluence of art, architecture, and technology design in the context of urban human-computer interaction. It represents a participative approach to public displays that enlivens public spaces and fosters civic and social dialogue as an alternative to advertising and service-oriented information displays. Observations from SMSlingshot's implementation at festival exhibitions provide insight into the roles of scale, distance, and the spatial situation of media facade contexts. The lessons learned apply to most public-display situations and will be useful for designers and developers of this new medium in urban spaces.
Vertical cities - the new form of high-rise construction evolution
NASA Astrophysics Data System (ADS)
Akristiniy, Vera A.; Boriskina, Yulia I.
2018-03-01
The article considers the basic principles of the formation of vertical cities for the creation of a comfortable urban environment under conditions of rapid population growth and limited territory. As urban growth increases, there is a need for new concepts and approaches to urban space planning through the massive introduction of high-rise construction. The authors analyzed and systematized the list of high-tech solutions for arranging the space of vertical cities, which are an integral part of creating a methodology for forming high-rise buildings. Their concept is distinguished by its scale, large areas of public space, a tendency toward self-sufficiency and sustainability, and the opportunity to offer a new, uniquely comfortable environment to the population living in them.
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Ting, Yuan-Sen
2018-04-01
The distributions of a galaxy's gas and stars in chemical space encode a tremendous amount of information about that galaxy's physical properties and assembly history. However, present methods for extracting information from chemical distributions are based either on coarse averages measured over galactic scales (e.g. metallicity gradients) or on searching for clusters in chemical space that can be identified with individual star clusters or gas clouds on ˜1 pc scales. These approaches discard most of the information, because in galaxies gas and young stars are observed to be distributed fractally, with correlations on all scales, and the same is likely to be true of metals. In this paper we introduce a first theoretical model, based on stochastically forced diffusion, capable of predicting the multiscale statistics of metal fields. We derive the variance, correlation function, and power spectrum of the metal distribution from first principles, and determine how these quantities depend on elements' astrophysical origin sites and on the large-scale properties of galaxies. Among other results, we explain for the first time why the typical abundance scatter observed in the interstellar media of nearby galaxies is ≈0.1 dex, and we predict that this scatter will be correlated on spatial scales of ˜0.5-1 kpc, and over time-scales of ˜100-300 Myr. We discuss the implications of our results for future chemical tagging studies.
Tunable Nanowire Patterning Using Standing Surface Acoustic Waves
Chen, Yuchao; Ding, Xiaoyun; Lin, Sz-Chin Steven; Yang, Shikuan; Huang, Po-Hsun; Nama, Nitesh; Zhao, Yanhui; Nawaz, Ahmad Ahsan; Guo, Feng; Wang, Wei; Gu, Yeyi; Mallouk, Thomas E.; Huang, Tony Jun
2014-01-01
Patterning of nanowires in a controllable, tunable manner is important for the fabrication of functional nanodevices. Here we present a simple approach for tunable nanowire patterning using standing surface acoustic waves (SSAW). This technique allows for the construction of large-scale nanowire arrays with well-controlled patterning geometry and spacing within 5 seconds. In this approach, SSAWs were generated by interdigital transducers (IDTs), which induced a periodic alternating current (AC) electric field on the piezoelectric substrate and consequently patterned metallic nanowires in suspension. The patterns could be deposited onto the substrate after the liquid evaporated. By controlling the distribution of the SSAW field, metallic nanowires were assembled into different patterns including parallel and perpendicular arrays. The spacing of the nanowire arrays could be tuned by controlling the frequency of the surface acoustic waves. Additionally, we observed 3D spark-shape nanowire patterns in the SSAW field. The SSAW-based nanowire-patterning technique presented here possesses several advantages over alternative patterning approaches, including high versatility, tunability, and efficiency, making it promising for device applications. PMID:23540330
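The frequency-to-spacing relation behind this tunability is simple: pressure nodes of a standing wave sit half a wavelength apart, so the array pitch follows directly from the SAW frequency. A sketch (the substrate SAW speed is an assumed typical value for lithium niobate, not a figure taken from the paper):

```python
def ssaw_node_spacing(frequency_hz, saw_speed_m_s=3990.0):
    """Spacing between adjacent pressure nodes of a standing SAW:
    d = lambda / 2 = c / (2 f)."""
    return saw_speed_m_s / (2.0 * frequency_hz)

# ~100 um pitch at 20 MHz; doubling the frequency halves the spacing.
print(ssaw_node_spacing(20e6) * 1e6, "micrometers")
```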
Design and development of a Space Station proximity operations research and development mockup
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1986-01-01
Proximity operations (Prox-Ops) on-orbit refers to all activities taking place within one km of the Space Station. Designing a Prox-Ops control station calls for a comprehensive systems approach which takes into account structural constraints, orbital dynamics including approach/departure flight paths, myriad human factors, and other topics. This paper describes a reconfigurable full-scale mock-up of a Prox-Ops station constructed at Ames incorporating an array of windows (with dynamic star field, target vehicle(s), and head-up symbology), a head-down perspective display of manned and unmanned vehicles, a voice-actuated 'electronic checklist', a computer-generated voice system, an expert system (to help diagnose subsystem malfunctions), and other displays and controls. The facility is used for demonstrations of selected Prox-Ops approach scenarios, human factors research (workload assessment, determining external vision envelope requirements, head-down and head-up symbology design, voice synthesis and recognition research, etc.), and development of engineering design guidelines for future module interiors.
Aerial vehicles collision avoidance using monocular vision
NASA Astrophysics Data System (ADS)
Balashov, Oleg; Muraviev, Vadim; Strotov, Valery
2016-10-01
In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching, and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, a system of equations relating object coordinates in space to the observed image is solved. The system solution gives the current position and speed of the detected object in space. Using this information, distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
Modeling velocity space-time correlations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2016-11-01
Turbulent fluctuations of wind velocities cause power-output fluctuations in wind farms. The statistics of velocity fluctuations can be described by velocity space-time correlations in the atmospheric boundary layer. In this context, it is important to derive simple physics-based models. The so-called Tennekes-Kraichnan random sweeping hypothesis states that small-scale velocity fluctuations are passively advected by large-scale velocity perturbations in a random fashion. In the present work, this hypothesis is used with an additional mean wind velocity to derive a model for the spatial and temporal decorrelation of velocities in wind farms. It turns out that in the framework of this model, space-time correlations are a convolution of the spatial correlation function with a temporal decorrelation kernel. In this presentation, first results on the comparison to large eddy simulations will be presented and the potential of the approach to characterize power output fluctuations of wind farms will be discussed. Acknowledgements: 'Fellowships for Young Energy Scientists' (YES!) of FOM, the US National Science Foundation Grant IIA 1243482, and support by the Max Planck Society.
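Under the stated hypothesis the structure of the model can be written compactly. With mean advection $U$, a random large-scale sweeping velocity $v$ with probability density $P(v)$, and spatial correlation $R_s$, a schematic one-dimensional version reads

$$R(r, \tau) = \langle u(x, t)\, u(x + r, t + \tau) \rangle = \int R_s\bigl(r - (U + v)\,\tau\bigr)\, P(v)\, \mathrm{d}v,$$

which is exactly the stated convolution: the spatial correlation function evaluated at sweep-shifted separations and averaged over the sweeping-velocity distribution, yielding a temporal decorrelation kernel whose width grows with $\tau$.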
Use of randomized sampling for analysis of metabolic networks.
Schellenberger, Jan; Palsson, Bernhard Ø
2009-02-27
Genome-scale metabolic network reconstructions in microorganisms have been formulated and studied for about 8 years. The constraint-based approach has shown great promise in analyzing the systemic properties of these network reconstructions. Notably, constraint-based models have been used successfully to predict the phenotypic effects of knock-outs and for metabolic engineering. The inherent uncertainty in both parameters and variables of large-scale models is significant and is well suited to study by Monte Carlo sampling of the solution space. These techniques have been applied extensively to the reaction rate (flux) space of networks, with more recent work focusing on dynamic/kinetic properties. Monte Carlo sampling as an analysis tool has many advantages, including the ability to work with missing data, the ability to apply post-processing techniques, and the ability to quantify uncertainty and to optimize experiments to reduce uncertainty. We present an overview of this emerging area of research in systems biology.
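A compact sketch of one widely used sampler for such spaces, hit-and-run over the flux polytope {v : S v = 0, lb <= v <= ub} (illustrative only: production tools add warm-up, rounding, and numerical safeguards, and a feasible interior start point v0 is assumed to be available, e.g., from a linear program):

```python
import numpy as np

def hit_and_run(S, lb, ub, v0, n_samples=1000, seed=None):
    """Sample flux vectors uniformly from {v: S v = 0, lb <= v <= ub}."""
    rng = np.random.default_rng(seed)
    # Orthonormal basis of the null space of S, so every step preserves S v = 0.
    N = np.linalg.svd(S)[2].T[:, np.linalg.matrix_rank(S):]
    v, samples = v0.astype(float).copy(), []
    for _ in range(n_samples):
        d = N @ rng.standard_normal(N.shape[1])
        d /= np.linalg.norm(d)
        # Largest and smallest steps along d that keep every bound satisfied.
        with np.errstate(divide="ignore", invalid="ignore"):
            hi = np.where(d > 0, (ub - v) / d, np.where(d < 0, (lb - v) / d, np.inf))
            lo = np.where(d > 0, (lb - v) / d, np.where(d < 0, (ub - v) / d, -np.inf))
        v = v + rng.uniform(lo.max(), hi.min()) * d
        samples.append(v.copy())
    return np.array(samples)
```

Post-processing the returned samples (marginal flux histograms, correlations between reactions) is where the advantages listed above, such as uncertainty quantification, come into play.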
Statistical physics approach to earthquake occurrence and forecasting
NASA Astrophysics Data System (ADS)
de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio
2016-04-01
There is striking evidence that the dynamics of the Earth crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods as scaling laws, universality, fractal dimension, renormalization group, to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and provide a unifying description for the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and allow to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different levels of prediction. In this review we also briefly discuss how the statistical mechanics approach can be applied to non-tectonic earthquakes and to other natural stochastic processes, such as volcanic eruptions and solar flares.
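The branching description mentioned here is commonly made concrete through the ETAS (epidemic-type aftershock sequence) conditional intensity, in which every past event raises the current occurrence rate according to the Omori law and an exponential productivity in magnitude. A sketch of the temporal part (parameter values are placeholders):

```python
import numpy as np

def etas_rate(t, event_times, event_mags,
              mu=0.2, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=3.0):
    """Conditional intensity
    lambda(t) = mu + sum_{t_i < t} K * 10**(alpha*(m_i - m0)) * (t - t_i + c)**(-p)
    with background rate mu and Omori decay exponent p."""
    event_times, event_mags = np.asarray(event_times), np.asarray(event_mags)
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(K * 10.0 ** (alpha * (event_mags[past] - m0)) * (dt + c) ** (-p))
```

Forecasts follow by integrating lambda(t) over the target interval; discriminating among triggering mechanisms amounts to testing which parameterizations of the kernel best explain the catalog.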
Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications
NASA Astrophysics Data System (ADS)
Maskey, M.; Ramachandran, R.; Miller, J.
2017-12-01
Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.
Assurance Technology Challenges of Advanced Space Systems
NASA Technical Reports Server (NTRS)
Chern, E. James
2004-01-01
The initiative to explore space and extend a human presence across our solar system to revisit the moon and Mars poses enormous technological challenges to the nation's space agency and aerospace industry. Key areas of technology development needed to enable the endeavor include advanced materials, structures and mechanisms; micro/nano sensors and detectors; power generation, storage and management; advanced thermal and cryogenic control; guidance, navigation and control; command and data handling; advanced propulsion; advanced communication; on-board processing; advanced information technology systems; modular and reconfigurable systems; precision formation flying; solar sails; distributed observing systems; and space robotics. Quality assurance concerns such as functional performance, structural integrity, radiation tolerance, health monitoring, diagnosis, maintenance, calibration, and initialization can affect the performance of systems and subsystems. It is thus imperative to employ innovative nondestructive evaluation methodologies to ensure the quality and integrity of advanced space systems. Advancements in integrated multi-functional sensor systems, autonomous inspection approaches, distributed embedded sensors, roaming inspectors, and shape-adaptive sensors are sought. Concepts in computational models for signal processing and data interpretation to establish quantitative characterization and event determination are also of interest. Prospective evaluation technologies include ultrasonics, laser ultrasonics, optics and fiber optics, shearography, video optics and metrology, thermography, electromagnetics, acoustic emission, x-ray, data management, biomimetics, and nano-scale sensing approaches for structural health monitoring.
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy-minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale, large-scale features of the objects are preserved while small-scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a globally optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization, as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
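A minimal sketch of the coarse-to-fine scaffolding described above (SciPy-based; the sigma schedule and the temperature coupling are illustrative, and the deformable-model energy itself is omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, sigmas=(8.0, 4.0, 2.0, 1.0)):
    """Linear scale space: Gaussian blurs ordered coarse to fine, ending with
    the original image for the finest-scale fit."""
    img = image.astype(float)
    return [gaussian_filter(img, s) for s in sigmas] + [img]

# Coarse-to-fine tracking: fit the contour at the coarsest level with a high
# simulated-annealing start temperature, then initialize each finer level
# from the previous result with a lower start temperature (near 0 at the end).
```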
Light-front holographic QCD and emerging confinement
Brodsky, Stanley J.; de Téramond, Guy F.; Dosch, Hans Günter; ...
2015-05-21
In this study we explore the remarkable connections between light-front dynamics, its holographic mapping to gravity in a higher-dimensional anti-de Sitter (AdS) space, and conformal quantum mechanics. This approach provides new insights into the origin of a fundamental mass scale and the physics underlying confinement dynamics in QCD in the limit of massless quarks. The result is a relativistic light-front wave equation for arbitrary spin with an effective confinement potential derived from a conformal action and its embedding in AdS space. This equation allows for the computation of essential features of hadron spectra in terms of a single scale. The light-front holographic methods described here give a precise interpretation of holographic variables and quantities in AdS space in terms of light-front variables and quantum numbers. This leads to a relation between the AdS wave functions and the boost-invariant light-front wave functions describing the internal structure of hadronic bound states in physical spacetime. The pion is massless in the chiral limit, and the excitation spectra of relativistic light-quark meson and baryon bound states lie on linear Regge trajectories with identical slopes in the radial and orbital quantum numbers. In the light-front holographic approach described here, currents are expressed as an infinite sum of poles, and form factors as a product of poles. At large q2 the form factor incorporates the correct power-law fall-off for hard scattering independent of the specific dynamics and is dictated by the twist. At low q2 the form factor leads to vector dominance. The approach is also extended to include small quark masses. We briefly review in this report other holographic approaches to QCD, in particular top-down and bottom-up models based on chiral symmetry breaking. We also include a discussion of open problems and future applications.
Revealing small-scale diffracting discontinuities by an optimization inversion algorithm
NASA Astrophysics Data System (ADS)
Yu, Caixia; Zhao, Jingtao; Wang, Yanfei
2017-02-01
Small-scale diffracting geologic discontinuities play a significant role in studying carbonate reservoirs. Their seismic responses are encoded in diffracted/scattered waves. However, compared with reflections, the energy of these valuable diffractions is generally one or even two orders of magnitude weaker, which means that the information carried by diffractions is strongly masked by reflections in seismic images. Detecting small-scale cavities and tiny faults in deep carbonate reservoirs, mainly deeper than 6 km, poses an even bigger challenge for seismic diffractions, as the surveyed seismic signals are weak and have a low signal-to-noise ratio (SNR). After analyzing the mechanism of the Kirchhoff migration method, the residual of prestack diffractions located in the neighborhood of the first Fresnel aperture is found to remain in the image space. Therefore, a strategy for extracting diffractions in the image space is proposed, and a regularized L2-norm model with a smoothness constraint on the local slopes is suggested for predicting reflections. According to the focusing conditions of residual diffractions in the image space, two approaches are provided for extracting diffractions. Diffraction extraction can be accomplished directly by subtracting the predicted reflections from the seismic imaging data if the residual diffractions are focused; otherwise, a diffraction velocity analysis is performed to refocus the residual diffractions. Two synthetic examples and one field application demonstrate the feasibility and efficiency of the two proposed methods in detecting small-scale geologic scatterers, tiny faults and cavities.
Spatio-Temporal Variability of Groundwater Storage in India
NASA Technical Reports Server (NTRS)
Bhanja, Soumendra; Rodell, Matthew; Li, Bailing; Mukherjee, Abhijit
2016-01-01
Groundwater level measurements from 3907 monitoring wells, distributed within 22 major river basins of India, are assessed to characterize their spatial and temporal variability. Groundwater storage (GWS) anomalies (relative to the long-term mean) exhibit strong seasonality, with annual maxima observed during the monsoon season and minima during the pre-monsoon season. Spatial variability of GWS anomalies increases with the extent of measurements, following a power law relationship, i.e., log-(spatial variability) is linearly dependent on log-(spatial extent). In addition, the impact of well spacing on spatial variability and the power law relationship is investigated. We found that the mean GWS anomaly sampled at a 0.25 degree grid scale is close to the unweighted average over all wells. The absolute error corresponding to each basin grows with increasing scale, i.e., from 0.25 degree to 1 degree. It was observed that small changes in extent can create very large changes in spatial variability at large grid scales. Spatial variability of the GWS anomaly has been found to vary with climatic conditions. To our knowledge, this is the first study of the effects of well spacing on groundwater spatial variability. The results may be useful for interpreting large scale groundwater variations from unevenly spaced or sparse groundwater well observations, or for siting and prioritizing wells in a network for groundwater management. The output of this study could be used to maintain a cost effective groundwater monitoring network in the study region, and the approach can also be used in other parts of the globe.
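As a minimal sketch of the power-law relationship quoted above (the numbers below are invented for illustration, not the study's data), one can regress log spatial variability on log extent:

```python
import numpy as np

# Hypothetical spatial standard deviation of GWS anomalies (cm) computed
# over nested extents (degrees); the values are made up for illustration.
extent = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
spatial_sd = np.array([1.8, 2.4, 3.3, 4.4, 6.1, 8.0])

# log-(spatial variability) linear in log-(extent)  =>  sd = c * extent**b
b, log_c = np.polyfit(np.log10(extent), np.log10(spatial_sd), deg=1)
print(f"power-law exponent b ~ {b:.2f}, prefactor c ~ {10**log_c:.2f} cm")
```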
Exploring the brain on multiple scales with correlative two-photon and light sheet microscopy
NASA Astrophysics Data System (ADS)
Silvestri, Ludovico; Allegra Mascaro, Anna Letizia; Costantini, Irene; Sacconi, Leonardo; Pavone, Francesco S.
2014-02-01
One of the unique features of the brain is that its activity cannot be framed in a single spatio-temporal scale, but rather spans many orders of magnitude both in space and time. A single imaging technique can reveal only a small part of this complex machinery. To obtain a more comprehensive view of brain functionality, complementary approaches should be combined into a correlative framework. Here, we describe a method to integrate data from in vivo two-photon fluorescence imaging and ex vivo light sheet microscopy, taking advantage of blood vessels as a reference chart. We show how the apical dendritic arbor of a single cortical pyramidal neuron imaged in living thy1-GFP-M mice can be found in the large-scale brain reconstruction obtained with light sheet microscopy. Starting from the apical portion, the whole pyramidal neuron can then be segmented. The correlative approach presented here allows the neurons whose dynamics have been observed in high detail in vivo to be contextualized within a three-dimensional anatomical framework.
Spatio-temporal hierarchy in the dynamics of a minimalist protein model
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Baba, Akinori; Li, Chun-Biu; Straub, John E.; Toda, Mikito; Komatsuzaki, Tamiki; Berry, R. Stephen
2013-12-01
A method for time series analysis of molecular dynamics simulation of a protein is presented. In this approach, wavelet analysis and principal component analysis are combined to decompose the spatio-temporal protein dynamics into contributions from a hierarchy of different time and space scales. Unlike the conventional Fourier-based approaches, the time-localized wavelet basis captures the vibrational energy transfers among the collective motions of proteins. As an illustrative vehicle, we have applied our method to a coarse-grained minimalist protein model. During the folding and unfolding transitions of the protein, vibrational energy transfers between the fast and slow time scales were observed among the large-amplitude collective coordinates while the other small-amplitude motions are regarded as thermal noise. Analysis employing a Gaussian-based measure revealed that the time scales of the energy redistribution in the subspace spanned by such large-amplitude collective coordinates are slow compared to the other small-amplitude coordinates. Future prospects of the method are discussed in detail.
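As a hedged illustration of the wavelet-plus-PCA decomposition (a toy signal, not the authors' trajectory or code; PyWavelets supplies the continuous wavelet transform):

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
# Toy stand-in for a coarse-grained MD trajectory: 3 coordinates, 2048 frames.
t = np.linspace(0, 40 * np.pi, 2048)
traj = np.column_stack([np.sin(t), np.sin(3 * t), rng.normal(0, 0.3, t.size)])

# 1) PCA: diagonalize the covariance to obtain collective coordinates.
centered = traj - traj.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]              # largest-amplitude collective mode

# 2) Continuous wavelet transform of the leading collective coordinate: the
# time-localized basis resolves when energy moves between time scales.
scales = np.arange(1, 64)
coeffs, _ = pywt.cwt(pc1, scales, "morl")
power = np.abs(coeffs) ** 2         # (scale x time) energy map
print("energy map shape (scales, frames):", power.shape)
```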
NASA Astrophysics Data System (ADS)
Morikawa, Y.; Murata, K. T.; Watari, S.; Kato, H.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Shimojo, S.
2010-12-01
The main methodologies of Solar-Terrestrial Physics (STP) to date have been theoretical, experimental/observational, and computer simulation approaches. Recently, "informatics" has been proposed as a new (fourth) approach to STP studies: a methodology for analyzing large-scale data (observational and computer simulation data) to obtain new findings using a variety of data processing techniques. At NICT (National Institute of Information and Communications Technology, Japan) we are developing a new research environment named "OneSpaceNet". OneSpaceNet is a cloud-computing environment specialized for scientific work, which connects many researchers through a high-speed network (JGN: Japan Gigabit Network). The JGN is a wide-area backbone network operated by NICT; it provides a 10 Gbps network and many access points (APs) across Japan. OneSpaceNet also provides rich computing resources for research, such as supercomputers, large-scale data storage, licensed applications, visualization devices (like a tiled display wall: TDW), databases/DBMS, cluster computers (4-8 nodes) for data processing, and communication devices. A notable advantage of the science cloud is that a user needs only a terminal (a low-cost PC): once the PC is connected to JGN2plus, the user can make full use of the cloud's rich resources. Using communication devices such as video-conference systems, streaming and reflector servers, and media players, users of OneSpaceNet can communicate as if they belonged to a single laboratory: they are members of a virtual laboratory. The computer resources on OneSpaceNet are specified as follows. The data storage developed so far holds almost 1 PB, and the number of data files managed on the cloud storage is growing, now exceeding 40,000,000. Notably, the disks forming the large-scale storage are distributed across five data centers in Japan, yet the storage system performs as one disk. Three supercomputers are allocated to the cloud, in Tokyo, Osaka, and Nagoya. Simulation output from any of the supercomputers is saved to the same directory on the cloud storage; it is a kind of virtual computing environment. The tiled display wall has 36 panels acting as one display, with a resolution as large as 18000x4300 pixels. This is sufficient to preview and analyze large-scale computer simulation data, and it also allows many researchers together to view multiple images (e.g., 100 pictures) on one screen. In our talk we also present a brief report of initial results using OneSpaceNet for global MHD simulations as an example of successful use of our science cloud: (i) ultra-high time resolution visualization of global MHD simulations on the large-scale storage and parallel processing system of the cloud, (ii) a database of real-time global MHD simulations and statistical analyses of the data, and (iii) a 3D Web service for global MHD simulations.
Multivariate analysis of scale-dependent associations between bats and landscape structure
Gorresen, P.M.; Willig, M.R.; Strauss, R.E.
2005-01-01
The assessment of biotic responses to habitat disturbance and fragmentation generally has been limited to analyses at a single spatial scale. Furthermore, methods to compare responses between scales have lacked the ability to discriminate among patterns related to the identity, strength, or direction of associations of biotic variables with landscape attributes. We present an examination of the relationship of population- and community-level characteristics of phyllostomid bats with habitat features that were measured at multiple spatial scales in Atlantic rain forest of eastern Paraguay. We used a matrix of partial correlations between each biotic response variable (i.e., species abundance, species richness, and evenness) and a suite of landscape characteristics to represent the multifaceted associations of bats with spatial structure. Correlation matrices can correspond based on either the strength (i.e., magnitude) or direction (i.e., sign) of association. Therefore, a simulation model independently evaluated correspondence in the magnitude and sign of correlations among scales, and results were combined via a meta-analysis to provide an overall test of significance. Our approach detected both species-specific differences in response to landscape structure and scale dependence in those responses. This matrix-simulation approach has broad applicability to ecological situations in which multiple intercorrelated factors contribute to patterns in space or time. © 2005 by the Ecological Society of America.
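For readers unfamiliar with the core quantity, the following Python sketch (synthetic data, not the study's) computes a row of the matrix of partial correlations from the inverse of the correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(11)
# Hypothetical site-by-variable matrix for one spatial scale: column 0 is a
# biotic response (e.g., abundance), columns 1-3 are landscape variables.
data = rng.normal(size=(60, 4))

# Partial correlation of variables i and j, controlling for all others:
# r_ij.rest = -P_ij / sqrt(P_ii * P_jj), with P the precision matrix.
corr = np.corrcoef(data, rowvar=False)
prec = np.linalg.inv(corr)
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)
np.fill_diagonal(partial, 1.0)
print(np.round(partial[0, 1:], 2))  # response vs. each landscape variable
```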
Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method
NASA Astrophysics Data System (ADS)
Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.
2017-10-01
The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random, unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
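One ingredient of the upscaling step can be sketched as follows (the strengths below are hypothetical, not the paper's data): fit a Weibull distribution to sample strengths from the current scale level, whose parameters then feed the effective medium used at the next level.

```python
import numpy as np
from scipy import stats

# Hypothetical strengths (MPa) of representative porous samples simulated
# at the current scale level.
strengths = np.array([212.0, 198.5, 230.1, 205.3, 221.7, 189.9, 215.4, 226.8])

# Two-parameter Weibull fit (location fixed at zero), as is customary for
# brittle-strength statistics.
shape, loc, scale = stats.weibull_min.fit(strengths, floc=0.0)
print(f"Weibull modulus m ~ {shape:.1f}, characteristic strength ~ {scale:.1f} MPa")
```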
Large-scale velocities and primordial non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Fabian
2010-09-15
We study the peculiar velocities of density peaks in the presence of primordial non-Gaussianity. Rare, high-density peaks in the initial density field can be identified with tracers such as galaxies and clusters in the evolved matter distribution. The distribution of relative velocities of peaks is derived in the large-scale limit using two different approaches based on a local biasing scheme. Both approaches agree, and show that halos still stream with the dark matter locally as well as statistically, i.e. they do not acquire a velocity bias. Nonetheless, even a moderate degree of (not necessarily local) non-Gaussianity induces a significant skewness (≈0.1-0.2) in the relative velocity distribution, making it a potentially interesting probe of non-Gaussianity on intermediate to large scales. We also study two-point correlations in redshift space. The well-known Kaiser formula is still a good approximation on large scales, if the Gaussian halo bias is replaced with its (scale-dependent) non-Gaussian generalization. However, there are additional terms not encompassed by this simple formula which become relevant on smaller scales (k ≳ 0.01 h/Mpc). Depending on the allowed level of non-Gaussianity, these could be of relevance for future large spectroscopic surveys.
Status of DSMT research program
NASA Technical Reports Server (NTRS)
Mcgowan, Paul E.; Javeed, Mehzad; Edighoffer, Harold H.
1991-01-01
The status of the Dynamic Scale Model Technology (DSMT) research program is presented. DSMT is developing scale model technology for large space structures as part of the Control Structure Interaction (CSI) program at NASA Langley Research Center (LaRC). Under DSMT a hybrid-scale structural dynamics model of Space Station Freedom was developed. Space Station Freedom was selected as the focus structure for DSMT since the station represents the first opportunity to obtain flight data on a complex, three-dimensional space structure. An overview of DSMT is included, covering the development of the space station scale model and the resulting hardware. Scaling technology was developed for this model to achieve a ground test article that existing test facilities can accommodate while employing realistically scaled hardware. The model was designed and fabricated by the Lockheed Missiles and Space Co. and is assembled at LaRC for dynamic testing. Also, results from ground tests and analyses of the various model components are presented, along with plans for future subassembly and mated model tests. Finally, utilization of the scale model for enhancing analysis verification of the full-scale space station is also considered.
Unsupervised individual tree crown detection in high-resolution satellite imagery
Skurikhin, Alexei N.; McDowell, Nate G.; Middleton, Richard S.
2016-01-26
Rapidly and accurately detecting individual tree crowns in satellite imagery is a critical need for monitoring and characterizing forest resources. We present a two-stage semiautomated approach for detecting individual tree crowns using high spatial resolution (0.6 m) satellite imagery. First, active contours are used to recognize tree canopy areas in a normalized difference vegetation index image. Given the image areas corresponding to tree canopies, we then identify individual tree crowns as local extrema points in the Laplacian of Gaussian scale-space pyramid. The approach simultaneously detects tree crown centers and estimates tree crown sizes, parameters critical to multiple ecosystem models. As a demonstration, we used a ground validated, 0.6 m resolution QuickBird image of a sparse forest site. The two-stage approach produced a tree count estimate with an accuracy of 78% for a naturally regenerating forest with irregularly spaced trees, a success rate equivalent to or better than existing approaches. In addition, our approach detects tree canopy areas and individual tree crowns in an unsupervised manner and helps identify overlapping crowns. Furthermore, the method also demonstrates significant potential for further improvement.
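As a hedged illustration of the scale-space step (not the authors' pipeline), scikit-image's blob_log finds local extrema in a Laplacian-of-Gaussian pyramid, jointly returning blob centers and the scale of strongest response:

```python
import numpy as np
from skimage.feature import blob_log

rng = np.random.default_rng(2)
# Hypothetical NDVI-like image with three crown-sized Gaussian bright spots.
yy, xx = np.mgrid[0:200, 0:200]
ndvi = np.zeros((200, 200))
for r, c, s in [(40, 60, 4.0), (120, 90, 6.0), (160, 150, 5.0)]:
    ndvi += np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * s ** 2))
ndvi += rng.normal(0, 0.02, ndvi.shape)

# Local extrema of the LoG scale-space pyramid give centers and sizes at
# once (the blob radius is roughly sqrt(2) * sigma).
blobs = blob_log(ndvi, min_sigma=2, max_sigma=10, num_sigma=9, threshold=0.1)
for row, col, sigma in blobs:
    print(f"crown at ({row:.0f}, {col:.0f}), radius ~ {np.sqrt(2) * sigma:.1f} px")
```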
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-03-01
Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while simultaneously suppressing noise. In this work, a novel method is introduced to detect multiple damage sites in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement noise-based interference. The effectiveness of the method is verified numerically and experimentally on different structural types. The results demonstrate that the proposed method has two advantages. First, damage features are extracted from differences between multi-scale representations, so that noise amplification is avoided. Second, the data fusion technique applied within the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are used to validate that the proposed method achieves higher accuracy in damage detection.
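A toy Python sketch of the difference-of-scales idea (the mode shape, damage model, and smoothing scales below are assumptions, and the data-fusion stage is omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical measured mode shape of a beam: a smooth first bending mode
# with a small, localized damage-induced distortion near x = 0.6, plus noise.
x = np.linspace(0.0, 1.0, 400)
rng = np.random.default_rng(3)
mode = np.sin(np.pi * x) + 0.02 * np.exp(-((x - 0.6) / 0.008) ** 2)
noisy = mode + rng.normal(0.0, 1e-3, x.size)

# Damage feature = difference between two multi-scale (Gaussian-smoothed)
# representations: a band-pass operation that keeps the localized
# irregularity while avoiding the noise amplification of differentiation.
feature = np.abs(gaussian_filter1d(noisy, 2) - gaussian_filter1d(noisy, 10))
print(f"suspected damage near x = {x[np.argmax(feature)]:.2f}")
```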
Scale and legacy controls on catchment nutrient export regimes
NASA Astrophysics Data System (ADS)
Howden, N. J. K.; Burt, T.; Worrall, F.
2017-12-01
Nutrient dynamics in river catchments are complex: water and chemical fluxes are highly variable in low-order streams, but this variability declines as fluxes move through higher-order reaches. This poses a major challenge for process understanding, as much effort is focussed on long-term monitoring of the main river channel (a high-order reach), and therefore the data available to support process understanding are predominantly derived from sites where much of the transient response of nutrient export is masked by the effect of averaging over both space and time. This may be further exacerbated at all scales by the accumulation of legacy nutrient sources in soils, aquifers and pore waters, where historical activities have led to nutrient accumulation and the catchment system is transport limited. It is therefore of particular interest to investigate how the variability of nutrient export changes both with catchment scale (from low- to high-order catchment streams) and with the presence of legacy sources, so that the context of infrequent monitoring on high-order streams can be better understood. This is not only a question of characterising nutrient export regimes per se, but also of developing a more thorough understanding of how the concepts of scale and legacy may modify the statistical characteristics of observed responses across scales in both space and time. In this paper, we use synthetic data series and develop a model approach to consider how space and time scales combine with the impacts of legacy sources to influence observed variability in catchment export. We find that increasing space and time scales tends to reduce the observed variance in nutrient exports, owing to an increase in travel times and greater mixing, and therefore averaging, of sources, whereas increasing the influence of legacy sources inflates the variance, with the level of inflation dictated by the residence times of the respective sources.
NASA Astrophysics Data System (ADS)
Scutt Phillips, Joe; Sen Gupta, Alex; Senina, Inna; van Sebille, Erik; Lange, Michael; Lehodey, Patrick; Hampton, John; Nicol, Simon
2018-05-01
The distribution of marine species is often modeled using Eulerian approaches, in which changes to population density or abundance are calculated at fixed locations in space. Conversely, Lagrangian, or individual-based, models simulate the movement of individual particles moving in continuous space, with broader-scale patterns such as distribution being an emergent property of many, potentially adaptive, individuals. These models offer advantages in examining dynamics across spatiotemporal scales and making comparisons with observations from individual-scale data. Here, we introduce and describe such a model, the Individual-based Kinesis, Advection and Movement of Ocean ANimAls model (Ikamoana), which we use to replicate the movement processes of an existing Eulerian model for marine predators (the Spatial Ecosystem and Population Dynamics Model, SEAPODYM). Ikamoana simulates the movement of either individual or groups of animals by physical ocean currents, habitat-dependent stochastic movements (kinesis), and taxis movements representing active searching behaviours. Applying our model to Pacific skipjack tuna (Katsuwonus pelamis), we show that it accurately replicates the evolution of density distribution simulated by SEAPODYM with low time-mean error and a spatial correlation of density that exceeds 0.96 at all times. We demonstrate how the Lagrangian approach permits easy tracking of individuals' trajectories for examining connectivity between different regions, and show how the model can provide independent estimates of transfer rates between commonly used assessment regions. In particular, we find that retention rates in most assessment regions are considerably smaller (by up to a factor of 2) than those estimated by the primary assessment model for this skipjack population. Moreover, these rates are sensitive to ocean state (e.g. El Niño vs. La Niña), so assuming fixed transfer rates between regions may lead to spurious stock estimates. A novel feature of the Lagrangian approach is that individual schools can be tracked through time, and we demonstrate that movement between two assessment regions at broad temporal scales includes extended transits through other regions at finer scales. Finally, we discuss the utility of this modeling framework for the management of marine reserves, designing effective monitoring programmes, and exploring hypotheses regarding the behaviour of hard-to-observe oceanic animals.
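A minimal Lagrangian sketch of the advection-plus-kinesis update applied to each individual (the habitat field, current, and constants are invented; this is not the Ikamoana code):

```python
import numpy as np

rng = np.random.default_rng(4)

def habitat(x, y):
    """Hypothetical habitat index in [0, 1]; high values = good habitat."""
    return np.exp(-((x - 5.0) ** 2 + (y - 5.0) ** 2) / 8.0)

def step(x, y, u, v, dt=1.0, d_max=0.5):
    """One update: passive advection by currents plus habitat-dependent
    random movement (kinesis). Step sizes shrink in good habitat, so
    individuals accumulate there without any explicit gradient sensing."""
    sigma = d_max * (1.0 - habitat(x, y))     # restless in poor habitat
    x_new = x + u * dt + sigma * rng.normal(size=x.shape)
    y_new = y + v * dt + sigma * rng.normal(size=y.shape)
    return x_new, y_new

# 1000 individuals drifting in a uniform eastward current.
x, y = rng.uniform(0, 10, 1000), rng.uniform(0, 10, 1000)
for _ in range(200):
    x, y = step(x, y, u=0.05, v=0.0)
print("mean habitat index after 200 steps:", round(habitat(x, y).mean(), 2))
```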
Noise is the new signal: Moving beyond zeroth-order geomorphology (Invited)
NASA Astrophysics Data System (ADS)
Jerolmack, D. J.
2010-12-01
The last several decades have witnessed a rapid growth in our understanding of landscape evolution, led by the development of geomorphic transport laws: time- and space-averaged equations relating mass flux to some physical process(es). In statistical mechanics this approach is called mean field theory (MFT), in which complex many-body interactions are replaced with an external field that represents the average effect of those interactions. Because MFT neglects all fluctuations around the mean, it has been described as a zeroth-order fluctuation model. The mean field approach to geomorphology has enabled the development of landscape evolution models, and led to a fundamental understanding of many landform patterns. Recent research, however, has highlighted two limitations of MFT: (1) the integral (averaging) time and space scales in geomorphic systems are sometimes poorly defined and often quite large, placing the mean field approximation on uncertain footing; and (2) in systems exhibiting fractal behavior, an integral scale does not exist, e.g., properties like mass flux are scale-dependent. In both cases, fluctuations in sediment transport are non-negligible over the scales of interest. In this talk I will synthesize recent experimental and theoretical work that confronts these limitations. Discrete element models of fluid and grain interactions show promise for elucidating transport mechanics and pattern-forming instabilities, but require detailed knowledge of micro-scale processes and are computationally expensive. An alternative approach is to begin with a reasonable MFT, and then add higher-order terms that capture the statistical dynamics of fluctuations. In either case, moving beyond zeroth-order geomorphology requires a careful examination of the origins and structure of transport “noise”. I will attempt to show how studying the signal in noise can both reveal interesting new physics and help to formalize the applicability of geomorphic transport laws. [Figure: flooding on an experimental alluvial fan; intensity shows the cumulative time flow has visited each area of the fan over the experiment, and dark areas trace an emergent channel network formed by stochastic migration of river channels.]
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting: both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
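For the linear-Gaussian special case, the parameter-space reduction can be sketched in a few lines: the dominant right singular vectors of the prior-preconditioned, noise-whitened forward operator span the data-informed subspace. Everything below (model, dimensions, decay of sensitivity) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_param, n_data = 200, 500

# Hypothetical linear forward model y = G m + noise, whose sensitivity
# decays across parameter directions (typical of smoothing operators).
G = rng.normal(size=(n_data, n_param)) * 0.5 ** np.arange(n_param)
noise_sd, prior_sd = 0.1, 1.0

# Prior-preconditioned, noise-whitened operator: its leading right singular
# vectors are the directions where the data inform the posterior beyond
# the prior.
H = (prior_sd / noise_sd) * G
_, s, vt = np.linalg.svd(H, full_matrices=False)

r = int(np.sum(s > 1.0))   # keep directions where the data beat the prior
basis = vt[:r].T           # n_param x r basis of the informed subspace
print(f"informed parameter subspace: {r} of {n_param} directions")
```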
Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis.
Tu, Jing; Ge, Qinyu; Wang, Shengqin; Wang, Lei; Sun, Beili; Yang, Qi; Bai, Yunfei; Lu, Zuhong
2012-01-25
Multiplexing has become the major limitation of next-generation sequencing (NGS) in application to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Here we introduce pair-barcode sequencing (PBS), an economic and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously analyzed through the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated, and about 64% of them were assigned to both barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured from the two clinical groups. The strong correlation between different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. By employing the PBS approach in NGS, large-scale multiplexed pooled samples can be analyzed in parallel, so that high-throughput sequencing economically meets the requirements of samples with low sequencing-throughput demands.
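A minimal sketch of the combinatorial bookkeeping (the barcode sequences below are made up): 4 forward x 8 reverse barcodes address 32 libraries, and a read is assigned only when both barcodes match.

```python
from itertools import product

# Hypothetical barcode sets: 4 forward x 8 reverse -> 32 libraries.
forward = ["ACGT", "TGCA", "GATC", "CTAG"]
reverse = ["AACC", "GGTT", "ACAC", "GTGT", "AGAG", "TCTC", "ATAT", "CGCG"]
library_of = {pair: i for i, pair in enumerate(product(forward, reverse))}

def demultiplex(fwd_tag, rev_tag):
    """Assign a read to a library only if BOTH barcodes match exactly;
    otherwise the read is left unassigned (the abstract reports ~64%
    of reads carrying both barcodes)."""
    return library_of.get((fwd_tag, rev_tag))

print(len(library_of), "libraries")   # 32
print(demultiplex("ACGT", "GGTT"))    # library index 1
print(demultiplex("ACGT", "NNNN"))    # None -> unassigned
```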
Landslide Hazard from Coupled Inherent and Dynamic Probabilities
NASA Astrophysics Data System (ADS)
Strauch, R. L.; Istanbulluoglu, E.; Nudurupati, S. S.
2015-12-01
Landslide hazard research has typically been conducted independently from hydroclimate research. We sought to unify these two lines of research to provide regional scale landslide hazard information for risk assessments and resource management decision-making. Our approach couples an empirical inherent landslide probability, based on a frequency ratio analysis, with a numerical dynamic probability, generated by combining subsurface water recharge and surface runoff from the Variable Infiltration Capacity (VIC) macro-scale land surface hydrologic model with a finer resolution probabilistic slope stability model. Landslide hazard mapping is advanced by combining static and dynamic models of stability into a probabilistic measure of geohazard prediction in both space and time. This work will aid resource management decision-making in current and future landscape and climatic conditions. The approach is applied as a case study in North Cascade National Park Complex in northern Washington State.
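A minimal sketch of the inherent-probability ingredient, with invented counts: the frequency ratio of a terrain class is its share of landslide cells divided by its share of all cells, so FR > 1 flags classes where landslides are over-represented.

```python
import numpy as np

# Hypothetical inventory: landslide cells and total cells per slope class.
slope_classes = ["0-10 deg", "10-20 deg", "20-30 deg", ">30 deg"]
landslide_cells = np.array([5, 40, 120, 60])
total_cells = np.array([50_000, 40_000, 30_000, 8_000])

fr = (landslide_cells / landslide_cells.sum()) / (total_cells / total_cells.sum())
for cls, ratio in zip(slope_classes, fr):
    print(f"{cls:>10}: FR = {ratio:.2f}")
```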
“Waves” vs. “particles” in the atmosphere's phase space: A pathway to long-range forecasting?
Ghil, Michael; Robertson, Andrew W.
2002-01-01
Thirty years ago, E. N. Lorenz provided some approximate limits to atmospheric predictability. The details—in space and time—of atmospheric flow fields are lost after about 10 days. Certain gross flow features recur, however, after times of the order of 10–50 days, giving hope for their prediction. Over the last two decades, numerous attempts have been made to predict these recurrent features. The attempts have involved, on the one hand, systematic improvements in numerical weather prediction by increasing the spatial resolution and physical faithfulness in the detailed models used for this prediction. On the other hand, theoretical attempts motivated by the same goal have involved the study of the large-scale atmospheric motions' phase space and the inhomogeneities therein. These “coarse-graining” studies have addressed observed as well as simulated atmospheric data sets. Two distinct approaches have been used in these studies: the episodic or intermittent and the oscillatory or periodic. The intermittency approach describes multiple-flow (or weather) regimes, their persistence and recurrence, and the Markov chain of transitions among them. The periodicity approach studies intraseasonal oscillations, with periods of 15–70 days, and their predictability. We review these two approaches, “particles” vs. “waves,” in the quantum physics analogy alluded to in the title of this article, discuss their complementarity, and outline unsolved problems. PMID:11875201
Tourre, Yves M; Lacaux, Jean-Pierre; Vignolles, Cécile; Lafaye, Murielle
2009-11-11
Climate and environment vary across many spatio-temporal scales, including through climate change, and these variations impact ecosystems, vector-borne diseases and public health worldwide. Our objective was to develop a conceptual approach by mapping climatic and environmental conditions from space and studying their linkages with Rift Valley Fever (RVF) epidemics in Senegal. Ponds in which mosquitoes could thrive were identified by remote sensing using high-resolution SPOT-5 satellite images. Additional data on pond dynamics and rainfall events (obtained from the Tropical Rainfall Measuring Mission) were combined with hydrological in-situ data. Localisation of vulnerable hosts, such as penned cattle (from QuickBird satellite imagery), was also used. The dynamic spatio-temporal distribution of Aedes vexans density (one of the main RVF vectors) is based on the total rainfall amount and pond dynamics. While Zones Potentially Occupied by Mosquitoes are mapped, detailed risk areas, i.e. zones where hazard and vulnerability coincide, are expressed as percentages of areas where cattle are potentially exposed to mosquito bites. This new conceptual approach, using precise remote-sensing techniques, relies simply upon rainfall distribution, also evaluated from space. It is meant to contribute to the implementation of operational early warning systems for RVF based on both natural and anthropogenic climatic and environmental changes. In a climate change context, this approach could also be applied to other vector-borne diseases and other places worldwide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisenbach, Markus; Li, Ying Wai; Liu, Xianglin
2017-12-01
LSMS is a first-principles, density-functional-theory-based electronic structure code targeted mainly at materials applications. LSMS calculates the local spin density approximation to the diagonal part of the electron Green's function. The electron/spin density and energy are easily determined once the Green's function is known. Linear scaling with system size is achieved in LSMS by using several unique properties of the real-space multiple scattering approach to the Green's function.
Ohayon, Elan L; Kalitzin, Stiliyan; Suffczynski, Piotr; Jin, Frank Y; Tsang, Paul W; Borrett, Donald S; Burnham, W McIntyre; Kwan, Hon C
2004-01-01
The problem of demarcating neural network space is formidable. A simple fully connected recurrent network of five units (binary activations, synaptic weight resolution of 10) has 3.2 × 10^26 possible initial states. The problem increases drastically with scaling. Here we consider three complementary approaches to help direct the exploration to distinguish epileptic from healthy networks. [1] First, we perform a gross mapping of the space of five-unit continuous recurrent networks using randomized weights and initial activations. The majority of weight patterns (>70%) were found to result in neural assemblies exhibiting periodic limit-cycle oscillatory behavior. [2] Next we examine the activation space of non-periodic networks, demonstrating that the emergence of paroxysmal activity does not require changes in connectivity. [3] The next challenge is to focus the search of network space to identify networks with more complex dynamics. Here we rely on a major available indicator critical to clinical assessment but largely ignored by epilepsy modelers, namely behavioral states. To this end, we connected the above network layout to an external robot in which interactive states were evolved. The first random generation showed a distribution in line with approach [1]. That is, the predominant phenotypes were fixed-point or oscillatory with seizure-like motor output. As evolution progressed, the profile changed markedly. Within 20 generations the entire population was able to navigate a simple environment, with all individuals exhibiting multiply-stable behaviors and no cases of default locked limit-cycle oscillatory motor behavior. The resultant population may thus afford us a view of the architectural principles demarcating healthy biological networks from the pathological. The approach has an advantage over other epilepsy modeling techniques in providing a way to clarify whether observed dynamics or suggested therapies are pointing to computational viability or dead space.
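The state count quoted above can be verified with one line of arithmetic: 5 x 5 = 25 synaptic weights with 10 resolvable values each, times 2^5 binary activation patterns.

```python
units = 5
weight_states = 10 ** (units * units)   # 10^25 weight configurations
activation_states = 2 ** units          # 32 binary activation patterns
print(f"{weight_states * activation_states:.1e} initial states")  # 3.2e+26
```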
Postcoalescence evolution of growth stress in polycrystalline films.
González-González, A; Polop, C; Vasco, E
2013-02-01
The growth stress generated once grains coalesce in Volmer-Weber-type thin films is investigated by time-multiscale simulations comprising complementary modules of (i) finite-element modeling to address the interactions between grains happening at atomic vibration time scales (~0.1 ps), (ii) dynamic scaling to account for the surface stress relaxation via morphology changes at surface diffusion time scales (~μs-ms), and (iii) the mesoscopic rate equation approach to simulate the bulk stress relaxation at deposition time scales (~s-h). By addressing the main experimental evidence reported so far on this topic, the simulation results provide key findings concerning the interplay between anisotropic grain interactions at complementary space scales, deposition conditions (such as flux and mobility), and mechanisms of stress accommodation and relaxation, which underlies the origin, nature, spatial distribution, and flux dependence of the postcoalescence growth stress.
Overview of LIDS Docking and Berthing System Seals
NASA Technical Reports Server (NTRS)
Daniels, Christopher C.; Dunlap, Patrick H., Jr.; deGroh, Henry C., III; Steinetz, Bruce M.; Oswald, Jay J.; Smith, Ian
2007-01-01
This viewgraph presentation describes the Low Impact Docking System (LIDS) docking and berthing system seals. The contents include: 1) Description of the Application: Low Impact Docking System (LIDS); 2) LIDS Seal Locations: Vehicle Undocked (Hatch Closed); 3) LIDS Seal Locations: Mechanical Pass Thru; 4) LIDS Seal Locations: Electrical and Pyro Connectors; 5) LIDS Seal Locations: Vehicle Docked (Hatches Open); 6) LIDS Seal Locations: Main Interface Seal; 7) Main Interface Seal Challenges and Specifications; 8) Approach; 9) Seal Concepts Under Development/Evaluation; 10) Elastomer Material Evaluations; 11) Evaluation of Relevant Seal Properties; 12) Medium-Scale (12") Gask-O-Seal Compression Tests; 13) Medium-Scale Compression Results; 14) Adhesion Forces of Elliptical Top Gask-o-seals; 15) Medium-Scale Seals; 16) Medium-Scale Leakage Results: Effect of Configuration; 17) Full Scale LIDS Seal Test Rig Development; 18) Materials International Space Station Experiment (MISSE 6A and 6B); and 19) Schedule.
Sachse, F. B.
2015-01-01
Microstructural characterization of cardiac tissue and its remodeling in disease is a crucial step in many basic research projects. We present a comprehensive approach for three-dimensional characterization of cardiac tissue at the submicrometer scale. We developed a compression-free mounting method as well as labeling and imaging protocols that facilitate acquisition of three-dimensional image stacks with scanning confocal microscopy. We evaluated the approach with normal and infarcted ventricular tissue. We used the acquired image stacks for segmentation, quantitative analysis and visualization of important tissue components. In contrast to conventional mounting, compression-free mounting preserved cell shapes, capillary lumens and extracellular laminas. Furthermore, the new approach and imaging protocols resulted in high signal-to-noise ratios at depths up to 60 μm. This allowed extensive analyses revealing major differences in volume fractions and distribution of cardiomyocytes, blood vessels, fibroblasts, myofibroblasts and extracellular space in control versus infarct border zone. Our results show that the developed approach yields comprehensive data on microstructure of cardiac tissue and its remodeling in disease. In contrast to other approaches, it allows quantitative assessment of all major tissue components. Furthermore, we suggest that the approach will provide important data for physiological models of cardiac tissue at the submicrometer scale. PMID:26399990
The Impact of Granule Density on Tabletting and Pharmaceutical Product Performance.
van den Ban, Sander; Goodwin, Daniel J
2017-05-01
The impact of granule densification in high-shear wet granulation on tabletting and product performance was investigated at pharmaceutical production scale. Product performance criteria need to be balanced against manufacturability criteria to assure robust industrial-scale tablet manufacturing processes. A Quality by Design approach was used to determine in-process control specifications for tabletting, propose a design space for disintegration and dissolution, and understand the permitted operating limits and required controls for an industrial tabletting process. Granules of varying density (filling density) were made by varying the water amount added, spray rate, and wet massing time in a design of experiments (DoE) approach. Granules were compressed into tablets over a range of thicknesses to obtain tablets of varying breaking force. Disintegration and dissolution performance was evaluated for the tablets made. The impact of granule filling density on tabletting was rationalised in terms of compressibility, tabletability and compactability. Tabletting and product performance criteria provided competing requirements for porosity. An increase in granule filling density impacted tabletability and compactability and limited the ability to achieve tablets of adequate mechanical strength. An increase in tablet solid fraction (decreased porosity) impacted disintegration and dissolution. An attribute-based design space for disintegration and dissolution was specified to achieve both product performance and manufacturability. The method of granulation and the resulting granule filling density are key design considerations to achieve both the product performance and the manufacturability required for modern industrial-scale pharmaceutical product manufacture and distribution.
On the streaming model for redshift-space distortions
NASA Astrophysics Data System (ADS)
Kuruvilla, Joseph; Porciani, Cristiano
2018-06-01
The streaming model describes the mapping between real and redshift space for 2-point clustering statistics. Its key element is the probability density function (PDF) of line-of-sight pairwise peculiar velocities. Following a kinetic-theory approach, we derive the fundamental equations of the streaming model for ordered and unordered pairs. In the first case, we recover the classic equation while we demonstrate that modifications are necessary for unordered pairs. We then discuss several statistical properties of the pairwise velocities for DM particles and haloes by using a suite of high-resolution N-body simulations. We test the often used Gaussian ansatz for the PDF of pairwise velocities and discuss its limitations. Finally, we introduce a mixture of Gaussians which is known in statistics as the generalised hyperbolic distribution and show that it provides an accurate fit to the PDF. Once inserted in the streaming equation, the fit yields an excellent description of redshift-space correlations at all scales that vastly outperforms the Gaussian and exponential approximations. Using a principal-component analysis, we reduce the complexity of our model for large redshift-space separations. Our results increase the robustness of studies of anisotropic galaxy clustering and are useful for extending them towards smaller scales in order to test theories of gravity and interacting dark-energy models.
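As a hedged illustration (synthetic heavy-tailed data, not the paper's simulations), SciPy's genhyperbolic distribution (available since SciPy 1.7) can be fitted to pairwise-velocity-like data and compared with the Gaussian ansatz by log-likelihood:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Toy stand-in for line-of-sight pairwise velocities at fixed separation:
# heavy-tailed draws from a Laplace distribution (km/s).
v12 = rng.laplace(loc=-1.0, scale=2.5, size=5000)

mu, sd = stats.norm.fit(v12)                 # Gaussian ansatz
gh_params = stats.genhyperbolic.fit(v12)     # generalised hyperbolic fit

ll_norm = stats.norm.logpdf(v12, mu, sd).sum()
ll_gh = stats.genhyperbolic.logpdf(v12, *gh_params).sum()
print(f"log-likelihood: Gaussian {ll_norm:.0f} vs generalised hyperbolic {ll_gh:.0f}")
```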
Detecting kinematic boundary surfaces in phase space: particle mass measurements in SUSY-like events
Debnath, Dipsikha; Gainer, James S.; Kilic, Can; ...
2017-06-19
We critically examine the classic endpoint method for particle mass determination, focusing on difficult corners of parameter space, where some of the measurements are not independent, while others are adversely affected by the experimental resolution. In such scenarios, mass differences can be measured relatively well, but the overall mass scale remains poorly constrained. Using the example of the standard SUSY decay chain q̃ → χ̃⁰₂ → ℓ̃ → χ̃⁰₁, we demonstrate that sensitivity to the remaining mass scale parameter can be recovered by measuring the two-dimensional kinematical boundary in the relevant three-dimensional phase space of invariant masses squared. We develop an algorithm for detecting this boundary, which uses the geometric properties of the Voronoi tessellation of the data, and in particular, the relative standard deviation (RSD) of the volumes of the neighbors for each Voronoi cell in the tessellation. We propose a new observable, Σ̄, which is the average RSD per unit area, calculated over the hypothesized boundary. We show that the location of the Σ̄ maximum correlates very well with the true values of the new particle masses. Our approach represents the natural extension of the one-dimensional kinematic endpoint method to the relevant three dimensions of invariant mass phase space.
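An illustrative sketch of the Voronoi ingredient on random 3D points (standing in for the space of invariant masses squared; this is not the authors' code): compute each cell's volume, then the relative standard deviation of its neighbors' volumes.

```python
import numpy as np
from collections import defaultdict
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(7)
pts = rng.uniform(0, 1, size=(400, 3))   # stand-in for (m12^2, m23^2, m13^2)
vor = Voronoi(pts)

# Volume of each bounded Voronoi cell (cells reaching infinity stay NaN).
vol = np.full(len(pts), np.nan)
for i, reg in enumerate(vor.point_region):
    verts = vor.regions[reg]
    if -1 not in verts and len(verts) > 3:
        vol[i] = ConvexHull(vor.vertices[verts]).volume

# Neighbor lists from the ridges shared by pairs of cells.
nbrs = defaultdict(list)
for a, b in vor.ridge_points:
    nbrs[a].append(b)
    nbrs[b].append(a)

# RSD of neighbor-cell volumes: cells whose neighborhoods straddle a sharp
# density edge get a large RSD, flagging the kinematic boundary.
rsd = np.full(len(pts), np.nan)
for i in range(len(pts)):
    v = vol[nbrs[i]]
    v = v[np.isfinite(v)]
    if v.size > 1 and np.isfinite(vol[i]):
        rsd[i] = v.std() / v.mean()
print("median neighbor-volume RSD:", np.round(np.nanmedian(rsd), 3))
```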
NASA Technical Reports Server (NTRS)
Atlas, D.; Korb, C. L.
1980-01-01
The spectrum of weather and climate needs for Lidar observations from space is discussed with emphasis on the requirements for wind, temperature, moisture, and pressure data. It is shown that winds are required to realistically depict all atmospheric scales in the tropics and the smaller scales at higher latitudes, where both temperature and wind profiles are necessary. The need for means to estimate air-sea exchanges of sensible and latent heat also is noted. A concept for achieving this through a combination of Lidar cloud top heights and IR cloud top temperatures of cloud streets formed during cold air outbreaks over the warmer ocean is outlined. Recent theoretical feasibility studies concerning the profiling of temperatures, pressure, and humidity by differential absorption Lidar (DIAL) from space and expected accuracies are reviewed. An alternative approach to Doppler Lidar wind measurements also is presented. The concept involves the measurement of the displacement of the aerosol backscatter pattern, at constant heights, between two successive scans of the same area, one ahead of the spacecraft and the other behind it a few minutes later. Finally, an integrated space Lidar system capable of measuring temperature, pressure, humidity, and winds which combines the DIAL methods with the aerosol pattern displacement concept is described.
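The pattern-displacement concept reduces to locating a cross-correlation peak between two height-constant scans; here is a toy Python sketch, with the pixel size, revisit time, and shift all assumed:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(8)
# Hypothetical aerosol backscatter pattern at one height, and the same
# pattern re-observed after the wind has shifted it by a few pixels.
scan1 = rng.normal(size=(128, 128))
scan2 = np.roll(scan1, (3, -5), axis=(0, 1))   # true shift: 3 px N, 5 px W

# Cross-correlation peak -> displacement; displacement / elapsed time -> wind.
xcorr = fftconvolve(scan2, scan1[::-1, ::-1], mode="same")
peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
dy, dx = peak[0] - 64, peak[1] - 64            # zero lag sits at (64, 64)
dt_s, px_km = 300.0, 1.0                       # assumed revisit time, pixel size
print(f"shift ({dy}, {dx}) px -> wind ~ ({1000 * dy * px_km / dt_s:.1f}, "
      f"{1000 * dx * px_km / dt_s:.1f}) m/s")
```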
Comparative ruralism and ‘opening new windows’ on gentrification
Phillips, Martin; Smith, Darren P
2018-01-01
In response to the five commentaries on our paper ‘Comparative approaches to gentrification: lessons from the rural’, we open up more ‘windows’ on rural gentrification and its urban counterpart. First, we highlight the issues of metrocentricity and urbanormativity within gentrification studies, highlighting their employment by our commentators. Second, we consider the issue of displacement and its operation within rural space, as well as gentrification as a coping strategy for neoliberal existence and connections to more-than-human natures. Finally, we consider questions of scale, highlighting the need to avoid naturalistic conceptions of scale and arguing that attention could be paid to the role of material practices, symbolizations and lived experiences in producing scaled geographies of rural and urban gentrification. PMID:29657709
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de
2016-03-07
Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early; in linear chain type compounds at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.
Extremes and bursts in complex multi-scale plasmas
NASA Astrophysics Data System (ADS)
Watkins, N. W.; Chapman, S. C.; Hnat, B.
2012-04-01
Quantifying the spectrum of sizes and durations of large and/or long-lived fluctuations in complex, multi-scale, space plasmas is a topic of both theoretical and practical importance. The predictions of inherently multi-scale physical theories such as MHD turbulence have given one direct stimulus for its investigation. There are also space weather implications to an improved ability to assess the likelihood of an extreme fluctuation of a given size. Our intuition as scientists tends to be formed on the familiar Gaussian "normal" distribution, which has a very low likelihood of extreme fluctuations. Perhaps surprisingly, there is both theoretical and observational evidence that favours non-Gaussian, heavier-tailed, probability distributions for some space physics datasets. Additionally there is evidence for the existence of long-ranged memory between the values of fluctuations. In this talk I will show how such properties can be captured in a preliminary way by a self-similar, fractal model. I will show how such a fractal model can be used to make predictions for experimentally accessible quantities like the size and duration of a burst (a sequence of values that exceed a given threshold), or the survival probability of a burst [c.f. preliminary results in Watkins et al, PRE, 2009]. In real-world time series scaling behaviour need not be "mild" enough to be captured by a single self-similarity exponent H, but might instead require a "wild" multifractal spectrum of scaling exponents [e.g. Rypdal and Rypdal, JGR, 2011; Moloney and Davidsen, JGR, 2011] to give a complete description. I will discuss preliminary work on extending the burst approach into the multifractal domain [see also Watkins et al, chapter in press for AGU Chapman Conference on Complexity and Extreme Events in the Geosciences, Hyderabad].
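The burst decomposition used here is simple to state operationally. The following Python sketch (function names and the 95th-percentile threshold are illustrative, not from the talk) extracts burst sizes and durations from a 1-D time series:

```python
import numpy as np

def burst_statistics(x, threshold):
    """Decompose a series into bursts: maximal runs of consecutive samples
    exceeding `threshold`. A burst's size is its integrated exceedance,
    its duration the run length in samples."""
    above = x > threshold
    edges = np.diff(above.astype(int))      # +1 marks a burst start, -1 an end
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(x)]
    sizes = np.array([np.sum(x[s:e] - threshold) for s, e in zip(starts, ends)])
    return sizes, ends - starts

# Example: burst statistics of a synthetic heavy-tailed (lognormal) signal.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
sizes, durations = burst_statistics(x, threshold=np.percentile(x, 95))
```

Tail plots of `sizes` and `durations` would then be compared against the self-similar (single-H) or multifractal predictions discussed above.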
Large Scale Processes and Extreme Floods in Brazil
NASA Astrophysics Data System (ADS)
Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.
2016-12-01
Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach in flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. We investigate for individual sites the exceedance probability at which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).
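To make the dimension-reduction-plus-clustering step concrete, here is a hedged Python sketch; scikit-learn's KernelPCA is unsupervised and stands in for the supervised variant used by the authors, and the array shapes and parameters are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

# Hypothetical input: each row is a flattened moisture-flux field for the
# days preceding one flood event (n_events x n_gridpoints).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 500))

# Project events into a low-dimensional space, then cluster them, e.g. to
# separate large-scale moisture-transport floods from locally driven ones.
embedding = KernelPCA(n_components=3, kernel="rbf", gamma=1e-3).fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)
```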
Spectral enstrophy budget in a shear-less flow with turbulent/non-turbulent interface
NASA Astrophysics Data System (ADS)
Cimarelli, Andrea; Cocconi, Giacomo; Frohnapfel, Bettina; De Angelis, Elisabetta
2015-12-01
A numerical analysis of the interaction between decaying shear-free turbulence and quiescent fluid is performed by means of global statistical budgets of enstrophy, both at the single-point and two-point levels. The single-point enstrophy budget allows us to recognize three physically relevant layers: a bulk turbulent region, an inhomogeneous turbulent layer, and an interfacial layer. Within these layers, enstrophy is produced, transferred, and finally destroyed while leading to a propagation of the turbulent front. These processes depend not only on the position in the flow field but are also strongly scale dependent. In order to tackle this multi-dimensional behaviour of enstrophy in the space of scales and in physical space, we analyse the spectral enstrophy budget equation. The picture consists of an inviscid spatial cascade of enstrophy from large to small scales parallel to the interface moving towards the interface. At the interface, this phenomenon breaks, leaving place to an anisotropic cascade where large scale structures exhibit only a cascade process normal to the interface, thus reducing their thickness while retaining their lengths parallel to the interface. The observed behaviour could be relevant for both the theoretical and the modelling approaches to flows with interacting turbulent/non-turbulent regions. The scale properties of the turbulent propagation mechanisms highlight that the inviscid turbulent transport is a large-scale phenomenon. On the contrary, the viscous diffusion, commonly associated with small scale mechanisms, highlights a much richer physics involving small lengths, normal to the interface, but at the same time large scales, parallel to the interface.
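For orientation, the quantity being budgeted can be written compactly (a schematic form assumed here; the paper's decomposition of the transport terms may group them differently):

```latex
\zeta = \tfrac{1}{2}\,\langle \omega_i\,\omega_i \rangle, \qquad \omega = \nabla \times u,
\qquad
\frac{\partial \zeta}{\partial t}
= \underbrace{\langle \omega_i\,\omega_j\,\partial_j u_i \rangle}_{\text{production (stretching)}}
- \underbrace{\partial_j \langle \tfrac{1}{2}\,\omega_i\,\omega_i\, u_j \rangle}_{\text{turbulent transport}}
+ \underbrace{\nu\,\nabla^2 \zeta}_{\text{viscous diffusion}}
- \underbrace{\nu\,\langle \partial_j \omega_i\,\partial_j \omega_i \rangle}_{\text{destruction}} .
```

The spectral budget analyzed above distributes these same terms over scales parallel and normal to the interface.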
Human-Robot Control Strategies for the NASA/DARPA Robonaut
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.
2003-01-01
The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the ℓ1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
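The analysis and synthesis settings referred to above are the two standard ℓ1 formulations; in the notation below (assumed here, not taken from the paper), y is the data, Φ the measurement operator, Ψ the wavelet dictionary, and ε the noise level:

```latex
\text{synthesis:}\quad \min_{\alpha}\ \|\alpha\|_1
\;\;\text{s.t.}\;\; \|y - \Phi\Psi\alpha\|_2 \le \epsilon ,
\qquad
\text{analysis:}\quad \min_{x}\ \|\mathsf{W}\,\Psi^{\dagger}x\|_1
\;\;\text{s.t.}\;\; \|y - \Phi x\|_2 \le \epsilon ,
```

where W is the optional diagonal weighting of the ℓ1 norm mentioned above. The synthesis solution lives in coefficient space and the analysis solution in image space, which is what makes their solution-spaces differ.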
NASA Technical Reports Server (NTRS)
Willmott, C. J.; Field, R. T.
1984-01-01
Algorithms for point interpolation and contouring on the surface of the sphere and in Cartesian two-space are developed from Shepard's (1968) well-known local search method. These mapping procedures are then used to investigate the errors which appear on small-scale climate maps as a result of the all-too-common practice of interpolating from irregularly spaced data points to the nodes of a regular lattice, and contouring, in Cartesian two-space. Using mean annual air temperatures, the temperature field over the western half of the northern hemisphere is estimated both on the sphere (assumed to be correct) and in Cartesian two-space. When the spherically- and Cartesian-approximated air temperature fields are mapped and compared, the magnitudes (as large as 5°C to 10°C) and distribution of the errors associated with the latter approach become apparent.
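A minimal version of the spherical interpolation step is easy to sketch. This is a hedged simplification (global inverse-distance weighting; Shepard's actual method restricts the sum to a local search neighborhood), with coordinates in radians:

```python
import numpy as np

def great_circle(lat1, lon1, lat2, lon2):
    """Central angle between points on the unit sphere (haversine form)."""
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * np.arcsin(np.sqrt(a))

def shepard_sphere(lat, lon, lats, lons, values, p=2.0):
    """Inverse-distance-weighted (Shepard-style) estimate at (lat, lon),
    with distance measured along the sphere rather than in Cartesian
    two-space."""
    d = great_circle(lat, lon, lats, lons)
    if np.any(d < 1e-12):                  # query coincides with a data point
        return values[np.argmin(d)]
    w = 1.0 / d ** p
    return np.sum(w * values) / np.sum(w)
```

Running the same weighting with planar distances over a lat-lon rectangle reproduces the kind of distortion the comparison above quantifies.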
Next Generation Launch Technology Program Lessons Learned
NASA Technical Reports Server (NTRS)
Cook, Stephen; Tyson, Richard
2005-01-01
In November 2002, NASA revised its Integrated Space Transportation Plan (ISTP) to evolve the Space Launch Initiative (SLI) to serve as a theme for two emerging programs. The first of these, the Orbital Space Plane (OSP), was intended to provide crew-escape and crew-transfer functions for the ISS. The second, the NGLT Program, developed technologies needed for safe, routine space access for scientific exploration, commerce, and national defense. The NGLT Program comprised 12 projects, ranging from fundamental high-temperature materials research to full-scale engine system developments (turbine and rocket) to scramjet flight test. The Program included technology advancement activities with a broad range of objectives, ultimate applications/timeframes, and technology maturity levels. An over-arching Systems Engineering and Analysis (SE&A) approach was employed to focus technology advancements according to a common set of requirements. Investments were categorized into three segments of technology maturation: propulsion technologies, launch systems technologies, and SE&A.
NASA Technical Reports Server (NTRS)
Alexandrov, Mikhail Dmitrievic; Geogdzhayev, Igor V.; Tsigaridis, Konstantinos; Marshak, Alexander; Levy, Robert; Cairns, Brian
2016-01-01
A novel model for the variability in aerosol optical thickness (AOT) is presented. This model is based on the consideration of AOT fields as realizations of a stochastic process, that is, the exponent of an underlying Gaussian process with a specific autocorrelation function. In this approach AOT fields have lognormal PDFs and structure functions having the correct asymptotic behavior at large scales. The latter is an advantage compared with fractal (scale-invariant) approaches. The simple analytical form of the structure function in the proposed model facilitates its use for the parameterization of AOT statistics derived from remote sensing data. The new approach is illustrated using a month-long global MODIS AOT dataset (over ocean) with 10 km resolution. It was used to compute AOT statistics for sample cells forming a grid with 5° spacing. The observed shapes of the structure functions indicated that in a large number of cases the AOT variability is split into two regimes that exhibit different patterns of behavior: small-scale stationary processes and trends reflecting variations at larger scales. The small-scale patterns are suggested to be generated by local aerosols within the marine boundary layer, while the large-scale trends are indicative of elevated aerosols transported from remote continental sources. This assumption is evaluated by comparison of the geographical distributions of these patterns derived from MODIS data with those obtained from the GISS GCM. This study shows considerable potential to enhance comparisons between remote sensing datasets and climate models beyond regional mean AOTs.
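The key diagnostic here is the second-order structure function of the AOT field τ. A standard definition (the specific autocorrelation function adopted by the authors is not reproduced here):

```latex
D_\tau(r) \;=\; \big\langle\, [\,\tau(x + r) - \tau(x)\,]^2 \,\big\rangle ,
\qquad \tau = e^{g},\ \ g \sim \text{Gaussian process}.
```

Because g is Gaussian, τ is lognormal, and D_τ(r) saturates at 2 Var(τ) at large separations r, which is the correct large-scale asymptote noted above; a scale-invariant (fractal) model would instead grow without bound.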
Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting
Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M
2014-01-01
Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind “noise,” which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical “downscaling” of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme is tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key Points: (1) Solar wind models must be downscaled in order to drive magnetospheric models. (2) Ensemble downscaling is more effective than deterministic downscaling. (3) The magnetosphere responds nonlinearly to small-scale solar wind fluctuations. PMID:26213518
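The smoothing-plus-noise procedure lends itself to a compact sketch. Below is a hedged toy version in Python (synthetic red-noise "observations", a boxcar standing in for the 8 h filter, and resampled residuals standing in for the PDF-based noise parameterization):

```python
import numpy as np

def downscale_ensemble(model_series, residual_samples, n_members=20, seed=0):
    """Build an ensemble by adding small-scale structure, drawn from an
    observed residual distribution, back onto a smooth model series."""
    rng = np.random.default_rng(seed)
    return np.array([model_series +
                     rng.choice(residual_samples, size=len(model_series))
                     for _ in range(n_members)])

# Stand-in observations: red-noise solar wind speed at 1-min cadence.
rng = np.random.default_rng(7)
obs = 400.0 + np.cumsum(rng.standard_normal(10_000))
kernel = np.ones(480) / 480                   # 8 h boxcar at 1-min cadence
smooth = np.convolve(obs, kernel, mode="same")
ensemble = downscale_ensemble(smooth, obs - smooth)
```

Each member then drives the magnetospheric model separately, and the spread of the resulting forecasts quantifies the uncertainty, which is the point of downscaling before rather than after the magnetospheric step.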
The Design of Large-Scale Complex Engineered Systems: Present Challenges and Future Promise
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.; McGowan, Anna-Maria Rivas
2012-01-01
Model-Based Systems Engineering techniques are used in the SE community to address the need for managing the development of complex systems. A key feature of the MBSE approach is the use of a model to capture the requirements, architecture, behavior, operating environment and other key aspects of the system. The focus on the model differentiates MBSE from traditional SE techniques that may have a document-centric approach. In an effort to assess the benefit of utilizing MBSE on its flight projects, NASA Langley has implemented a pilot program to apply MBSE techniques during the early phase of the Materials International Space Station Experiment-X (MISSE-X). MISSE-X is a Technology Demonstration Mission being developed by the NASA Office of the Chief Technologist. Designed to be installed on the exterior of the International Space Station (ISS), MISSE-X will host experiments that advance the technology readiness of materials and devices needed for future space exploration. As a follow-on to the highly successful series of previous MISSE experiments on ISS, MISSE-X benefits from a significant interest by the
Killing approximation for vacuum and thermal stress-energy tensor in static space-times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, V.P.; Zel'nikov, A.I.
1987-05-15
The problem of the vacuum polarization of conformal massless fields in static space-times is considered. A tensor $T_{\mu\nu}$ constructed from the curvature, the Killing vector, and their covariant derivatives is proposed which can be used to approximate the average value of the stress-energy tensor $\langle T_{\mu\nu} \rangle^{\mathrm{ren}}$ in such spaces. It is shown that if (i) its trace $T^{\varepsilon}{}_{\varepsilon}$ coincides with the trace anomaly $\langle T^{\varepsilon}{}_{\varepsilon} \rangle^{\mathrm{ren}}$, (ii) it satisfies the conservation law $T^{\mu\varepsilon}{}_{;\varepsilon} = 0$, and (iii) it has the correct behavior under scale transformations, then it is uniquely defined up to a few arbitrary constants. These constants must be chosen to satisfy the boundary conditions. In the case of a static black hole in a vacuum these conditions single out the unique tensor $T_{\mu\nu}$ which provides a good approximation for $\langle T_{\mu\nu} \rangle^{\mathrm{ren}}$ in the Hartle-Hawking vacuum. The relation between this approach and the Page-Brown-Ottewill approach is discussed.
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
NASA Astrophysics Data System (ADS)
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
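The deterministic selection idea can be caricatured in a few lines. A hedged sketch follows (a dense toy Hamiltonian matrix; real ASCI/heat-bath codes generate connected determinants from excitations and sorted integrals rather than scanning matrix columns):

```python
import numpy as np

def select_determinants(H, c, core, threshold=1e-2):
    """Heat-bath-style selection: starting from coefficients `c` on the
    determinants in `core`, add any connected determinant a for which
    |H[a, i] * c[i]| exceeds `threshold` for some core determinant i."""
    selected = set(core)
    for i, ci in zip(core, c):
        for a in np.nonzero(H[:, i])[0]:      # determinants coupled to i
            if a not in selected and abs(H[a, i] * ci) > threshold:
                selected.add(int(a))
    return sorted(selected)

# Toy usage on a random sparse symmetric "Hamiltonian".
rng = np.random.default_rng(6)
H = rng.standard_normal((50, 50)) * (rng.random((50, 50)) < 0.1)
H = (H + H.T) / 2
print(select_determinants(H, c=[0.9, 0.3, 0.1], core=[0, 1, 2]))
```

Diagonalizing H in the selected subspace and iterating the selection is the adaptive loop that replaces stochastic sampling.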
Flow measurements in a water tunnel using a holocinematographic velocimeter
NASA Technical Reports Server (NTRS)
Weinstein, Leonard M.; Beeler, George B.
1987-01-01
Dual-view holographic movies were used to examine complex flows with full three-space and time resolution. This approach, which tracks the movement of small tracer particles in water, is termed holocinematographic velocimetry (HCV). A small prototype of a new water tunnel was used to demonstrate proof-of-concept for the HCV. After utilizing a conventional flow visualization apparatus with a laser light sheet to illuminate tracer particles to evaluate flow quality of the prototype tunnel, a simplified version of the HCV was employed to demonstrate the capabilities of the approach. Results indicate that a full-scale version of the water tunnel and a high performance version of the HCV should be able to check theoretical and numerical modeling of complex flows and examine the mechanisms operative in turbulent and vortex flow control concepts, providing an entirely unique instrument capable, for the first time, of simultaneous three-space and time measurements in turbulent flow.
Phase and Pupil Amplitude Recovery for JWST Space-Optics Control
NASA Technical Reports Server (NTRS)
Dean, B. H.; Zielinski, T. P.; Smith, J. S.; Bolcar, M. R.; Aronstein, D. L.; Fienup, J. R.
2010-01-01
This slide presentation reviews the phase and pupil amplitude recovery for the James Webb Space Telescope (JWST) Near Infrared Camera (NIRCam). It includes views of the Integrated Science Instrument Module (ISIM), the NIRCam, examples of Phase Retrieval Data, Ghost Irradiance, Pupil Amplitude Estimation, Amplitude Retrieval, Initial Plate Scale Estimation using the Modulation Transfer Function (MTF), Pupil Amplitude Estimation vs. lambda, Pupil Amplitude Estimation vs. number of Images, Pupil Amplitude Estimation vs. Rotation (clocking), and Typical Phase Retrieval Results. Also included is information about the phase retrieval approach, Non-Linear Optimization (NLO) Optimized Diversity Functions, and Least Square Error vs. Starting Pupil Amplitude.
Perceptual distortion analysis of color image VQ-based coding
NASA Astrophysics Data System (ADS)
Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine
1997-04-01
It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.
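As a concrete reference point for the compression pipeline being evaluated, here is a minimal vector-quantization sketch in Python (scikit-learn's k-means as the codebook learner; block size and codebook size are illustrative). Encoding in a perceptual space such as CIE Lab rather than RGB is precisely the variable studied above:

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_codebook(image, codebook_size=256, block=2, seed=0):
    """Tile an (H, W, 3) image into block x block x 3 vectors, learn a
    codebook with k-means, and encode each tile by its nearest codeword."""
    h, w, _ = image.shape
    h, w = h - h % block, w - w % block
    tiles = (image[:h, :w]
             .reshape(h // block, block, w // block, block, 3)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, block * block * 3))
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=seed).fit(tiles)
    return km.cluster_centers_, km.predict(tiles)

# Toy usage on a random image; a perceptual-space version would convert
# to, e.g., CIE Lab before tiling.
img = (np.random.default_rng(8).random((64, 64, 3)) * 255).astype(float)
codebook, codes = vq_codebook(img, codebook_size=16)
```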
Lattice dynamics calculations based on density-functional perturbation theory in real space
NASA Astrophysics Data System (ADS)
Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias
2017-06-01
A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied for the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated exemplarily for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated, and a systematic comparison with finite-difference approaches is performed both for finite (molecules) and extended (periodic) systems. Finally, scaling and scalability tests on massively parallel computer systems demonstrate the computational efficiency.
Solar concentrators for advanced solar-dynamic power systems in space
NASA Technical Reports Server (NTRS)
Rockwell, Richard
1993-01-01
This report summarizes the results of a study performed by Hughes Danbury Optical Systems (HDOS, formerly Perkin-Elmer) to design, fabricate, and test a lightweight (2 kg/m²), self-supporting, and highly reflective sub-scale concentrating mirror panel suitable for use in space. The HDOS panel design utilizes Corning's 'micro sheet' glass as the top layer of a composite honeycomb sandwich. This approach, whose manufacturability was previously demonstrated under an earlier NASA contract, provides a smooth (specular) reflective surface without the weight of a conventional glass panel. The primary result of this study is a point design and its performance assessment.
Assessment of the State-of-the-Art in the Design and Manufacturing of Large Composite Structure
NASA Technical Reports Server (NTRS)
Harris, C. E.
2001-01-01
This viewgraph presentation gives an assessment of the state-of-the-art in the design and manufacturing of large composite structures, including details on the use of continuous fiber reinforced polymer matrix composites (CFRP) in commercial and military aircraft and in space launch vehicles. Project risk mitigation plans must include a building-block test approach to structural design development, manufacturing process scale-up development tests, and pre-flight ground tests to verify structural integrity. The potential benefits of composite structures justify NASA's investment in developing the technology. Advanced composite structures technology is an enabler for virtually every Aero-Space Technology Enterprise Goal.
Terahertz plasmonic Bessel beamformer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monnai, Yasuaki; Shinoda, Hiroyuki; Jahn, David
We experimentally demonstrate terahertz Bessel beamforming based on the concept of plasmonics. The proposed planar structure is made of concentric metallic grooves with a subwavelength spacing that couple to a point source to create tightly confined surface waves or spoof surface plasmon polaritons. Concentric scatterers periodically incorporated at a wavelength scale allow for launching the surface waves into free space to define a Bessel beam. The Bessel beam defined at 0.29 THz has been characterized through terahertz time-domain spectroscopy. This approach is capable of generating Bessel beams with planar structures, as opposed to bulky axicon lenses, and can be readily integrated with solid-state terahertz sources.
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; ...
2015-05-27
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
Brodie, Nicholas I.; Popov, Konstantin I.; Petrotchenko, Evgeniy V.; Dokholyan, Nikolay V.; Borchers, Christoph H.
2017-01-01
We present an integrated experimental and computational approach for de novo protein structure determination in which short-distance cross-linking data are incorporated into rapid discrete molecular dynamics (DMD) simulations as constraints, reducing the conformational space and achieving the correct protein folding on practical time scales. We tested our approach on myoglobin and FK506 binding protein—models for α helix–rich and β sheet–rich proteins, respectively—and found that the lowest-energy structures obtained were in agreement with the crystal structure, hydrogen-deuterium exchange, surface modification, and long-distance cross-linking validation data. Our approach is readily applicable to other proteins with unknown structures. PMID:28695211
Biological challenges of true space settlement
NASA Astrophysics Data System (ADS)
Mankins, John C.; Mankins, Willa M.; Walter, Helen
2018-05-01
"Space Settlements" - i.e., permanent human communities beyond Earth's biosphere - have been discussed within the space advocacy community since the 1970s. Now, with the end of the International Space Station (ISS) program fast approaching (planned for 2024-2025) and the advent of low cost Earth-to-orbit (ETO) transportation in the near future, the concept is coming once more into mainstream. Considerable attention has been focused on various issues associated with the engineering and human health considerations of space settlement such as artificial gravity and radiation shielding. However, relatively little attention has been given to the biological implications of a self-sufficient space settlement. Three fundamental questions are explored in this paper: (1) what are the biological "foundations" of truly self-sufficient space settlements in the foreseeable future, (2) what is the minimum scale for such self-sustaining human settlements, and (3) what are the integrated biologically-driven system requirements for such settlements? The paper examines briefly the implications of the answers to these questions in relevant potential settings (including free space, the Moon and Mars). Finally, this paper suggests relevant directions for future research and development in order for such space settlements to become viable in the future.
Hamilton, Joshua J; Dwivedi, Vivek; Reed, Jennifer L
2013-07-16
Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
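The thermodynamic constraint at the heart of TMFA-style formulations couples fluxes, free energies, and concentrations. In standard notation (a schematic statement; the MILP encoding with binary direction variables and the treatment of estimate uncertainty are omitted here):

```latex
\Delta_r G'_j \;=\; \Delta_r G'^{\circ}_j \;+\; RT \sum_i s_{ij}\,\ln x_i ,
\qquad
v_j > 0 \;\Longrightarrow\; \Delta_r G'_j < 0 ,
```

where $s_{ij}$ is the stoichiometric coefficient of metabolite $i$ in reaction $j$, $x_i$ its activity (concentration), and $v_j$ the flux. Bounding the $x_i$ to physiological ranges is what lets TMFA predict concentration and free-energy ranges without predefining reaction directions.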
Towse, Clare-Louise; Akke, Mikael; Daggett, Valerie
2017-04-27
Molecular dynamics (MD) simulations contain considerable information with regard to the motions and fluctuations of a protein, the magnitude of which can be used to estimate conformational entropy. Here we survey conformational entropy across protein fold space using the Dynameomics database, which represents the largest existing data set of protein MD simulations for representatives of essentially all known protein folds. We provide an overview of MD-derived entropies accounting for all possible degrees of dihedral freedom on an unprecedented scale. Although different side chains might be expected to impose varying restrictions on the conformational space that the backbone can sample, we found that the backbone entropy and side chain size are not strictly coupled. An outcome of these analyses is the Dynameomics Entropy Dictionary, the contents of which have been compared with entropies derived by other theoretical approaches and experiment. As might be expected, the conformational entropies scale linearly with the number of residues, demonstrating that conformational entropy is an extensive property of proteins. The calculated conformational entropies of folding agree well with previous estimates. Detailed analysis of specific cases identifies deviations in conformational entropy from the average values that highlight how conformational entropy varies with sequence, secondary structure, and tertiary fold. Notably, α-helices have lower entropy on average than do β-sheets, and both are lower than coil regions.
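A per-dihedral entropy estimate of the kind aggregated in such a dictionary is straightforward to compute from simulation samples. A hedged sketch follows (histogram estimator with an arbitrary 10° bin width; correlations between degrees of freedom are ignored here):

```python
import numpy as np

R = 8.314462618e-3   # gas constant, kJ/(mol K)

def dihedral_entropy(angles_deg, bin_width=10.0):
    """S = -R * sum(p ln p) over angular bins for one dihedral."""
    bins = np.arange(-180.0, 180.0 + bin_width, bin_width)
    p, _ = np.histogram(angles_deg, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -R * np.sum(p * np.log(p))   # kJ/(mol K)

# Example: a rotamer-like distribution concentrated in three wells.
rng = np.random.default_rng(2)
samples = np.concatenate([rng.normal(m, 12.0, 3000) for m in (-60, 60, 180)])
samples = (samples + 180) % 360 - 180   # wrap into [-180, 180)
print(dihedral_entropy(samples))
```

Summing such terms over residues is what makes the reported entropies scale linearly with chain length, i.e. extensive.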
Observing the Global Water Cycle from Space
NASA Technical Reports Server (NTRS)
Hildebrand, Peter H.; Houser, Paul; Schlosser, C. Adam
2003-01-01
This paper presents an approach to measuring all major components of the water cycle from space. The goal of the paper is to explore the concept of using a sensor-web of satellites to observe the global water cycle. The details of the required measurements and observation systems are therefore only an initial approach and will undergo future refinement, as their details will be highly important. Key elements include observation and evaluation of all components of the water cycle, in terms of the storage of water (in the ocean, air, cloud and precipitation, in soil, ground water, snow and ice, and in lakes and rivers) and in terms of the global fluxes of water between these reservoirs. For each component of the water cycle that must be observed, the appropriate temporal and spatial scales of measurement are estimated, along with some of the frequencies that have been used for active and passive microwave observations of the quantities. The suggested types of microwave observations are based on the heritage for such measurements, and some aspects of the recent heritage of these measurement algorithms are listed. The observational requirements are based on present observational systems, as modified by expectations for future needs. Approaches to the development of space systems for measuring the global water cycle can be based on these observational requirements.
Characterization of double continuum formulations of transport through pore-scale information
NASA Astrophysics Data System (ADS)
Porta, G.; Ceriotti, G.; Bijeljic, B.
2016-12-01
Information on pore-scale characteristics is becoming increasingly available at unprecedented levels of detail from modern visualization/data-acquisition techniques. These advancements are not fully matched by corresponding operational procedures through which theoretical findings can be engineered to reduce the uncertainty associated with the outputs of continuum-scale models employed at large scales. We present here a modeling approach which rests on pore-scale information to achieve a complete characterization of a double continuum model of transport and fluid-fluid reactive processes. Our model makes full use of pore-scale velocity distributions to identify mobile and immobile regions. We do so on the basis of a pointwise (in the pore space) evaluation of the relative strength of advection and diffusion time scales, as rendered by spatially variable values of local Péclet numbers. After mobile and immobile regions are demarcated, we build a simplified unit cell which is employed as a representative proxy of the real porous domain. This model geometry is then employed to simplify the computation of the effective parameters embedded in the double continuum transport model, while retaining relevant information from the pore-scale characterization of the geometry and velocity field. We document results which illustrate the applicability of the methodology to predict transport of a passive tracer within two- and three-dimensional media upon comparison with direct pore-scale numerical simulation of transport in the same geometrical settings. We also show preliminary results on the extension of this model to fluid-fluid reactive transport processes. In this context, we focus on results obtained in two-dimensional porous systems. We discuss the impact of the critical quantities required as input to our modeling approach on the continuum-scale outputs. We identify the key limitations of the proposed methodology and discuss its capability in comparison with alternative approaches grounded, e.g., in nonlocal and particle-based approximations.
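The pointwise demarcation step can be sketched directly. A hedged toy version follows (the critical Péclet value and the synthetic velocity field are illustrative, not the paper's calibration):

```python
import numpy as np

def mobile_immobile_mask(velocity_magnitude, pore_length, diffusivity, pe_crit=1.0):
    """Tag each pore-space cell mobile or immobile by comparing advection
    and diffusion time scales via a local Peclet number Pe = v * l / D."""
    pe = velocity_magnitude * pore_length / diffusivity
    return pe > pe_crit          # True = mobile region

# Toy usage on a synthetic 2-D pore-velocity field.
rng = np.random.default_rng(3)
v = rng.lognormal(mean=-9.0, sigma=2.0, size=(64, 64))     # m/s
mask = mobile_immobile_mask(v, pore_length=1e-4, diffusivity=1e-9)
```

The resulting mobile/immobile volume fractions and interfacial geometry then feed the simplified unit cell from which the effective double continuum exchange parameters are computed.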
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
NASA Astrophysics Data System (ADS)
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; Gagliardi, Laura; de Jong, Wibe A.
2017-11-01
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed, and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. The chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.
Sampling and Visualizing Creases with Scale-Space Particles
Kindlmann, Gordon L.; Estépar, Raúl San José; Smith, Stephen M.; Westin, Carl-Fredrik
2010-01-01
Particle systems have gained importance as a methodology for sampling implicit surfaces and segmented objects to improve mesh generation and shape analysis. We propose that particle systems have a significantly more general role in sampling structure from unsegmented data. We describe a particle system that computes samplings of crease features (i.e. ridges and valleys, as lines or surfaces) that effectively represent many anatomical structures in scanned medical data. Because structure naturally exists at a range of sizes relative to the image resolution, computer vision has developed the theory of scale-space, which considers an n-D image as an (n + 1)-D stack of images at different blurring levels. Our scale-space particles move through continuous four-dimensional scale-space according to spatial constraints imposed by the crease features, a particle-image energy that draws particles towards scales of maximal feature strength, and an inter-particle energy that controls sampling density in space and scale. To make scale-space practical for large three-dimensional data, we present a spline-based interpolation across scale from a small number of pre-computed blurrings at optimally selected scales. The configuration of the particle system is visualized with tensor glyphs that display information about the local Hessian of the image, and the scale of the particle. We use scale-space particles to sample the complex three-dimensional branching structure of airways in lung CT, and the major white matter structures in brain DTI. PMID:19834216
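The (n + 1)-D stack itself is simple to build; the sophistication lies in the particle dynamics and the across-scale interpolation. A minimal sketch with SciPy (a nearest-level stack only, standing in for the spline interpolation across optimally selected scales described above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_stack(volume, sigmas):
    """Discrete scale-space: the n-D image at a ladder of Gaussian blur
    levels, stacked along a new leading 'scale' axis."""
    return np.stack([gaussian_filter(volume, s) for s in sigmas])

vol = np.random.default_rng(5).random((32, 32, 32))
stack = scale_space_stack(vol, sigmas=[1.0, 2.0, 4.0, 8.0])   # (4, 32, 32, 32)
```

Particles then move in (x, y, z, σ), with the crease constraints and the feature-strength energy evaluated from derivatives of the stack.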
Jackson, Nathan; Muthuswamy, Jit
2009-04-01
We report here a novel approach called MEMS microflex interconnect (MMFI) technology for packaging a new generation of Bio-MEMS devices that involve movable microelectrodes implanted in brain tissue. MMFI addresses the need for (i) operating space for movable parts and (ii) flexible interconnects for mechanical isolation. We fabricated a thin polyimide substrate with embedded bond-pads, vias, and conducting traces for the interconnect with a backside dry etch, so that the flexible substrate can act as a thin-film cap for the MEMS package. A double gold stud bump rivet bonding mechanism was used to form electrical connections to the chip and also to provide a spacing of approximately 15-20 µm for the movable parts. The MMFI approach achieved a chip scale package (CSP) that is lightweight, biocompatible, having flexible interconnects, without an underfill. Reliability tests demonstrated minimal increases of 0.35 mΩ, 0.23 mΩ and 0.15 mΩ in mean contact resistances under high humidity, thermal cycling, and thermal shock conditions respectively. High temperature tests resulted in an increase in resistance of > 90 mΩ when aluminum bond pads were used, but an increase of ~ 4.2 mΩ with gold bond pads. The mean-time-to-failure (MTTF) was estimated to be at least one year under physiological conditions. We conclude that MMFI technology is a feasible and reliable approach for packaging and interconnecting Bio-MEMS devices.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
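The variogram analogy can be made concrete with a plain Monte Carlo estimator (a hedged sketch; this is not the star-based sampling of STAR-VARS, and scaling of all factors to [0, 1] is assumed):

```python
import numpy as np

def directional_variogram(f, dim, n_factors, h_values, n_base=256, seed=0):
    """For factor `dim`, estimate gamma(h) = 0.5 * E[(f(x + h e_dim) - f(x))^2]
    across perturbation scales h. Larger gamma across h means a more
    influential factor at those scales."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_base, n_factors))
    fX = np.array([f(x) for x in X])
    gamma = []
    for h in h_values:
        Xh = X.copy()
        Xh[:, dim] = np.clip(Xh[:, dim] + h, 0.0, 1.0)
        gamma.append(0.5 * np.mean((np.array([f(x) for x in Xh]) - fX) ** 2))
    return np.array(gamma)

# Toy model: factor 1 dominates, which shows up as larger gamma at all h.
f = lambda x: np.sin(2 * np.pi * x[0]) + 5.0 * x[1] ** 2 + 0.1 * x[2]
print(directional_variogram(f, dim=1, n_factors=3, h_values=[0.05, 0.1, 0.3]))
```

In the limits of small and large h, suitably normalized versions of γ(h) recover derivative-based (Morris-like) and variance-based (Sobol-like) information, which is the sense in which those methods are limiting cases of VARS.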
Quantum teleportation and entanglement distribution over 100-kilometre free-space channels.
Yin, Juan; Ren, Ji-Gang; Lu, He; Cao, Yuan; Yong, Hai-Lin; Wu, Yu-Ping; Liu, Chang; Liao, Sheng-Kai; Zhou, Fei; Jiang, Yan; Cai, Xin-Dong; Xu, Ping; Pan, Ge-Sheng; Jia, Jian-Jun; Huang, Yong-Mei; Yin, Hao; Wang, Jian-Yu; Chen, Yu-Ao; Peng, Cheng-Zhi; Pan, Jian-Wei
2012-08-09
Transferring an unknown quantum state over arbitrary distances is essential for large-scale quantum communication and distributed quantum networks. It can be achieved with the help of long-distance quantum teleportation and entanglement distribution. The latter is also important for fundamental tests of the laws of quantum mechanics. Although quantum teleportation and entanglement distribution over moderate distances have been realized using optical fibre links, the huge photon loss and decoherence in fibres necessitate the use of quantum repeaters for larger distances. However, the practical realization of quantum repeaters remains experimentally challenging. Free-space channels, first used for quantum key distribution, offer a more promising approach because photon loss and decoherence are almost negligible in the atmosphere. Furthermore, by using satellites, ultra-long-distance quantum communication and tests of quantum foundations could be achieved on a global scale. Previous experiments have achieved free-space distribution of entangled photon pairs over distances of 600 metres (ref. 14) and 13 kilometres (ref. 15), and transfer of triggered single photons over a 144-kilometre one-link free-space channel. Most recently, following a modified scheme, free-space quantum teleportation over 16 kilometres was demonstrated with a single pair of entangled photons. Here we report quantum teleportation of independent qubits over a 97-kilometre one-link free-space channel with multi-photon entanglement. An average fidelity of 80.4 ± 0.9 per cent is achieved for six distinct states. Furthermore, we demonstrate entanglement distribution over a two-link channel, in which the entangled photons are separated by 101.8 kilometres. Violation of the Clauser-Horne-Shimony-Holt inequality is observed without the locality loophole. Besides being of fundamental interest, our results represent an important step towards a global quantum network. Moreover, the high-frequency and high-accuracy acquiring, pointing and tracking technique developed in our experiment can be directly used for future satellite-based quantum communication and large-scale tests of quantum foundations.
Influences of the MJO on the space-time organization of tropical convection
NASA Astrophysics Data System (ADS)
Dias, Juliana; Sakaeda, Naoko; Kiladis, George N.; Kikuchi, Kazuyoshi
2017-08-01
The fact that the Madden-Julian Oscillation (MJO) is characterized by large-scale patterns of enhanced tropical rainfall has been widely recognized for decades. However, the precise nature of any two-way feedback between the MJO and the properties of smaller-scale organization that makes up its convective envelope is not well understood. Satellite estimates of brightness temperature are used here as a proxy for tropical rainfall, and a variety of diagnostics are applied to determine the degree to which tropical convection is affected either locally or globally by the MJO. To address the multiscale nature of tropical convective organization, the approach ranges from space-time spectral analysis to an object-tracking algorithm. In addition to the intensity and distribution of global tropical rainfall, the relationship between the MJO and other tropical processes such as convectively coupled equatorial waves, mesoscale convective systems, and the diurnal cycle of tropical convection is also analyzed. The main findings of this paper are that, aside from the well-known increase in rainfall activity across scales within the MJO convective envelope, the MJO does not favor any particular scale or type of organization, and there is no clear signature of the MJO in terms of the globally integrated distribution of brightness temperature or rainfall.
A quality by design study applied to an industrial pharmaceutical fluid bed granulation.
Lourenço, Vera; Lochmann, Dirk; Reich, Gabriele; Menezes, José C; Herdling, Thorsten; Schewitz, Jens
2012-06-01
The pharmaceutical industry is encouraged within Quality by Design (QbD) to apply science-based manufacturing principles to assure quality not only of new but also of existing processes. This paper presents how QbD principles can be applied to an existing industrial pharmaceutical fluid bed granulation (FBG) process. A three-step approach is presented as follows: (1) implementation of Process Analytical Technology (PAT) monitoring tools at the industrial scale process, combined with multivariate data analysis (MVDA) of process and PAT data, to increase process knowledge; (2) execution of scaled-down designed experiments at a pilot scale, with adequate PAT monitoring tools, to investigate the process response to intended changes in Critical Process Parameters (CPPs); and finally (3) the definition of a process Design Space (DS) linking CPPs to Critical to Quality Attributes (CQAs), within which product quality is ensured by design and which, after scale-up, can be used at the industrial process scale. The proposed approach was developed for an existing industrial process. Through the enhanced process knowledge established, a significant reduction in the variability of product CQAs, already within quality specification ranges, was achieved by a better choice of CPP values. The results of such step-wise development and implementation are described. Copyright © 2012 Elsevier B.V. All rights reserved.
Small-scale, self-propagating combustion realized with on-chip porous silicon.
Piekiel, Nicholas W; Morris, Christopher J
2015-05-13
For small-scale energy applications, energetic materials represent a high energy density source that, in certain cases, can be accessed with a very small amount of energy input. Recent advances in microprocessing techniques allow for the implementation of a porous silicon energetic material onto a crystalline silicon wafer at the microscale; however, combustion at a small length scale remains to be fully investigated, particularly with regards to the limitations of increased relative heat loss during combustion. The present study explores the critical dimensions of an on-chip porous silicon energetic material (porous silicon + sodium perchlorate (NaClO4)) required to propagate combustion. We etched ∼97 μm wide and ∼45 μm deep porous silicon channels that burned at a steady rate of 4.6 m/s, remaining steady across 90° changes in direction. In an effort to minimize the potential on-chip footprint for energetic porous silicon, we also explored the minimum spacing between porous silicon channels. We demonstrated independent burning of porous silicon channels at a spacing of <40 μm. Using this spacing, it was possible to have a flame path length of >0.5 m on a chip surface area of 1.65 cm(2). Smaller porous silicon channels of ∼28 μm wide and ∼14 μm deep were also utilized. These samples propagated combustion, but at times, did so unsteadily. This result may suggest that we are approaching a critical length scale for self-propagating combustion in a porous silicon energetic material.
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.; Graham, Jason S.; McVay, Greg P.; Langford, Lester L.
2008-01-01
A unique assessment of acoustic similarity scaling laws and acoustic analogy methodologies in predicting the far-field acoustic signature from a sub-scale altitude rocket test facility at the NASA Stennis Space Center was performed. A directional, point-source similarity analysis was implemented for predicting the acoustic far-field. In this approach, experimental acoustic data obtained from "similar" rocket engine tests were appropriately scaled using key geometric and dynamic parameters. The accuracy of this engineering-level method is discussed by comparing the predictions with acoustic far-field measurements. In addition, a CFD solver was coupled with a Lilley acoustic analogy formulation to determine the improvement of using a physics-based methodology over an experimental correlation approach. In the current work, steady-state Reynolds-averaged Navier-Stokes calculations were used to model the internal flow of the rocket engine and altitude diffuser. These internal flow simulations provided the necessary realistic input conditions for external plume simulations. The CFD plume simulations were then used to provide the spatial turbulent noise source distributions in the acoustic analogy calculations. Preliminary findings of these studies will be discussed.
NASA Astrophysics Data System (ADS)
Sigismondi, Costantino
2006-06-01
The role played by Fermions in Astrophysics is primary in Cosmology, dealing with large-scale structure formation, and in Relativistic Astrophysics, with white dwarfs and neutron stars. An introductory approach to Fermions in the expanding Universe is presented for didactic purposes. The phase space structure is sketched, and the distinction between weak and strong gravitational fields is made in order to make some evaluations of the Fermi energy of the systems.
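For reference, the phase-space ingredients are the Fermi-Dirac occupation and the resulting number density (standard textbook expressions, with g the spin degeneracy; not taken from the paper itself):

```latex
f(p) \;=\; \frac{1}{\exp\!\big[(E(p)-\mu)/k_B T\big] + 1},
\qquad
n \;=\; \frac{g}{2\pi^2\hbar^3}\int_0^{\infty} f(p)\,p^2\,dp .
```

In the fully degenerate limit $T \to 0$, all states are filled up to the Fermi momentum $p_F$ and $n = g\,p_F^3/(6\pi^2\hbar^3)$; comparing the Fermi energy $E_F = E(p_F)$ with thermal and gravitational energy scales is what separates the weak-field cosmological regime from the strong-field compact-object regime.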
Development Of A Data Assimilation Capability For RAPID
NASA Astrophysics Data System (ADS)
Emery, C. M.; David, C. H.; Turmon, M.; Hobbs, J.; Allen, G. H.; Famiglietti, J. S.
2017-12-01
The global decline of in situ observations, together with the increasing ability to monitor surface water from space, motivates the creation of data assimilation algorithms that merge computer models and space-based observations to produce consistent estimates of terrestrial hydrology that fill the spatiotemporal gaps in observations. RAPID is a routing model based on the Muskingum method that is capable of estimating river streamflow over large scales with a relatively short computing time. This model only requires limited inputs: a reach-based river network, and lateral surface and subsurface flow into the rivers. The relatively simple model physics imply that RAPID simulations could be significantly improved by including a data assimilation capability. Here we present the early development of such a data assimilation approach in RAPID. Given the linear and matrix-based structure of the model, we chose to apply a direct Kalman filter, hence allowing for the preservation of high computational speed. We correct the simulated streamflows by assimilating streamflow observations, and our early results demonstrate the feasibility of the approach. Additionally, the declining availability of in situ gauges at continental scales motivates the application of our new data assimilation scheme to altimetry measurements from existing (e.g. EnviSat, Jason 2) and upcoming satellite missions (e.g. SWOT), and ultimately the application of the scheme globally.
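Because the model is linear, the assimilation step is the textbook Kalman analysis. A hedged Python sketch follows (toy dimensions and error covariances; not the RAPID codebase):

```python
import numpy as np

def kalman_update(x, P, H, R, z):
    """One direct Kalman analysis step: x is the simulated streamflow
    state, P its error covariance, z the observed streamflows, H the
    observation operator, R the observation error covariance."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: 5 river reaches, gauges on reaches 1 and 3.
x = np.array([100.0, 80.0, 60.0, 40.0, 20.0])     # discharge, m3/s
P = np.diag([25.0] * 5)
H = np.zeros((2, 5)); H[0, 1] = H[1, 3] = 1.0
R = np.diag([4.0, 4.0])
x_a, P_a = kalman_update(x, P, H, R, z=np.array([90.0, 35.0]))
```

The same update applies unchanged when H maps model reaches to satellite altimetry virtual stations instead of gauges.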
Measured noise of a scale model high speed propeller at simulated takeoff/approach conditions
NASA Technical Reports Server (NTRS)
Woodward, Richard P.
1987-01-01
A model high-speed advanced propeller, SR-7A, was tested in the NASA Lewis 9x15 foot anechoic wind tunnel at simulated takeoff/approach conditions of 0.2 Mach number. These tests were in support of the full-scale Propfan Test Assessment (PTA) flight program. Acoustic measurements were taken with fixed microphone arrays and with an axially translating microphone probe. Limited aerodynamic measurements were also taken to establish the propeller operating conditions. Tests were conducted with the propeller alone and with three downstream wing configurations. The propeller was run over a range of blade setting angles from 32.0 deg. to 43.6 deg., tip speeds from 183 to 290 m/sec (600 to 950 ft/sec), and angles of attack from -10 deg. to +15 deg. The propeller-alone BPF tone noise was found to increase 10 dB in the flyover plane at 15 deg. propeller axis angle of attack. The installation of the straight wing at a minimum spacing of 0.54 wing chord increased the tone noise 5 dB under the wing at 10 deg. propeller axis angle of attack, while a similarly spaced inboard upswept wing only increased the tone noise 2 dB.
Eyeglass: A Very Large Aperture Diffractive Space Telescope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyde, R; Dixit, S; Weisberg, A
2002-07-29
Eyeglass is a very large aperture (25-100 meter) space telescope consisting of two distinct spacecraft, separated in space by several kilometers. A diffractive lens provides the telescope's large aperture, and a separate, much smaller, space telescope serves as its mobile eyepiece. Use of a transmissive diffractive lens solves two basic problems associated with very large aperture space telescopes; it is inherently fieldable (lightweight and flat, hence packageable and deployable) and virtually eliminates the traditional, very tight, surface shape tolerances faced by reflecting apertures. The potential drawback to use of a diffractive primary (very narrow spectral bandwidth) is eliminated by corrective optics in the telescope's eyepiece. The Eyeglass can provide diffraction-limited imaging with either single-band, multiband, or continuous spectral coverage. Broadband diffractive telescopes have been built at LLNL and have demonstrated diffraction-limited performance over a 40% spectral bandwidth (0.48-0.72 µm). As one approach to packaging a large aperture for launch, a foldable lens has been built and demonstrated. A 75 cm aperture diffractive lens was constructed from 6 panels of 1 mm thick silica; it achieved diffraction-limited performance both before and after folding. This multiple-panel, folding-lens approach is currently being scaled up at LLNL. We are building a 5 meter aperture foldable lens, involving 72 panels of 700 µm thick glass sheets, diffractively patterned to operate as a coherent f/50 lens.
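A back-of-the-envelope calculation shows why apertures of this size are attractive. The sketch below (Python) applies the Rayleigh criterion theta = 1.22*lambda/D; the 0.6 µm wavelength is simply the middle of the demonstrated 0.48-0.72 µm band, and the aperture values are the ones quoted above.

    # Diffraction-limited angular resolution via the Rayleigh criterion.
    RAD_TO_MAS = 206265e3           # radians to milli-arcseconds
    wavelength = 0.6e-6             # m, mid-band of 0.48-0.72 um
    for D in (25.0, 100.0):         # m, aperture diameters quoted above
        theta = 1.22 * wavelength / D
        print(f"D = {D:5.1f} m -> {theta * RAD_TO_MAS:.1f} mas")  # ~6.0 and ~1.5 mas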
Subsampling for dataset optimisation
NASA Astrophysics Data System (ADS)
Ließ, Mareike
2017-04-01
Soil-landscapes have formed by the interaction of soil-forming factors and pedogenic processes. To model these landscapes in their pedodiversity and the underlying processes, a representative, unbiased dataset is required. This concerns model input as well as output data. However, the datasets that are available are often big, highly heterogeneous, and gathered for various purposes, but not to model a particular process or data space. As a first step, the overall data space and/or landscape section to be modelled needs to be identified, including considerations regarding scale and resolution. Then the available dataset needs to be optimised via subsampling to represent this n-dimensional data space well. A couple of well-known sampling designs may be adapted to suit this purpose. The overall approach follows three main strategies: (1) the data space may be condensed and de-correlated by a factor analysis to facilitate the subsampling process. (2) Different methods of pattern recognition serve to structure the n-dimensional data space to be modelled into units which then form the basis for the optimisation of an existing dataset through a sensible selection of samples. Along the way, data units for which there is currently insufficient soil data available may be identified. And (3) random samples from the n-dimensional data space may be replaced by similar samples from the available dataset. Besides being a prerequisite for developing data-driven statistical models, this approach may also help to develop universal process models and identify limitations in existing models.
A nonlinear generalized continuum approach for electro-elasticity including scale effects
NASA Astrophysics Data System (ADS)
Skatulla, S.; Arockiarajan, A.; Sansour, C.
2009-01-01
Materials characterized by an electro-mechanically coupled behaviour fall into the category of so-called smart materials. In particular, electro-active polymers (EAP) have recently attracted much interest because, upon electrical loading, EAP exhibit a large amount of deformation while sustaining large forces. This property can be utilized for actuators in electro-mechanical systems, artificial muscles and so forth. When it comes to smaller structures, it is a well-known fact that the mechanical response deviates from the prediction of classical mechanics theory. These scale effects are due to the fact that the size of the microscopic material constituents of such structures cannot be considered negligibly small anymore compared to the structure's overall dimensions. In this context, so-called generalized continuum formulations have been proven to account for the micro-structural influence on the macroscopic material response. Here, we adopt a strain gradient approach based on a generalized continuum framework [Sansour, C., 1998. A unified concept of elastic-viscoplastic Cosserat and micromorphic continua. J. Phys. IV Proc. 8, 341-348; Sansour, C., Skatulla, S., 2007. A higher gradient formulation and meshfree-based computation for elastic rock. Geomech. Geoeng. 2, 3-15] and extend it to also encompass the electro-mechanically coupled behaviour of EAP. The approach introduces new strain and stress measures which lead to the formulation of a corresponding generalized variational principle. The theory is completed by Dirichlet boundary conditions for the displacement field and its derivatives normal to the boundary, as well as for the electric potential. The basic idea behind this generalized continuum theory is the consideration of a micro- and a macro-space which together span the generalized space. As all quantities are defined in this generalized space, the constitutive law, which is in this work conventional electro-mechanically coupled nonlinear hyperelasticity, is also embedded in the generalized continuum. In this way, material information from the micro-space, which here consists only of the geometrical specifications of the micro-continuum, can naturally enter the constitutive law. Several applications with moving least squares-based approximations (MLS) demonstrate the potential of the proposed method. This particular meshfree method is chosen as it has been proven to be highly flexible with regard to the continuity and consistency required by this generalized approach.
Body frame close coupling wave packet approach to gas phase atom-rigid rotor inelastic collisions
NASA Technical Reports Server (NTRS)
Sun, Y.; Judson, R. S.; Kouri, D. J.
1989-01-01
The close coupling wave packet (CCWP) method is formulated in a body-fixed representation for atom-rigid rotor inelastic scattering. For J > j_max (where J is the total angular momentum and j is the rotational quantum number), the computational cost of propagating the coupled channel wave packets in the body frame is shown to scale approximately as N^(3/2), where N is the total number of channels. For large numbers of channels, this will be much more efficient than the previously developed space-frame CCWP method, which scales approximately as N^2 under the same conditions.
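The practical impact of these scalings is easy to check with a two-line comparison (Python; the channel counts are illustrative): since the body-frame cost grows as N^(3/2) and the space-frame cost as N^2, the relative speedup grows as sqrt(N).

    # Relative cost of space-frame (N^2) vs body-frame (N^(3/2)) propagation.
    for N in (100, 1000, 10000):        # total number of channels
        print(N, N**2 / N**1.5)         # speedup ~ sqrt(N): 10, ~31.6, 100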
Technology-based design and scaling for RTGs for space exploration in the 100 W range
NASA Astrophysics Data System (ADS)
Summerer, Leopold; Roux, Jean Pierre; Pustovalov, Alexey; Gusev, Viacheslav; Rybkin, Nikolai
2011-04-01
This paper presents the results of a study on design considerations for a 100 W radioisotope thermo-electric generator (RTG). Special emphasis has been put on designing a modular, multi-purpose system with a high overall TRL and making full use of the extensive Russian heritage in the design of radioisotope power systems. The modular approach allowed insight into the scaling of such RTGs covering the electric power range from 50 to 200 W_e (EoL). The retained concept is based on a modular thermal block structure, radiative inner-RTG heat transfer and a two-stage thermo-electric conversion system.
Minimal scales from an extended Hilbert space
NASA Astrophysics Data System (ADS)
Kober, Martin; Nicolini, Piero
2010-12-01
We consider an extension of the conventional quantum Heisenberg algebra, assuming that coordinates as well as momenta fulfil nontrivial commutation relations. As a consequence, a minimal length and a minimal mass scale are implemented. Our commutators do not depend on positions and momenta and we provide an extension of the coordinate coherent state approach to noncommutative geometry. We explore, as a toy model, the corresponding quantum field theory in a (2+1)-dimensional spacetime. Then we investigate the more realistic case of a (3+1)-dimensional spacetime, foliated into noncommutative planes. As a result, we obtain propagators, which are finite in the ultraviolet as well as the infrared regime.
Friction Stir Welding of Large Scale Cryogenic Tanks for Aerospace Applications
NASA Technical Reports Server (NTRS)
Russell, Carolyn; Ding, R. Jeffrey
1998-01-01
The Marshall Space Flight Center (MSFC) has established a facility for the joining of large-scale aluminum cryogenic propellant tanks using the friction stir welding process. Longitudinal welds, approximately five meters in length, have been made by retrofitting an existing vertical fusion weld system, designed to fabricate tank barrel sections ranging from two to ten meters in diameter. The structural design requirements of the tooling, clamping and travel system will be described in this presentation along with process controls and real-time data acquisition developed for this application. The approach to retrofitting other large welding tools at MSFC with the friction stir welding process will also be discussed.
Accelerated horizons and Planck-scale kinematics
NASA Astrophysics Data System (ADS)
Arzano, Michele; Laudonio, Matteo
2018-04-01
We extend the concept of accelerated horizons to the framework of deformed relativistic kinematics at the Planck scale. We show that the nontrivial effects due to symmetry deformation manifest in a finite blueshift for field modes as measured by a Rindler observer approaching the horizon. We investigate whether, at a field theoretic level, this effect could manifest in the possibility of a finite horizon contribution to the entropy, a sort of covariant brick wall. In the specific model of symmetry deformation considered, it will turn out that a nondiverging density of modes close to the horizon can be achieved only by introducing a momentum space measure which violates Lorentz invariance.
The Importance of Neighborhood Scheme Selection in Agent-based Tumor Growth Modeling.
Tzedakis, Georgios; Tzamali, Eleftheria; Marias, Kostas; Sakkalis, Vangelis
2015-01-01
Modeling tumor growth has proven a very challenging problem, mainly due to the fact that tumors are highly complex systems involving dynamic interactions that span multiple scales in both time and space. The desire to describe interactions at various scales has given rise to modeling approaches that use both continuous and discrete variables, known as hybrid approaches. This work presents a hybrid model on a 2D square lattice focusing on cell movement dynamics, as these play an important role in tumor morphology, invasion and metastasis and are considered indicators for the stage of malignancy used for early prognosis and effective treatment. Considering various distributions of the microenvironment, we explore how Neumann vs. Moore neighborhood schemes affect tumor growth and morphology. The results indicate that neighborhood selection is critical under specific conditions that include (i) an increased hapto-/chemotactic coefficient, (ii) a rugged microenvironment and (iii) ECM degradation.
Complex Quantum Network Manifolds in Dimension d > 2 are Scale-Free
NASA Astrophysics Data System (ADS)
Bianconi, Ginestra; Rahmede, Christoph
2015-09-01
In quantum gravity, several approaches have been proposed until now for the quantum description of discrete geometries. These theoretical frameworks include loop quantum gravity, causal dynamical triangulations, causal sets, quantum graphity, and energetic spin networks. Most of these approaches describe discrete spaces as homogeneous network manifolds. Here we define Complex Quantum Network Manifolds (CQNM) describing the evolution of quantum network states, constructed from growing simplicial complexes of dimension d. We show that in d = 2 CQNM are homogeneous networks, while for d > 2 they are scale-free, i.e., they are characterized by large inhomogeneities of degrees, like most complex networks. From the self-organized evolution of CQNM, quantum statistics emerge spontaneously. We define the generalized degrees associated with the δ-faces of the d-dimensional CQNM, and we show that the statistics of these generalized degrees can follow either Fermi-Dirac, Boltzmann or Bose-Einstein distributions depending on the dimension of the δ-faces.
NASA Astrophysics Data System (ADS)
Tourret, D.; Karma, A.; Clarke, A. J.; Gibbs, P. J.; Imhoff, S. D.
2015-06-01
We present a three-dimensional (3D) extension of a previously proposed multi-scale Dendritic Needle Network (DNN) approach for the growth of complex dendritic microstructures. Using a new formulation of the DNN dynamics equations for dendritic paraboloid-branches of a given thickness, one can directly extend the DNN approach to 3D modeling. We validate this new formulation against known scaling laws and analytical solutions that describe the early transient and steady-state growth regimes, respectively. Finally, we compare the predictions of the model to in situ X-ray imaging of Al-Cu alloy solidification experiments. The comparison shows a very good quantitative agreement between 3D simulations and thin sample experiments. It also highlights the importance of full 3D modeling to accurately predict the primary dendrite arm spacing that is significantly over-estimated by 2D simulations.
NASA Technical Reports Server (NTRS)
Yurchak, Boris S.
2010-01-01
The study of the collective effects of radar scattering from an aggregation of discrete scatterers randomly distributed in space is important for better understanding the origin of the backscatter from spatially extended geophysical targets (SEGT). We consider the microstructure irregularities of a SEGT as the essential factor that affects radar backscatter. To evaluate their contribution, this study uses the "slice" approach: particles close to the front of the incident radar wave are considered to reflect the incident electromagnetic wave coherently. The radar equation for a SEGT is derived. The equation includes contributions to the total backscatter from correlated small-scale fluctuations of the slice's reflectivity. The correlation contribution behaves in accordance with an idea proposed earlier by Smith (1964) on physical grounds. The slice approach allows parameterizing the features of the SEGT's inhomogeneities.
Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.
Haghverdi, Laleh; Lun, Aaron T L; Morgan, Michael D; Marioni, John C
2018-06-01
Large-scale single-cell RNA sequencing (scRNA-seq) data sets that are produced in different laboratories and at different times contain batch effects that may compromise the integration and interpretation of the data. Existing scRNA-seq analysis methods incorrectly assume that the composition of cell populations is either known or identical across batches. We present a strategy for batch correction based on the detection of mutual nearest neighbors (MNNs) in the high-dimensional expression space. Our approach does not rely on predefined or equal population compositions across batches; instead, it requires only that a subset of the population be shared between batches. We demonstrate the superiority of our approach compared with existing methods by using both simulated and real scRNA-seq data sets. Using multiple droplet-based scRNA-seq data sets, we demonstrate that our MNN batch-effect-correction method can be scaled to large numbers of cells.
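The core detection step is easy to sketch (Python; a naive O(n^2) pairwise-distance implementation for illustration only; the published method additionally computes and smooths batch-correction vectors from the detected pairs).

    import numpy as np

    def mnn_pairs(X, Y, k=20):
        """Indices (i, j) such that X[i] and Y[j] are mutual nearest neighbors."""
        d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # pairwise distances
        nn_xy = np.argsort(d, axis=1)[:, :k]     # k NNs in batch Y for each X cell
        nn_yx = np.argsort(d, axis=0)[:k, :].T   # k NNs in batch X for each Y cell
        return [(i, j) for i in range(len(X))
                for j in nn_xy[i] if i in nn_yx[j]]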
Scale-by-scale contributions to Lagrangian particle acceleration
NASA Astrophysics Data System (ADS)
Lalescu, Cristian C.; Wilczek, Michael
2017-11-01
Fluctuations on a wide range of scales in both space and time are characteristic of turbulence. Lagrangian particles, advected by the flow, probe these fluctuations along their trajectories. In an effort to isolate the influence of the different scales on Lagrangian statistics, we employ direct numerical simulations (DNS) combined with a filtering approach. Specifically, we study the acceleration statistics of tracers advected in filtered fields to characterize the smallest temporal scales of the flow. Emphasis is put on the acceleration variance as a function of filter scale, along with the scaling properties of the relevant terms of the Navier-Stokes equations. We furthermore discuss scaling ranges for higher-order moments of the tracer acceleration, as well as the influence of the choice of filter on the results. Starting from the Lagrangian tracer acceleration as the short time limit of the Lagrangian velocity increment, we also quantify the influence of filtering on Lagrangian intermittency. Our work complements existing experimental results on intermittency and accelerations of finite-sized, neutrally-buoyant particles: for the passive tracers used in our DNS, feedback effects are neglected such that the spatial averaging effect is cleanly isolated.
Potential space applications of nanomaterials and standardization issues
NASA Astrophysics Data System (ADS)
Voronina, Ekaterina; Novikov, Lev
Nanomaterials surpass traditional materials for space applications in many aspects due to their unique properties associated with the nanoscale size of their constituents. This superiority in mechanical, thermal, electrical and optical properties will evidently inspire a wide range of applications in next-generation spacecraft intended for long-term (~15-20 year) operation in near-Earth orbits, in automatic and manned interplanetary missions, and in the construction of inhabited bases on the Moon. Nanocomposites with nanoclays, carbon nanotubes and various nanoparticles as fillers are among the most promising materials for space applications. They may be used as lightweight, strong structural materials as well as functional and smart materials of general and specific application, e.g. thermal stabilization, radiation shielding, electrostatic charge mitigation, protection against atomic oxygen attack and space debris impacts, etc. Currently, ISO activity on developing standards concerning different issues of nanomaterial manufacturing and applications is considerable. In this presentation, a brief review of existing standards and standards under development in this field is given. Most such standards relate to nanoparticle and nanotube production and characterization, so the next important step in this activity is the creation of standards on nanomaterial properties and their behavior under different environmental conditions, including extreme environments. Near-Earth space is an extreme environment for materials due to high vacuum, space radiation, hot and cold plasma, micrometeoroids and space debris, temperature differences, etc. Existing experimental and theoretical data demonstrate that the response of nanomaterials to various space environment effects may differ substantially from that of conventional bulk spacecraft materials. Therefore, it is necessary to determine the space environment components critical for nanomaterials and to develop novel methods for the mathematical and experimental simulation of the space environment impact on nanomaterials. Computer simulation is a very important scientific tool for explaining various phenomena and predicting the behavior of existing and newly designed materials under different conditions. The changes in material properties caused by the space environment are determined by structural parameters and processes related to different spatial scales: from the size of atoms and molecules to the size of macro-objects. To study the response of nanomaterials to the space environment, it is necessary to investigate and simulate processes occurring at the nanoscale and to reveal the links between them and the processes typical of the micro- and macroscale. Therefore, a multiscale simulation approach is needed, and different methods should be applied for the various scales. In this presentation, some approaches to multiscale computer simulation of the impact of selected space environment components on nanomaterials are presented and discussed.
A hybrid approach to protect palmprint templates.
Liu, Hailun; Sun, Dongmei; Xiong, Ke; Qiu, Zhengding
2014-01-01
Biometric template protection is indispensable for protecting personal privacy in large-scale deployments of biometric systems. Accuracy, changeability, and security are three critical requirements for template protection algorithms. However, existing template protection algorithms cannot satisfy all these requirements well. In this paper, we propose a hybrid approach that combines random projection and fuzzy vault to improve performance on all three criteria. A heterogeneous space is designed to combine random projection and fuzzy vault properly in the hybrid scheme. A new chaff point generation method is also proposed to enhance the security of the heterogeneous vault. Theoretical analyses of the proposed hybrid approach in terms of accuracy, changeability, and security are given in this paper. Experimental results on a palmprint database support the theoretical analyses and demonstrate the effectiveness of the proposed hybrid approach.
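The random-projection stage is straightforward to sketch (Python; the dimensions and seed are illustrative, and the fuzzy-vault stage is not shown). Changeability comes from re-issuing a new projection matrix whenever a protected template is compromised.

    import numpy as np

    template = np.random.rand(1024)              # stand-in palmprint feature vector
    user_key = 42                                # per-user seed: enables revocation
    rng = np.random.default_rng(user_key)
    R = rng.standard_normal((128, 1024)) / np.sqrt(128)  # random projection matrix
    protected = R @ template                     # reduced, revocable template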
Strategies for Validation Testing of Ground Systems
NASA Technical Reports Server (NTRS)
Annis, Tammy; Sowards, Stephanie
2009-01-01
In order to accomplish the full Vision for Space Exploration announced by former President George W. Bush in 2004, NASA will have to develop a new space transportation system and supporting infrastructure. The main portion of this supporting infrastructure will reside at the Kennedy Space Center (KSC) in Florida and will either be newly developed or adapted from existing vehicle processing and launch facilities, including Ground Support Equipment (GSE). This type of large-scale launch site development is unprecedented since the time of the Apollo Program. To accomplish it successfully within the limited budget and schedule constraints, a combination of traditional and innovative strategies for Verification and Validation (V&V) has been developed. The core of these strategies consists of a building-block approach to V&V, starting with component V&V and ending with a comprehensive end-to-end validation test of the complete launch site, called a Ground Element Integration Test (GEIT). This paper will outline these strategies and provide the high-level planning for meeting the challenges of implementing V&V on a large-scale development program. KEY WORDS: Systems, Elements, Subsystem, Integration Test, Ground Systems, Ground Support Equipment, Component, End Item, Test and Verification Requirements (TVR), Verification Requirements (VR)
Technology Development and Demonstration Concepts for the Space Elevator
NASA Technical Reports Server (NTRS)
Smitherman, David V., Jr.
2004-01-01
During the 1990s, several discoveries and advances in the development of carbon nanotube (CNT) materials indicated that material strengths many times greater than those of common high-strength composite materials might be possible. Progress in the development of this material led to renewed interest in the space elevator concept for construction of a tether structure from the surface of the Earth through a geostationary orbit (GEO), thus creating a new approach to Earth-to-orbit transportation infrastructures. To investigate this possibility the author, in 1999, managed for NASA a space elevator workshop at the Marshall Space Flight Center to explore the potential feasibility of space elevators in the 21st century, and to identify the critical technologies and demonstration missions needed to make development of space elevators feasible. Since that time, a NASA Institute for Advanced Concepts (NIAC) funded study of the space elevator proposed a concept for a simpler first space elevator system using more near-term technologies. This paper will review some of the latest ideas for space elevator development, the critical technologies required, and some of the ideas proposed for demonstrating the feasibility of full-scale development of an Earth-to-GEO space elevator. Critical technologies include CNT composite materials, wireless power transmission, orbital object avoidance, and large-scale tether deployment and control systems. Numerous paths for technology demonstrations have been proposed utilizing ground experiments, air structures, LEO missions, the Space Shuttle, the International Space Station, GEO demonstration missions, demonstrations at the lunar L1 or L2 points, and other locations. In conclusion, this paper finds that the most critical technologies for an Earth-to-GEO space elevator include CNT composite materials development and object avoidance technologies; that lack of successful development of these technologies need not preclude continued development of space elevator systems in general; and that the critical technologies required for the Earth-to-GEO space elevator are not required for similar systems at the Moon, Mars, Europa, or for orbital tether systems at GEO, Luna, and other locations.
Cihan, Abdullah; Birkholzer, Jens; Trevisan, Luca; ...
2014-12-31
During CO2 injection and storage in deep reservoirs, the injected CO2 enters an initially brine-saturated porous medium, and after the injection stops, natural groundwater flow eventually displaces the injected mobile-phase CO2, leaving behind residual non-wetting fluid. Accurate modeling of two-phase flow processes is needed for predicting the fate and transport of injected CO2, evaluating environmental risks and designing more effective storage schemes. The entrapped non-wetting fluid saturation is typically a function of the spatially varying maximum saturation at the end of injection. At the pore scale, the distribution of void sizes and the connectivity of the void space play a major role in the macroscopic hysteresis behavior and capillary entrapment of wetting and non-wetting fluids. This paper presents the development of an approach based on the connectivity of the void space for modeling hysteretic capillary pressure-saturation-relative permeability relationships. The new approach uses the void-size distribution and a measure of void space connectivity to compute the hysteretic constitutive functions and to predict entrapped fluid phase saturations. Two functions, the drainage connectivity function and the wetting connectivity function, are introduced to characterize the connectivity of fluids in the void space during drainage and wetting processes. These functions can be estimated through pore-scale simulations in computer-generated porous media or from traditional experimental measurements of primary drainage and main wetting curves. The hysteresis model for saturation-capillary pressure is tested successfully by comparing the model-predicted residual saturation and scanning curves with actual data sets obtained from column experiments found in the literature. A numerical two-phase model simulator with the new hysteresis functions is tested against laboratory experiments conducted in a quasi-two-dimensional flow cell (91.4 cm × 5.6 cm × 61 cm), packed with homogeneous and heterogeneous sands. Initial results show that the model can predict the spatial and temporal distribution of injected fluid during the experiments reasonably well. However, further analyses are needed to comprehensively test the ability of the model to predict transient two-phase flow processes and capillary entrapment in geological reservoirs during geological carbon sequestration.
Models of Small-Scale Patchiness
NASA Technical Reports Server (NTRS)
McGillicuddy, D. J.
2001-01-01
Patchiness is perhaps the most salient characteristic of plankton populations in the ocean. The scale of this heterogeneity spans many orders of magnitude in its spatial extent, ranging from planetary down to microscale. It has been argued that patchiness plays a fundamental role in the functioning of marine ecosystems, insofar as the mean conditions may not reflect the environment to which organisms are adapted. Understanding the nature of this patchiness is thus one of the major challenges of oceanographic ecology. The patchiness problem is fundamentally one of physical-biological-chemical interactions. This interconnection arises from three basic sources: (1) ocean currents continually redistribute dissolved and suspended constituents by advection; (2) space-time fluctuations in the flows themselves impact biological and chemical processes, and (3) organisms are capable of directed motion through the water. This tripartite linkage poses a difficult challenge to understanding oceanic ecosystems: differentiation between the three sources of variability requires accurate assessment of property distributions in space and time, in addition to detailed knowledge of organismal repertoires and the processes by which ambient conditions control the rates of biological and chemical reactions. Various methods of observing the ocean tend to lie parallel to the axes of the space/time domain in which these physical-biological-chemical interactions take place. Given that a purely observational approach to the patchiness problem is not tractable with finite resources, the coupling of models with observations offers an alternative which provides a context for synthesis of sparse data with articulations of fundamental principles assumed to govern functionality of the system. In a sense, models can be used to fill the gaps in the space/time domain, yielding a framework for exploring the controls on spatially and temporally intermittent processes. The following discussion highlights only a few of the multitude of models which have yielded insight into the dynamics of plankton patchiness. In addition, this particular collection of examples is intended to furnish some exposure to the diversity of modeling approaches which can be brought to bear on the problem. These approaches range from abstract theoretical models intended to elucidate specific processes, to complex numerical formulations which can be used to actually simulate observed distributions in detail.
Space Laboratory on a Table Top: A Next Generative ECLSS design and diagnostic tool
NASA Technical Reports Server (NTRS)
Ramachandran, N.
2005-01-01
This paper describes the development plan for a comprehensive research and diagnostic tool for aspects of advanced life support systems in space-based laboratories. Specifically, it aims to build a high-fidelity tabletop model that can be used for risk mitigation, failure mode analysis, contamination tracking, and reliability testing. We envision a comprehensive approach involving experimental work coupled with numerical simulation to develop this diagnostic tool. The centerpiece is a 10% scale transparent model of a space platform such as the International Space Station that operates with water, or a specific matched index of refraction liquid, as the working fluid. This allows the scaling of a 10 ft x 10 ft x 10 ft room with air flow to a 1 ft x 1 ft x 1 ft tabletop model with water/liquid flow. Dynamic similitude for this length scale dictates model velocities to be 67% of full scale, and thereby the time scale of the model to be 15% of the full-scale system; identical processes in the model are completed in 15% of the full-scale time. The use of an index-matching fluid (a fluid that matches the refractive index of cast acrylic, the model material) allows making the entire model (with complex internal geometry) transparent and hence conducive to non-intrusive optical diagnostics. Using such a system one can test environmental control parameters such as core (axial) flows and cross flows (from registers and diffusers), investigate potential problem areas such as flow short circuits, inadequate oxygen content, and build-up of other gases beyond desirable levels, test mixing processes within the system at local nodes or compartments, and assess overall system performance. The system allows quantitative measurements of contaminants introduced in the system and allows testing and optimizing the tracking process and removal of contaminants. The envisaged system will be modular and hence flexible for quick configuration changes and subsequent testing. The data and inferences from the tests will allow for improvements in the development and design of next-generation life support systems and configurations. Preliminary experimental and modeling work in this area will be presented. This involves testing of a single inlet-exit model with detailed 3-D flow visualization and quantitative diagnostics, and computational modeling of the system.
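The quoted 67% and 15% figures follow directly from Reynolds-number matching between the air-filled full-scale cabin and the water-filled model, as the minimal check below shows (Python; standard room-temperature kinematic viscosities assumed).

    nu_air, nu_water = 1.5e-5, 1.0e-6   # kinematic viscosities, m^2/s
    L = 0.1                             # model/full-scale length ratio (10% scale)
    V = (1 / L) * (nu_water / nu_air)   # velocity ratio from Re matching: ~0.67
    t = L / V                           # time-scale ratio: ~0.15
    print(f"velocity ratio {V:.2f}, time ratio {t:.2f}")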
Observing the Global Water Cycle from Space
NASA Technical Reports Server (NTRS)
Hildebrand, P. H.
2004-01-01
This paper presents an approach to measuring all major components of the water cycle from space. Key elements of the global water cycle are discussed in terms of the storage of water-in the ocean, air, cloud and precipitation, in soil, ground water, snow and ice, and in lakes and rivers, and in terms of the global fluxes of water between these reservoirs. Approaches to measuring or otherwise evaluating the global water cycle are presented, and the limitations on known accuracy for many components of the water cycle are discussed, as are the characteristic spatial and temporal scales of the different water cycle components. Using these observational requirements for a global water cycle observing system, an approach to measuring the global water cycle from space is developed. The capabilities of various active and passive microwave instruments are discussed, as is the potential of supporting measurements from other sources. Examples of space observational systems, including TRMM/GPM precipitation measurement, cloud radars, soil moisture, sea surface salinity, temperature and humidity profiling, other measurement approaches and assimilation of the microwave and other data into interpretative computer models are discussed to develop the observational possibilities. The selection of orbits is then addressed, for orbit selection and antenna size/beamwidth considerations determine the sampling characteristics for satellite measurement systems. These considerations dictate a particular set of measurement possibilities, which are then matched to the observational sampling requirements based on the science. The results define a network of satellite instrumentation systems, many in low Earth orbit, a few in geostationary orbit, and all tied together through a sampling network that feeds the observations into a data-assimilative computer model.
Towards Remotely Sensed Composite Global Drought Risk Modelling
NASA Astrophysics Data System (ADS)
Dercas, Nicholas; Dalezios, Nicolas
2015-04-01
Drought is a multi-faceted issue and requires a multi-faceted assessment. Droughts may originate in precipitation deficits, which, sequentially and across different time and space scales, may impact soil moisture, plant wilting, stream flow, wildfire, groundwater levels, famine and social conditions. There is a need to monitor drought even at a global scale. Key variables for monitoring drought include climate data, soil moisture, stream flow, groundwater, reservoir and lake levels, snow pack, short- to long-range forecasts, vegetation health and fire danger. However, there is no single definition of drought, and there are different drought indicators and indices even for each drought type. There are already four operational global drought risk monitoring systems, namely the U.S. Drought Monitor, the European Drought Observatory (EDO), and the African and Australian systems, respectively. These systems require further research to improve the level of accuracy and the time and space scales, to consider all types of drought and, eventually, to achieve operational efficiency. This paper attempts to contribute to the above-mentioned objectives. Based on a similar general methodology, the multi-indicator approach is considered. This has resulted from previous research in the Mediterranean region, an agriculturally vulnerable region, using several drought indices separately, namely RDI and VHI. The proposed scheme attempts to consider different space scaling based on agroclimatic zoning through remotely sensed techniques and several indices. Needless to say, the agroclimatic potential of agricultural areas has to be assessed in order to achieve sustainable and efficient use of natural resources in combination with production maximization. Similarly, the time scale is also considered by addressing drought-related impacts driven by precipitation deficits on time scales ranging from a few days to a few months, such as non-irrigated agriculture, topsoil moisture, wildfire danger, range and pasture conditions and unregulated stream flows. Keywords: Remote sensing; Composite Drought Indicators; Global Drought Risk Monitoring.
Spectral Mass Gauging of Unsettled Liquid with Acoustic Waves
NASA Technical Reports Server (NTRS)
Feller, Jeffrey; Kashani, Ali; Khasin, Michael; Muratov, Cyrill; Osipov, Viatcheslav; Sharma, Surendra
2018-01-01
Propellant mass gauging is one of the key technologies required to enable the next step in NASA's space exploration program. At present, there is no reliable method to accurately measure the amount of unsettled liquid propellant of an unknown configuration in a propellant tank in micro- or zero gravity. We propose a new approach to use sound waves to probe the resonance frequencies of the two-phase liquid-gas mixture and take advantage of the mathematical properties of the high frequency spectral asymptotics to determine the volume fraction of the tank filled with liquid. We report the current progress in exploring the feasibility of this approach, both experimental and theoretical. Excitation and detection procedures using solenoids for excitation and both hydrophones and accelerometers for detection have been developed. A 3% uncertainty for mass-gauging was demonstrated for a 200-liter tank partially filled with water for various unsettled configurations, such as tilts and artificial ullages. A new theoretical formula for the counting function associated with axially symmetric modes was derived. Scaling analysis of the approach has been performed to predict an adequate performance for in-space applications.
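The spectral-asymptotics idea can be illustrated with Weyl's law for a single-phase 3D cavity: the number of acoustic modes below frequency f grows as N(f) ≈ (4π/3) V (f/c)^3, so the high-frequency slope of the mode counting function plotted against f^3 encodes the fluid volume V. The toy sketch below (Python) recovers a known volume from an idealized spectrum; the paper's actual counting function for axially symmetric modes in a two-phase mixture is more involved.

    import numpy as np

    c = 1480.0                          # m/s, sound speed in water
    V_true = 0.2                        # m^3, a 200-liter tank as above
    n = np.arange(1, 4001)
    f = c * (3 * n / (4 * np.pi * V_true))**(1 / 3)  # idealized Weyl spectrum
    slope = np.polyfit(f**3, n, 1)[0]                # fit N(f) against f^3
    print(3 * slope * c**3 / (4 * np.pi))            # recovers ~0.2 m^3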
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew
2014-03-20
Several approaches have been developed in the literature for solving flow and transport at the pore scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system, which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.
A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.
Xue, Xiaoming; Zhou, Jianzhong
2017-01-01
To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis and artificial intelligence techniques, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the execution of the proposed method is divided into three steps, i.e., preliminary fault detection, fault type recognition and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made by a statistical analysis method based on permutation entropy theory. If a fault exists, the following two processes based on the artificial intelligence approach are performed to further recognize the fault type and then identify the fault degree. For these two subsequent steps, mixed-domain state features containing time-domain, frequency-domain and multi-scale features are extracted to represent the fault peculiarity under different working conditions. As a powerful time-frequency analysis method, the fast EEMD method is employed to obtain the multi-scale features. Furthermore, due to the information redundancy and the submergence of the original feature space, a novel manifold learning method (modified LGPCA) is introduced to realize low-dimensional representations of the high-dimensional feature space. Finally, two cases with 12 working conditions each were employed to evaluate the performance of the proposed method, with vibration signals measured from an experimental bench of rolling element bearings. The analysis results showed the effectiveness and superiority of the proposed method, whose diagnostic logic is well suited to practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
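For the first (preliminary detection) step, the permutation-entropy statistic can be sketched as follows (Python; the order and delay parameters are illustrative choices, not the authors' settings).

    import numpy as np
    from math import factorial
    from collections import Counter

    def permutation_entropy(x, order=3, delay=1):
        """Normalized permutation entropy: near 0 for regular, near 1 for random."""
        patterns = [tuple(np.argsort(x[i:i + order * delay:delay]))
                    for i in range(len(x) - (order - 1) * delay)]
        p = np.array(list(Counter(patterns).values()), float)
        p /= p.sum()
        return -(p * np.log(p)).sum() / np.log(factorial(order))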
A new methodology for determination of macroscopic transport parameters in drying porous media
NASA Astrophysics Data System (ADS)
Attari Moghaddam, A.; Kharaghani, A.; Tsotsas, E.; Prat, M.
2015-12-01
Two main approaches have been used to model the drying process. The first approach considers the partially saturated porous medium as a continuum, and partial differential equations are used to describe the mass, momentum and energy balances of the fluid phases. The continuum-scale models (CM) obtained by this approach involve constitutive laws which require effective material properties, such as the diffusivity, permeability, and thermal conductivity, which are often determined by experiments. The second approach considers the material at the pore scale, where the void space is represented by a network of pores (PN). Micro- or nanofluidic models used in each pore give rise to a large system of ordinary differential equations with degrees of freedom at each node of the pore network. In this work, the moisture transport coefficient (D) and the pseudo desorption isotherms inside the network and at the evaporative surface are estimated by post-processing three-dimensional pore network drying simulations for fifteen realizations of the pore space geometry drawn from a given probability distribution. A slice sampling method is used to extract these parameters from the PN simulations. The moisture transport coefficient obtained in this way is shown in Fig. 1a. The minimum of the average D values marks the transition between the liquid-dominated and vapor-dominated moisture transport regions; a similar behavior has been observed in previous experimental findings. A function is fitted to the average D values and then fed into the non-linear moisture diffusion equation. The saturation profiles obtained from PN and CM simulations are shown in Fig. 1b. Figure 1: (a) extracted moisture transport coefficient during drying for fifteen realizations of the pore network, (b) average moisture profiles during drying obtained from PN and CM simulations.
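To illustrate how the extracted coefficient is used on the continuum side, below is a minimal explicit finite-difference sketch (Python) of the nonlinear moisture diffusion equation dS/dt = d/dx(D(S) dS/dx); the D(S) shape, high at the wet and dry ends with a minimum in between, only mimics the transition described above, and none of the numbers are the fitted PN values.

    import numpy as np

    def D(S):                                   # toy moisture transport coefficient, m^2/s
        return 1e-6 * (10 * S**3 + 0.5 * np.exp(-8 * S))

    nx, dx, dt = 50, 1e-3, 1e-2                 # cells, spacing (m), time step (s)
    S = np.ones(nx); S[0] = 0.0                 # saturated sample, dry evaporative surface
    for _ in range(20000):                      # dt satisfies dt < dx**2 / (2*max(D))
        Dm = 0.5 * (D(S[1:]) + D(S[:-1]))       # diffusivity at cell interfaces
        flux = -Dm * np.diff(S) / dx
        S[1:-1] -= dt / dx * np.diff(flux)
        S[-1] = S[-2]                           # sealed bottom: zero flux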
Scaling Theory of Entanglement at the Many-Body Localization Transition.
Dumitrescu, Philipp T; Vasseur, Romain; Potter, Andrew C
2017-09-15
We study the universal properties of eigenstate entanglement entropy across the transition between many-body localized (MBL) and thermal phases. We develop an improved real space renormalization group approach that enables numerical simulation of large system sizes and systematic extrapolation to the infinite system size limit. For systems smaller than the correlation length, the average entanglement follows a subthermal volume law, whose coefficient is a universal scaling function. The full distribution of entanglement follows a universal scaling form, and exhibits a bimodal structure that produces universal subleading power-law corrections to the leading volume law. For systems larger than the correlation length, the short interval entanglement exhibits a discontinuous jump at the transition from fully thermal volume law on the thermal side, to pure area law on the MBL side.
Wavelet synthetic method for turbulent flow.
Zhou, Long; Rauh, Cornelia; Delgado, Antonio
2015-07-01
Based on the idea of random cascades on wavelet dyadic trees and the energy cascade model known as the wavelet p model, a series of velocity increments in two-dimensional space is constructed at different levels of scale. The dynamics is imposed on the generated scales by solving the Euler equation in the Lagrangian framework. A dissipation model is used to compensate for the limitation of the p model, which is only valid in the inertial range. Wavelet reconstruction as well as multiresolution analysis is then performed on each scale. As a result, a type of isotropic velocity field is created. The statistical properties show that the constructed velocity fields share many important features with real turbulence. The pertinence of this approach for predicting flow intermittency is also discussed.
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach uses genetic algorithms (GA) and has the advantages of obtaining a reduced-order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state-space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function that penalizes the response deviation between the full- and reduced-order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
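The fitness evaluation at the heart of the GA loop can be sketched as follows (Python; a simple step-response deviation for discrete SISO state-space models, with the GA operators over the transformed-matrix entries omitted and all shapes illustrative).

    import numpy as np

    def step_response(A, B, C, D, n=100):
        """Output of a discrete SISO state-space model under a unit step input."""
        x, y = np.zeros(len(A)), []
        for _ in range(n):
            y.append(C @ x + D)       # y_k = C x_k + D u_k, with u_k = 1
            x = A @ x + B             # x_{k+1} = A x_k + B u_k
        return np.array(y)

    def fitness(full, reduced, n=100):
        """Higher is better: penalizes full-vs-reduced response deviation."""
        e = step_response(*full, n) - step_response(*reduced, n)
        return -np.sum(e**2)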
Micro-Macro Simulation of Viscoelastic Fluids in Three Dimensions
NASA Astrophysics Data System (ADS)
Rüttgers, Alexander; Griebel, Michael
2012-11-01
The development of the chemical industry resulted in various complex fluids that cannot be correctly described by classical fluid mechanics. For instance, this includes paint, engine oils with polymeric additives and toothpaste. We currently perform multiscale viscoelastic flow simulations for which we have coupled our three-dimensional Navier-Stokes solver NaSt3dGPF with the stochastic Brownian configuration field method on the micro-scale. In this method, we represent a viscoelastic fluid as a dumbbell system immersed in a three-dimensional Newtonian liquid which leads to a six-dimensional problem in space. The approach requires large computational resources and therefore depends on an efficient parallelisation strategy. Our flow solver is parallelised with a domain decomposition approach using MPI. It shows excellent scale-up results for up to 128 processors. In this talk, we present simulation results for viscoelastic fluids in square-square contractions due to their relevance for many engineering applications such as extrusion. Another aspect of the talk is the parallel implementation in NaSt3dGPF and the parallel scale-up and speed-up behaviour.
Toward multiscale modelings of grain-fluid systems
NASA Astrophysics Data System (ADS)
Chareyre, Bruno; Yuan, Chao; Montella, Eduard P.; Salager, Simon
2017-06-01
Computationally efficient methods have been developed for simulating partially saturated granular materials in the pendular regime. In contrast, expensive direct resolution of the two-phase fluid dynamics problem can hardly be avoided for mixed pendular-funicular situations or even saturated regimes. Following previous developments for single-phase flow, a pore-network approach to the coupling problems is described. The geometry and movements of phases and interfaces are described on the basis of a tetrahedrization of the pore space, introducing elementary objects such as bridge, meniscus, pore body and pore throat, together with local rules of evolution. As firmly established local rules are still missing for some aspects (entry capillary pressure and pore-scale pressure-saturation relations, forces on the grains, or kinetics of transfers in mixed situations), a multi-scale numerical framework is introduced, enhancing the pore-network approach with the help of direct simulations. Small subsets of a granular system are extracted, in which multiphase scenarios are solved using the lattice Boltzmann method (LBM). In turn, a global problem is assembled and solved at the network scale, as illustrated by a simulated primary drainage.
Classical Wave Model of Quantum-Like Processing in Brain
NASA Astrophysics Data System (ADS)
Khrennikov, A.
2011-01-01
We discuss the conjecture of quantum-like (QL) processing of information in the brain. It is not based on a physical quantum brain (e.g., Penrose), i.e., on quantum physical carriers of information. In our approach, the brain creates the QL representation (QLR) of information in Hilbert space and uses quantum information rules in decision making. The existence of such a QLR was (at least preliminarily) confirmed by experimental data from cognitive psychology. The violation of the law of total probability in these experiments is an important sign of the nonclassicality of the data. In the so-called "constructive wave function approach," such data can be represented by complex amplitudes. We presented the QL model of decision making in earlier work [1, 2]. In this paper we speculate on a possible physical realization of the QLR in the brain: a classical wave model producing the QLR. It is based on the variety of time scales in the brain. Each pair of scales (fine: the background fluctuations of the electromagnetic field; rough: the cognitive image scale) induces a QL representation. The background field plays the crucial role in creating "superstrong QL correlations" in the brain.
Phylogenetic approaches reveal biodiversity threats under climate change
NASA Astrophysics Data System (ADS)
González-Orozco, Carlos E.; Pollock, Laura J.; Thornhill, Andrew H.; Mishler, Brent D.; Knerr, Nunzio; Laffan, Shawn W.; Miller, Joseph T.; Rosauer, Dan F.; Faith, Daniel P.; Nipperess, David A.; Kujala, Heini; Linke, Simon; Butt, Nathalie; Külheim, Carsten; Crisp, Michael D.; Gruber, Bernd
2016-12-01
Predicting the consequences of climate change for biodiversity is critical to conservation efforts. Extensive range losses have been predicted for thousands of individual species, but less is known about how climate change might impact whole clades and landscape-scale patterns of biodiversity. Here, we show that climate change scenarios imply significant changes in phylogenetic diversity and phylogenetic endemism at a continental scale in Australia using the hyper-diverse clade of eucalypts. We predict that within the next 60 years the vast majority of species distributions (91%) across Australia will shrink in size (on average by 51%) and shift south on the basis of projected suitable climatic space. Geographic areas currently with high phylogenetic diversity and endemism are predicted to change substantially in future climate scenarios. Approximately 90% of the current areas with concentrations of palaeo-endemism (that is, places with old evolutionary diversity) are predicted to disappear or shift their location. These findings show that climate change threatens whole clades of the phylogenetic tree, and that the outlined approach can be used to forecast areas of biodiversity losses and continental-scale impacts of climate change.
Recent Developments in Non-Fermi Liquid Theory
NASA Astrophysics Data System (ADS)
Lee, Sung-Sik
2018-03-01
Non-Fermi liquids are unconventional metals whose physical properties deviate qualitatively from those of noninteracting fermions due to strong quantum fluctuations near Fermi surfaces. They arise when metals are subject to singular interactions mediated by soft collective modes. In the absence of well-defined quasiparticles, universal physics of non-Fermi liquids is captured by interacting field theories which replace Landau Fermi liquid theory. However, it has been difficult to understand their universal low-energy physics due to a lack of theoretical methods that take into account strong quantum fluctuations in the presence of abundant low-energy degrees of freedom. In this review, we discuss two approaches that have been recently developed for non-Fermi liquid theory with emphasis on two space dimensions. The first is a perturbative scheme based on a dimensional regularization, which achieves a controlled access to the low-energy physics by tuning the codimension of Fermi surface. The second is a nonperturbative approach which treats the interaction ahead of the kinetic term through a non-Gaussian scaling called interaction-driven scaling. Examples of strongly coupled non-Fermi liquids amenable to exact treatments through the interaction-driven scaling are discussed.
Tourre, Yves M.; Lacaux, Jean-Pierre; Vignolles, Cécile; Lafaye, Murielle
2009-01-01
Background Climate and environment vary across many spatio-temporal scales, including that of climate change, with impacts on ecosystems, vector-borne diseases and public health worldwide. Objectives To develop a conceptual approach by mapping climatic and environmental conditions from space and studying their linkages with Rift Valley Fever (RVF) epidemics in Senegal. Design Ponds in which mosquitoes could thrive were identified from remote sensing using high-resolution SPOT-5 satellite images. Additional data on pond dynamics and rainfall events (obtained from the Tropical Rainfall Measuring Mission) were combined with hydrological in-situ data. The localisation of vulnerable hosts such as penned cattle (from the QuickBird satellite) was also used. Results The dynamic spatio-temporal distribution of Aedes vexans density (one of the main RVF vectors) is based on the total rainfall amount and pond dynamics. While Zones Potentially Occupied by Mosquitoes are mapped, detailed risk areas, i.e. zones where hazards and vulnerability co-occur, are expressed as percentages of areas where cattle are potentially exposed to mosquito bites. Conclusions This new conceptual approach, using precise remote-sensing techniques, relies simply upon rainfall distribution, also evaluated from space. It is meant to contribute to the implementation of operational early warning systems for RVF based on both natural and anthropogenic climatic and environmental changes. In a climate change context, this approach could also be applied to other vector-borne diseases and places worldwide. PMID:20052381
Fritz London and the scale of quantum mechanisms
NASA Astrophysics Data System (ADS)
Monaldi, Daniela
2017-11-01
Fritz London's seminal idea of "quantum mechanisms of macroscopic scale", first articulated in 1946, was the unanticipated result of two decades of research, during which London pursued quantum-mechanical explanations of various kinds of systems of particles at different scales. He started at the microphysical scale with the hydrogen molecule, generalized his approach to chemical bonds and intermolecular forces, then turned to macrophysical systems like superconductors and superfluid helium. Along this path, he formulated a set of concepts (the quantum mechanism of exchange, the rigidity of the wave function, the role of quantum statistics in multi-particle systems, the possibility of order in momentum space) that eventually coalesced into a new conception of systems of equal particles. In particular, it was London's clarification of Bose-Einstein condensation that enabled him to formulate the notion of superfluids, and led him to the recognition that quantum mechanics was not, as was commonly assumed, relevant exclusively as a micromechanics.
Survey on large scale system control methods
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1987-01-01
The problems inherent to large scale systems, such as power networks, communication networks, and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put these dynamical systems into the category of large scale systems, and tools specific to this class of systems are being sought to design control systems that can guarantee greater stability and better performance. Among several survey papers, a thorough investigation of decentralized control methods was identified; especially helpful was its classification of the different existing approaches for dealing with large scale systems. A very similar classification is used here, even though the papers surveyed differ somewhat from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.
A multi-sensor remote sensing approach for measuring primary production from space
NASA Technical Reports Server (NTRS)
Gautier, Catherine
1989-01-01
It is proposed to develop a multi-sensor remote sensing method for computing marine primary productivity from space, based on the capability to measure the primary ocean variables which regulate photosynthesis. The three variables and the sensors which measure them are: (1) downwelling photosynthetically available irradiance, measured by the VISSR sensor on the GOES satellite; (2) sea-surface temperature, from the AVHRR sensor on the NOAA series satellites; and (3) chlorophyll-like pigment concentration, from the Nimbus-7/CZCS sensor. These and other measured variables would be combined within empirical or analytical models to compute primary productivity. With this proposed capability of mapping primary productivity on a regional scale, we could begin to realize a more precise and accurate global assessment of its magnitude and variability. Applications would include supplementing and extending, on the horizontal scale, ship-acquired biological data (which is more accurate and supplies the vertical component of the field); monitoring the oceanic response to increased atmospheric carbon dioxide levels; correlation with observed sedimentation patterns and processes; and fisheries management.
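As a hedged illustration of how such satellite variables might be combined, the sketch below applies a generic saturating photosynthesis-irradiance curve to the three quantities named in the abstract. The functional form, the parameter e_k and the coefficients in p_max are hypothetical placeholders for illustration only, not the empirical or analytical models the proposal refers to.

```python
import numpy as np

def primary_productivity(chl, par, sst, e_k=75.0):
    """Illustrative empirical estimate of primary productivity
    (mg C m^-3 d^-1) from satellite-derived variables.

    chl : chlorophyll-like pigment concentration (mg m^-3), e.g. from CZCS
    par : photosynthetically available irradiance (mol photons m^-2 d^-1),
          e.g. from VISSR
    sst : sea-surface temperature (deg C), e.g. from AVHRR
    e_k : hypothetical light-saturation parameter
    """
    # Temperature-dependent maximum assimilation number (hypothetical
    # exponential form; a real model would fit this to ship-based data).
    p_max = 2.0 * np.exp(0.06 * sst)
    # Saturating photosynthesis-irradiance response (Webb-type curve).
    light_term = 1.0 - np.exp(-par / e_k)
    return chl * p_max * light_term

# Example: a 2x2 "scene" of co-located satellite pixels.
chl = np.array([[0.2, 1.5], [0.8, 3.0]])
par = np.array([[40.0, 35.0], [50.0, 20.0]])
sst = np.array([[18.0, 12.0], [22.0, 8.0]])
print(primary_productivity(chl, par, sst))
```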
NASA Astrophysics Data System (ADS)
Matsuoka, Seikichi; Idomura, Yasuhiro; Satake, Shinsuke
2017-10-01
The neoclassical toroidal viscosity (NTV) caused by a non-axisymmetric magnetic field perturbation is numerically studied using two global kinetic simulations with different numerical approaches. Both simulations reproduce similar collisionality (νb*) dependencies over wide νb* ranges. It is demonstrated that resonant structures in the velocity space predicted by the conventional superbanana-plateau theory exist in the small banana width limit, while the resonances diminish when the banana width becomes large. It is also found that fine scale structures are generated in the velocity space as νb* decreases in the large banana width simulations, leading to the νb*-dependency of the NTV. From analyses of the particle orbit, it is found that a finite k∥ mode structure along the bounce motion appears owing to the finite orbit width, and that it suffers from bounce phase mixing, suggesting that the fine scale structures are generated by a mechanism similar to the parallel phase mixing of passing particles.
Characteristics of the flow around tandem flapping wings
NASA Astrophysics Data System (ADS)
Muscutt, Luke; Ganapathisubramani, Bharathram; Weymouth, Gabriel; The University of Southampton Team
2014-11-01
Vortex recapture is a fundamental fluid mechanics phenomenon which is important to many fields. Any large scale vorticity contained within a freestream flow may affect the aerodynamic properties of a downstream body. In the case of tandem flapping wings, the front wing generates strong large scale vorticity which impinges on the hind wing. The characteristics of this interaction are greatly affected by the spacing and the phase of flapping between the front and rear wings. The interaction of the vorticity of the rear wing with the shed vorticity of the front wing may be constructive or destructive, increasing the thrust or efficiency of the hind wing when compared to a wing operating in isolation. Knowledge of the region of parameter space where the largest increases in thrust and efficiency are obtained is important for the development of tandem-wing unmanned air and underwater vehicles, commercial aerospace, and renewable energy applications. This question is addressed with a combined computational and experimental approach, and a discussion of the results is presented.
North Atlantic weather regimes: A synoptic study of phase space. M.S. Thesis
NASA Technical Reports Server (NTRS)
Orrhede, Anna Karin
1990-01-01
In the phase space of weather, low frequency variability (LFV) of the atmosphere can be captured in a large scale subspace, where a trajectory connects consecutive large scale weather maps, thus revealing flow changes and recurrences. Using this approach, Vautard applied the trajectory speed minimization method (Vautard and Legras) to atmospheric data. From 37 winters of 700 mb geopotential height anomalies over the North Atlantic and the adjacent land masses, four persistent and recurrent weather patterns, interpreted as weather regimes, were discernible: a blocking regime, a zonal regime, a Greenland anticyclone regime, and an Atlantic regime. These regimes are studied further in terms of maintenance and transitions. A regime survey unveils preferences regarding event durations and precursors for the onset or break of an event. The transition frequencies between regimes vary, and together with the transition times, suggest the existence of easier transition routes. These matters are more systematically studied using complete synoptic map sequences from a number of events.
NASA Technical Reports Server (NTRS)
Anderson, Tim; Balaban, Canan
2008-01-01
The activities presented represent a broad-based approach to advancing key hydrogen-related technologies in areas such as fuel cells, hydrogen production, distributed sensors for hydrogen-leak detection, laser instrumentation for hydrogen-leak detection, and cryogenic transport and storage. Presented are the results from research projects, education and outreach activities, and system and trade studies. The work will aid in advancing the state of the art for several critical technologies related to the implementation of a hydrogen infrastructure. The activities conducted are relevant to a number of propulsion and power systems for terrestrial, aeronautic and aerospace applications. Hydrogen storage and in-space hydrogen transport research focused on developing and verifying design concepts for efficient, safe, lightweight liquid hydrogen cryogenic storage systems. Research into hydrogen production had the specific goal of further advancing proton-conducting membrane technology in the laboratory at a larger scale. System and process trade studies evaluated the proton-conducting membrane technology, specifically its scale-up issues.
Fault Tolerant Frequent Pattern Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan
The FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and advanced MPI features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, and incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed a 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
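To make the underlying task concrete, the following toy sketch mines frequent items and item pairs with an Apriori-style pruning pass. It illustrates what frequent pattern mining computes; it is not the paper's FP-Growth algorithm, nor its MPI-based fault-tolerance and checkpointing machinery.

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(transactions, min_support):
    """Toy frequent-pattern miner: frequent items and item pairs above a
    relative support threshold (illustration of the FPM task only)."""
    n = len(transactions)
    # Pass 1: count items, keep those meeting the support threshold.
    item_counts = Counter(item for t in transactions for item in set(t))
    frequent_items = {i for i, c in item_counts.items() if c / n >= min_support}
    # Pass 2: count pairs restricted to frequent items (Apriori pruning:
    # a pair can only be frequent if both of its items are frequent).
    pair_counts = Counter()
    for t in transactions:
        kept = sorted(set(t) & frequent_items)
        pair_counts.update(combinations(kept, 2))
    frequent_pairs = {p: c for p, c in pair_counts.items() if c / n >= min_support}
    return frequent_items, frequent_pairs

txns = [["bread", "milk"], ["bread", "beer"], ["bread", "milk", "beer"], ["milk"]]
print(frequent_patterns(txns, min_support=0.5))
```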
Segmentation-based wavelet transform for still-image compression
NASA Astrophysics Data System (ADS)
Mozelle, Gerard; Seghier, Abdellatif; Preteux, Francoise J.
1996-10-01
In order to simultaneously address the two functionalities of content-based manipulation and scalability required by MPEG-4, we introduce a segmentation-based wavelet transform (SBWT). SBWT takes into account both the mathematical properties of multiresolution analysis and the flexibility of region-based approaches to image compression. The associated methodology has two stages: 1) image segmentation into convex, polygonal regions; 2) 2D wavelet transform of the signal corresponding to each region. In this paper, we mathematically study a method for constructing a multiresolution analysis (V_j(Ω))_{j∈N} adapted to a polygonal region, which provides adaptive region-based filtering. The explicit construction of scaling function, pre-wavelet and orthonormal wavelet bases defined on a polygon is carried out using the theory of Toeplitz operators. The corresponding expression can be interpreted as a localization property which allows interior and boundary scaling functions to be defined. For the orthonormal wavelets and pre-wavelets, a similar expansion is obtained by taking advantage of the properties of the orthogonal projector P_{(V_j(Ω))^⊥} from the space V_{j+1}(Ω) onto the space (V_j(Ω))^⊥. Finally, the mathematical results provide a simple and fast algorithm adapted to polygonal regions.
Millimeter-scale epileptiform spike propagation patterns and their relationship to seizures
Vanleer, Ann C; Blanco, Justin A; Wagenaar, Joost B; Viventi, Jonathan; Contreras, Diego; Litt, Brian
2016-01-01
Objective Current mapping of epileptic networks in patients prior to epilepsy surgery utilizes electrode arrays with sparse spatial sampling (∼1.0 cm inter-electrode spacing). Recent research demonstrates that sub-millimeter, cortical-column-scale domains have a role in seizure generation that may be clinically significant. We use high-resolution, active, flexible surface electrode arrays with 500 μm inter-electrode spacing to explore epileptiform local field potential (LFP) spike propagation patterns in two dimensions recorded from subdural micro-electrocorticographic signals in vivo in cat. In this study, we aimed to develop methods to quantitatively characterize the spatiotemporal dynamics of epileptiform activity at high resolution. Approach We topically administered a GABA-antagonist, picrotoxin, to induce acute neocortical epileptiform activity leading up to discrete electrographic seizures. We extracted features from LFP spikes to characterize spatiotemporal patterns in these events. We then tested the hypothesis that two-dimensional spike patterns during seizures were different from those between seizures. Main results We showed that spatially correlated events can be used to distinguish ictal versus interictal spikes. Significance We conclude that sub-millimeter-scale spatiotemporal spike patterns reveal network dynamics that are invisible to standard clinical recordings and contain information related to seizure-state. PMID:26859260
Spatio-temporal Granger causality: a new framework
Luo, Qiang; Lu, Wenlian; Cheng, Wei; Valdes-Sosa, Pedro A.; Wen, Xiaotong; Ding, Mingzhou; Feng, Jianfeng
2015-01-01
That physiological oscillations of various frequencies are present in fMRI signals is the rule, not the exception. Herein, we propose a novel theoretical framework, spatio-temporal Granger causality, which allows us to more reliably and precisely estimate the Granger causality from experimental datasets possessing time-varying properties caused by physiological oscillations. Within this framework, Granger causality is redefined as a global index measuring the directed information flow between two time series with time-varying properties. Both theoretical analyses and numerical examples demonstrate that Granger causality is a monotonically increasing function of the temporal resolution used in the estimation. This is consistent with the general principle of coarse graining, which causes information loss by smoothing out very fine-scale details in time and space. Our results confirm that the Granger causality at the finer spatio-temporal scales considerably outperforms the traditional approach in terms of an improved consistency between two resting-state scans of the same subject. To optimally estimate the Granger causality, the proposed theoretical framework is implemented through a combination of several approaches, such as dividing the optimal time window and estimating the parameters at the fine temporal and spatial scales. Taken together, our approach provides a novel and robust framework for estimating the Granger causality from fMRI, EEG, and other related data. PMID:23643924
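For reference, the quantity the framework generalizes is the classical pairwise Granger causality, estimated from the residual variances of restricted and full autoregressive fits. The sketch below is only this textbook baseline (with synthetic data); the paper's spatio-temporal, time-varying redefinition is not reproduced.

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Classical (time-invariant) Granger causality from y to x with AR
    order p: GC = ln(var(restricted residuals) / var(full residuals))."""
    n = len(x)
    lags = lambda s: np.column_stack([s[p - k - 1:n - k - 1] for k in range(p)])
    X_r = lags(x)                        # restricted model: past of x only
    X_f = np.hstack([lags(x), lags(y)])  # full model: past of x and y
    target = x[p:]
    rss = lambda X: np.var(target - X @ np.linalg.lstsq(X, target, rcond=None)[0])
    return np.log(rss(X_r) / rss(X_f))

# Synthetic test: y drives x with a one-step delay.
rng = np.random.default_rng(0)
y = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + 0.1 * rng.standard_normal()
print(granger_causality(x, y))   # clearly > 0: y Granger-causes x
print(granger_causality(y, x))   # near 0: no causality in reverse
```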
Scaling Up Graph-Based Semisupervised Learning via Prototype Vector Machines
Zhang, Kai; Lan, Liang; Kwok, James T.; Vucetic, Slobodan; Parvin, Bahram
2014-01-01
When the amount of labeled data is limited, semisupervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via ℓ1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning. PMID:25720002
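The O(n^2) baseline that such prototype methods are designed to scale up is standard graph-based label spreading on a Gaussian-affinity graph. A minimal sketch of that baseline follows (Zhou et al.-style iteration; the paper's prototype selection and approximation are not reproduced):

```python
import numpy as np

def label_propagation(X, y, alpha=0.9, sigma=1.0, iters=50):
    """Graph-based semisupervised learning via label spreading on a dense
    Gaussian-affinity graph. y holds class ids, with -1 for unlabeled."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise distances
    W = np.exp(-d2 / (2 * sigma ** 2))                    # graph affinity
    np.fill_diagonal(W, 0.0)
    D = W.sum(1)
    S = W / np.sqrt(np.outer(D, D))                       # D^-1/2 W D^-1/2
    classes = np.unique(y[y >= 0])
    Y = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    F = Y.copy()
    for _ in range(iters):                 # F <- alpha*S*F + (1-alpha)*Y
        F = alpha * S @ F + (1 - alpha) * Y
    return classes[F.argmax(1)]

# Two Gaussian blobs, one labeled point per class.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, .3, (30, 2)), rng.normal(2, .3, (30, 2))])
y = -np.ones(60, dtype=int); y[0] = 0; y[30] = 1
print(label_propagation(X, y))
```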
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Jingfeng; Zhuang, Qianlai; Baldocchi, Dennis D.
Eddy covariance flux towers provide continuous measurements of net ecosystem carbon exchange (NEE) for a wide range of climate and biome types. However, these measurements only represent the carbon fluxes at the scale of the tower footprint. To quantify the net exchange of carbon dioxide between the terrestrial biosphere and the atmosphere for regions or continents, flux tower measurements need to be extrapolated to these large areas. Here we used remotely sensed data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board the National Aeronautics and Space Administration's (NASA) Terra satellite to scale up AmeriFlux NEE measurements to the continental scale. We first combined MODIS and AmeriFlux data for representative U.S. ecosystems to develop a predictive NEE model using a modified regression tree approach. The predictive model was trained and validated using eddy flux NEE data over the periods 2000-2004 and 2005-2006, respectively. We found that the model predicted NEE well (r = 0.73, p < 0.001). We then applied the model to the continental scale and estimated NEE for each 1 km x 1 km cell across the conterminous U.S. for each 8-day interval in 2005 using spatially explicit MODIS data. The model generally captured the expected spatial and seasonal patterns of NEE as determined from measurements and the literature. Our study demonstrated that our empirical approach is effective for scaling up eddy flux NEE measurements to the continental scale and producing wall-to-wall NEE estimates across multiple biomes. Our estimates may provide a dataset independent from simulations with biogeochemical models and inverse modeling approaches for examining the spatiotemporal patterns of NEE and constraining terrestrial carbon budgets over large areas.
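The train-on-towers, predict-on-grid workflow can be sketched in a few lines. Here a random forest and synthetic stand-ins take the place of the authors' modified regression tree and the real AmeriFlux/MODIS data; variable names (evi, lst) are hypothetical example predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins: rows = tower-site 8-day samples, columns = remotely
# sensed predictors; nee = tower-measured flux with noise.
rng = np.random.default_rng(0)
evi = rng.uniform(0.1, 0.8, 500)
lst = rng.uniform(270, 310, 500)
nee = -8.0 * evi + 0.05 * (lst - 290) + rng.normal(0, 0.5, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.column_stack([evi, lst]), nee)    # train at tower footprints

# "Wall-to-wall" prediction on a gridded scene (toy 100x100 grid standing
# in for 1 km x 1 km cells across a continent).
grid_evi = rng.uniform(0.1, 0.8, (100, 100))
grid_lst = rng.uniform(270, 310, (100, 100))
grid_X = np.column_stack([grid_evi.ravel(), grid_lst.ravel()])
nee_map = model.predict(grid_X).reshape(100, 100)
print(nee_map.mean())
```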
Butler, Samuel D; Nauyoks, Stephen E; Marciniak, Michael A
2015-06-01
Of the many classes of bidirectional reflectance distribution function (BRDF) models, two popular classes are the microfacet model and the linear systems diffraction model. The microfacet model has the benefit of speed and simplicity, as it uses geometric optics approximations, while linear systems theory uses a diffraction approach to compute the BRDF, at the expense of greater computational complexity. In this Letter, nongrazing BRDF measurements of rough and polished surface-reflecting materials at multiple incident angles are scaled by the microfacet cross section conversion term, but in the linear systems direction cosine space, resulting in close alignment of the BRDF data at various incident angles in this space. This results in a predictive BRDF model for surface-reflecting materials at nongrazing angles, while avoiding some of the computational complexities of the linear systems diffraction model.
Flame Synthesis Of Single-Walled Carbon Nanotubes And Nanofibers
NASA Technical Reports Server (NTRS)
Wal, Randy L. Vander; Berger, Gordon M.; Ticich, Thomas M.
2003-01-01
Carbon nanotubes are widely sought for a variety of applications including gas storage, intercalation media, catalyst support and composite reinforcing material [1]. Each of these applications will require large scale quantities of CNTs. A second consideration is that some of these applications may require redispersal of the collected CNTs and attachment to a support structure. If the CNTs could be synthesized directly upon the support to be used in the end application, a tremendous savings in post-synthesis processing could be realized. We have therefore pursued both aerosol and supported catalyst synthesis of CNTs. Given space limitations, only the aerosol portion of the work is outlined here, though results from both thrusts will be presented during the talk. Aerosol methods of SWNT, MWNT or nanofiber synthesis hold the promise of large-scale production to supply the tonnage quantities these applications will require. Aerosol methods may potentially permit control of the catalyst particle size, offer continuous processing, provide the highest product purity and, most importantly, are scalable. Only via economy of scale will the cost of CNTs become low enough to realize the large-scale structural and power applications both on Earth and in space. Present aerosol methods for SWNT synthesis include laser ablation of composite metal-graphite targets or thermal decomposition/pyrolysis of a sublimed or vaporized organometallic [2]. Both approaches, conducted within a high temperature furnace, have produced single-walled nanotubes (SWNTs). The former method requires sophisticated hardware and is inherently limited by the energy deposition that can be realized using pulsed laser light. The latter method, using expensive organometallics, is difficult to control for SWNT synthesis given the range of gas-particle mixing conditions along variable temperature gradients; multi-walled nanotubes (MWNTs) are a far more likely end product. Both approaches require large energy expenditures and produce CNTs at prohibitive costs, around $500 per gram. Moreover, these approaches do not possess demonstrated scalability. In contrast to these approaches, flame synthesis can be a very energy efficient, low-cost process [3]; a portion of the fuel serves as the heating source while the remainder serves as reactant. Moreover, flame systems are geometrically versatile, as illustrated by innumerable boiler and furnace designs. Addressing scalability, flame systems are commercially used for producing megatonnage quantities of carbon black [4]. Although it presents a complex chemically reacting flow, a flame also offers many variables for control, e.g. temperature, chemical environment and residence times [5]. Despite these advantages, there are challenges to scaling flame synthesis as well.
A rapid local singularity analysis algorithm with applications
NASA Astrophysics Data System (ADS)
Chen, Zhijun; Cheng, Qiuming; Agterberg, Frits
2015-04-01
The local singularity model developed by Cheng is fast gaining popularity in characterizing mineralization and detecting anomalies in geochemical, geophysical and remote sensing data. However, one of the conventional algorithms, which involves computing moving average values at different scales, is time-consuming, especially when analyzing a large dataset. The summed area table (SAT), also called an integral image, is a fast algorithm used within the Viola-Jones object detection framework in the computer vision field. Historically, the principle of the SAT is well known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (the area under the probability distribution) from the respective cumulative distribution functions. In this study we introduce the SAT and its variant, the Rotated Summed Area Table, into isotropic, anisotropic or directional local singularity mapping. Once the SAT has been computed, any rectangular sum can be obtained at any scale or location in constant time. The sum for any rectangular region in the image can be computed using only 4 array accesses, independently of the size of the region, effectively reducing the time complexity from O(n) to O(1). New programs in Python, Julia, Matlab and C++ are implemented to suit different applications, especially big-data analysis. Several large geochemical and remote sensing datasets are tested. A wide variety of scale changes (linear spacing or log spacing), for both non-iterative and iterative approaches, are adopted to calculate the singularity index values and compare the results. The results indicate that local singularity analysis with the SAT is more robust than, and superior to, the traditional approach in identifying anomalies.
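The SAT trick itself fits in a few lines; the sketch below shows the constant-time box sum that makes the multi-scale moving averages cheap (a generic illustration, not the authors' singularity-mapping programs):

```python
import numpy as np

def summed_area_table(img):
    """Integral image: sat[i, j] = sum of img[:i+1, :j+1]."""
    return img.cumsum(0).cumsum(1)

def box_sum(sat, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] using at most 4 array accesses (O(1)),
    independent of window size -- the property exploited for fast
    multi-scale moving averages in local singularity analysis."""
    total = sat[r1, c1]
    if r0 > 0: total -= sat[r0 - 1, c1]
    if c0 > 0: total -= sat[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += sat[r0 - 1, c0 - 1]
    return total

img = np.arange(16.0).reshape(4, 4)
sat = summed_area_table(img)
assert box_sum(sat, 1, 1, 2, 2) == img[1:3, 1:3].sum()
print(box_sum(sat, 1, 1, 2, 2))   # 5 + 6 + 9 + 10 = 30
```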
Critical Fluctuations in Cortical Models Near Instability
Aburn, Matthew J.; Holmes, C. A.; Roberts, James A.; Boonstra, Tjeerd W.; Breakspear, Michael
2012-01-01
Computational studies often proceed from the premise that cortical dynamics operate in a linearly stable domain, where fluctuations dissipate quickly and show only short memory. Studies of human electroencephalography (EEG), however, have shown significant autocorrelation at time lags on the scale of minutes, indicating the need to consider regimes where non-linearities influence the dynamics. Statistical properties such as increased autocorrelation length, increased variance, power law scaling, and bistable switching have been suggested as generic indicators of the approach to bifurcation in non-linear dynamical systems. We study temporal fluctuations in a widely-employed computational model (the Jansen–Rit model) of cortical activity, examining the statistical signatures that accompany bifurcations. Approaching supercritical Hopf bifurcations through tuning of the background excitatory input, we find a dramatic increase in the autocorrelation length that depends sensitively on the direction in phase space of the input fluctuations and hence on which neuronal subpopulation is stochastically perturbed. Similar dependence on the input direction is found in the distribution of fluctuation size and duration, which show power law scaling that extends over four orders of magnitude at the Hopf bifurcation. We conjecture that the alignment in phase space between the input noise vector and the center manifold of the Hopf bifurcation is directly linked to these changes. These results are consistent with the possibility of statistical indicators of linear instability being detectable in real EEG time series. However, even in a simple cortical model, we find that these indicators may not necessarily be visible even when bifurcations are present because their expression can depend sensitively on the neuronal pathway of incoming fluctuations. PMID:22952464
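The "increased autocorrelation length near instability" indicator can be demonstrated on a much simpler surrogate than the Jansen-Rit model: a linear Ornstein-Uhlenbeck process whose decay rate lam plays the role of distance to the bifurcation. This sketch only illustrates critical slowing down in general, not the model or the direction-dependence studied in the paper.

```python
import numpy as np

def ou_autocorr_time(lam, sigma=1.0, dt=0.01, n=100_000, seed=0):
    """Simulate dx = -lam*x dt + sigma dW (Euler-Maruyama) and estimate the
    autocorrelation time by integrating the normalized autocorrelation
    function up to its first zero crossing. Theory: ACF(t) = exp(-lam*t),
    so the autocorrelation time is 1/lam."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n - 1)
    for t in range(n - 1):
        x[t + 1] = x[t] * (1.0 - lam * dt) + noise[t]
    x -= x.mean()
    f = np.fft.rfft(x, 2 * n)                      # FFT-based autocorrelation
    acf = np.fft.irfft(f * np.conjugate(f))[:n]
    acf /= acf[0]
    first_zero = np.argmax(acf < 0)
    return acf[:first_zero].sum() * dt

# Critical slowing down: as lam -> 0 (approach to instability), the
# autocorrelation time ~ 1/lam grows sharply.
for lam in (2.0, 0.5, 0.1):
    print(lam, ou_autocorr_time(lam))   # roughly 0.5, 2, 10
```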
The Geospace Dynamics Observatory; A Paradigm Changing Geospace Mission
NASA Technical Reports Server (NTRS)
Spann, James; Reardon, Patrick J.; Pitalo, Ken; Stahl, Phil; Hopkins, Randall
2013-01-01
The Geospace Dynamics Observatory (GDO) mission observes the near-Earth region of space called geospace with unprecedented resolution, scale and sensitivity. At a distance of 60 Earth radii (Re) in a near-polar circular orbit with an approximately 27-day period, GDO images the Earth's full disk with (1) a three-channel far-ultraviolet imager, (2) an extreme-ultraviolet imager of the plasmasphere, and (3) a spectrometer in the near- to far-ultraviolet range that probes any portion of the disk and simultaneously observes the limb. The exceptional capabilities of the GDO mission include (1) an unprecedented improvement in signal to noise for global-scale imaging of Earth's space environment, enabling changes in the Earth's space environment to be resolved with orders of magnitude higher temporal and spatial resolution compared to existing data and other approaches, and (2) an unrivaled capability for resolving the temporal evolution, over many days, in local time or latitude, with a continuous view of Earth's global-scale evolution while simultaneously capturing changes at scales smaller than are possible with other methods. This combination of new capabilities is a proven path to major scientific advances and discoveries. The GDO mission (1) provides the first full-disk imagery of the density and composition variability that exists during disturbed "storm" periods and of the circulation systems of the upper atmosphere, (2) is able to image the ionosphere on a global and long-time-scale basis, (3) is able to probe the mechanisms that control the evolution of planetary atmospheres, and (4) is able to test our understanding of how the Earth is connected to the Sun. This paper explores the optical and technical aspects of the GDO mission and the implementation strategy. Additionally, the case is made that GDO addresses a significant portion of the priority mission science articulated in the recent Solar and Space Physics Decadal Survey.
A "Stepping Stone" Approach for Obtaining Quantum Free Energies of Hydration.
Sampson, Chris; Fox, Thomas; Tautermann, Christofer S; Woods, Christopher; Skylaris, Chris-Kriton
2015-06-11
We present a method which uses DFT (quantum, QM) calculations to improve free energies of binding computed with classical force fields (classical, MM). To overcome the incomplete overlap of configurational spaces between MM and QM, we use a hybrid Monte Carlo approach to quickly generate correct ensembles of structures of intermediate states between an MM and a QM/MM description, hence taking into account a large fraction of the electronic polarization of the quantum system, while being able to use thermodynamic integration to compute the free energy of the transition between MM and QM/MM. Then, we perform a final transition from QM/MM to full QM using a one-step free energy perturbation approach. By using QM/MM as a stepping stone toward the full QM description, we find very small convergence errors (<1 kJ/mol) in the transition to full QM. We apply this method to compute hydration free energies, and we obtain consistent improvements over the MM values for all molecules we used in this study. This approach requires large-scale DFT calculations, as the full QM systems involved the ligands and all waters in their simulation cells, so the linear-scaling DFT code ONETEP was used for these calculations.
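The one-step free energy perturbation step is the classic Zwanzig estimator, which only needs energy differences evaluated on configurations sampled in the reference ensemble. A minimal sketch with toy numbers (the Gaussian "gaps" below are hypothetical, standing in for U_QM - U_QM/MM evaluations):

```python
import numpy as np

def fep_zwanzig(delta_u, kT=2.479):   # kT in kJ/mol near 298 K
    """One-step free energy perturbation (Zwanzig):
    dF = -kT * ln < exp(-(U_target - U_ref)/kT) >_ref,
    with delta_u evaluated on reference-ensemble configurations
    (here: QM/MM reference -> full QM target)."""
    delta_u = np.asarray(delta_u)
    a = (-delta_u / kT).max()          # log-sum-exp for numerical stability
    return -kT * (a + np.log(np.mean(np.exp(-delta_u / kT - a))))

# Toy example: small, narrowly distributed energy gaps (good phase-space
# overlap, as the stepping-stone strategy is designed to achieve).
rng = np.random.default_rng(0)
gaps = rng.normal(1.0, 0.5, 2000)      # hypothetical energy differences
print(fep_zwanzig(gaps))   # for Gaussian gaps: ~ mean - var/(2 kT) = 0.95
```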
Control Coordination of Large Scale Hereditary Systems.
1985-07-01
Family System of Advanced Charring Ablators for Planetary Exploration Missions
NASA Technical Reports Server (NTRS)
Congdon, William M.; Curry, Donald M.
2005-01-01
Advanced Ablators Program Objectives: 1) Flight-ready (TRL-6) ablative heat shields for deep-space missions; 2) Diversity of selection from family-system approach; 3) Minimum weight systems with high reliability; 4) Optimized formulations and processing; 5) Fully characterized properties; and 6) Low-cost manufacturing. Definition and integration of candidate lightweight structures. Test and analysis database to support flight-vehicle engineering. Results from production scale-up studies and production-cost analyses.
NASA Technical Reports Server (NTRS)
Lin, N. J.; Quinn, R. D.
1991-01-01
A locally-optimal trajectory management (LOTM) approach is analyzed, and it is found that care should be taken in choosing the Ritz expansion and cost function. A modified cost function for the LOTM approach is proposed which includes the kinetic energy along with the base reactions in a weighted and scaled sum. The effects of the modified cost function are demonstrated with numerical examples for robots operating in two- and three-dimensional space. It is pointed out that this modified LOTM approach shows good performance: the reactions do not fluctuate greatly, joint velocities reach their objectives at the end of the manipulation, and the CPU time is slightly more than twice the manipulation time.
The efficacy of student-centered instruction in supporting science learning.
Granger, E M; Bevis, T H; Saka, Y; Southerland, S A; Sampson, V; Tate, R L
2012-10-05
Transforming science learning through student-centered instruction that engages students in a variety of scientific practices is central to national science-teaching reform efforts. Our study employed a large-scale, randomized-cluster experimental design to compare the effects of student-centered and teacher-centered approaches on elementary school students' understanding of space-science concepts. Data included measures of student characteristics and learning and teacher characteristics and fidelity to the instructional approach. Results reveal that learning outcomes were higher for students enrolled in classrooms engaging in scientific practices through a student-centered approach; two moderators were identified. A statistical search for potential causal mechanisms for the observed outcomes uncovered two potential mediators: students' understanding of models and evidence and the self-efficacy of teachers.
Demonstration of the James Webb Space Telescope commissioning on the JWST testbed telescope
NASA Astrophysics Data System (ADS)
Acton, D. Scott; Towell, Timothy; Schwenker, John; Swensen, John; Shields, Duncan; Sabatke, Erin; Klingemann, Lana; Contos, Adam R.; Bauer, Brian; Hansen, Karl; Atcheson, Paul D.; Redding, David; Shi, Fang; Basinger, Scott; Dean, Bruce; Burns, Laura
2006-06-01
The one-meter Testbed Telescope (TBT) has been developed at Ball Aerospace to facilitate the design and implementation of the wavefront sensing and control (WFS&C) capabilities of the James Webb Space Telescope (JWST). The TBT is used to develop and verify the WFS&C algorithms, check the communication interfaces, validate the WFS&C optical components and actuators, and provide risk reduction opportunities for test approaches for later full-scale cryogenic vacuum testing of the observatory. In addition, the TBT provides a vital opportunity to demonstrate the entire WFS&C commissioning process. This paper describes recent WFS&C commissioning experiments that have been performed on the TBT.
Stochastic inflation in phase space: is slow roll a stochastic attractor?
NASA Astrophysics Data System (ADS)
Grain, Julien; Vennin, Vincent
2017-05-01
An appealing feature of inflationary cosmology is the presence of a phase-space attractor, ``slow roll'', which washes out the dependence on initial field velocities. We investigate the robustness of this property under backreaction from quantum fluctuations using the stochastic inflation formalism in the phase-space approach. A Hamiltonian formulation of stochastic inflation is presented, where it is shown that the coarse-graining procedure—where wavelengths smaller than the Hubble radius are integrated out—preserves the canonical structure of free fields. This means that different sets of canonical variables give rise to the same probability distribution which clarifies the literature with respect to this issue. The role played by the quantum-to-classical transition is also analysed and is shown to constrain the coarse-graining scale. In the case of free fields, we find that quantum diffusion is aligned in phase space with the slow-roll direction. This implies that the classical slow-roll attractor is immune to stochastic effects and thus generalises to a stochastic attractor regardless of initial conditions, with a relaxation time at least as short as in the classical system. For non-test fields or for test fields with non-linear self interactions however, quantum diffusion and the classical slow-roll flow are misaligned. We derive a condition on the coarse-graining scale so that observational corrections from this misalignment are negligible at leading order in slow roll.
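For orientation, the standard (overdamped) Langevin equation of stochastic inflation that the paper generalizes to phase space can be integrated in a few lines with Euler-Maruyama. Everything below is a sketch of that textbook single-field formalism in Planck units with a quadratic test potential; the mass value is a hypothetical placeholder, and the paper's Hamiltonian (phi, momentum) formulation is not reproduced.

```python
import numpy as np

# Overdamped stochastic inflation in e-fold time N (Planck units):
#   dphi/dN = -V'(phi)/(3 H^2) + (H / 2 pi) xi(N),   with H^2 = V(phi)/3
m = 1e-5                                   # hypothetical inflaton mass
V  = lambda phi: 0.5 * m**2 * phi**2
dV = lambda phi: m**2 * phi

def evolve(phi0, n_efolds=30.0, dN=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    phi = phi0
    for _ in range(int(n_efolds / dN)):
        H = np.sqrt(V(phi) / 3.0)
        drift = -dV(phi) / (3.0 * H**2)            # classical slow roll
        kick = (H / (2.0 * np.pi)) * np.sqrt(dN)   # quantum diffusion
        phi += drift * dN + kick * rng.standard_normal()
    return phi

phi0, N = 15.0, 30.0
classical = np.sqrt(phi0**2 - 4.0 * N)   # exact drift-only solution for this V
samples = [evolve(phi0, N, seed=s) for s in range(5)]
print(classical, samples)   # tiny stochastic scatter around the drift solution
```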
The many faces of population density.
Mayor, Stephen J; Schaefer, James A
2005-09-01
Population density, one of the most fundamental demographic attributes, may vary systematically with spatial scale, but this scale-sensitivity is incompletely understood. We used a novel approach, based on fully censused and mapped distributions of eastern grey squirrel (Sciurus carolinensis) dreys, beaver (Castor canadensis) lodges, and moose (Alces alces), to explore the scale-dependence of population density and its relationship to landscape features. We identified population units at several scales, both objectively, using cluster analysis, and arbitrarily, using artificial bounds centred on high-abundance sites. Densities declined with census area. For dreys, this relationship was stronger in objective versus arbitrary population units. Drey density was inconsistently related to patch area, a relationship that was positive for all patches but negative when non-occupied patches were excluded. Drey density was negatively related to the proportion of green-space and positively related to the density of buildings or roads, relationships that were accentuated at coarser scales. Mean drey densities were more sensitive to scale when calculated as organism-weighted versus area-weighted averages. Greater understanding of these scaling effects is required to facilitate comparisons of population density across studies.
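Why density declines with census area when units are centred on high-abundance sites can be shown with a toy point pattern (all data below are hypothetical, not the study's squirrel, beaver or moose censuses):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical mapped locations: a dense cluster embedded in a sparsely
# occupied landscape.
cluster = rng.normal(0.0, 0.5, (200, 2))
background = rng.uniform(-10, 10, (100, 2))
points = np.vstack([cluster, background])

# Census squares of increasing half-width centred on the abundance peak:
# density falls as the census area grows past the cluster's extent.
for half in (0.5, 1.0, 2.0, 5.0, 10.0):
    inside = (np.abs(points) <= half).all(axis=1)
    area = (2 * half) ** 2
    print(f"half-width {half:5.1f}: density = {inside.sum() / area:8.3f}")
```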
Angular scale expansion theory and the misperception of egocentric distance in locomotor space.
Durgin, Frank H
Perception is crucial for the control of action, but perception need not be scaled accurately to produce accurate actions. This paper reviews evidence for an elegant new theory of locomotor space perception that is based on the dense coding of angular declination so that action control may be guided by richer feedback. The theory accounts for why so much direct-estimation data suggests that egocentric distance is underestimated despite the fact that action measures have been interpreted as indicating accurate perception. Actions are calibrated to the perceived scale of space and thus action measures are typically unable to distinguish systematic (e.g., linearly scaled) misperception from accurate perception. Whereas subjective reports of the scaling of linear extent are difficult to evaluate in absolute terms, study of the scaling of perceived angles (which exist in a known scale, delimited by vertical and horizontal) provides new evidence regarding the perceptual scaling of locomotor space.
Fluid Physics Experiments onboard International Space Station: Through the Eyes of a Scientist.
NASA Astrophysics Data System (ADS)
Shevtsova, Valentina
Fluids are present everywhere in everyday life. They are also present as fuel, in support systems or as consumables in rockets and onboard satellites and space stations. Everyone experiences every day that fluids are very sensitive to gravity: on Earth liquids flow downwards and gases mostly rise. Nowadays much of the interest of the scientific community is on studying the phenomena at microscales in so-called microfluidic systems. However, at smaller scales the experimental investigation of convective flows becomes increasingly difficult, as the control parameter Ra scales with g·L³ (g: acceleration level, L: length scale). A unique alternative to the difficulty of investigating systems with small length scale on the ground is to reduce the gravity level g. In systems with interfaces, buoyancy forces are proportional to the volume of the liquid, while capillary forces act solely on the liquid surface. The importance of buoyancy diminishes either at very small scales or with a reduced acceleration level. Under the weightless conditions of space, where buoyancy is virtually eliminated, other mechanisms such as capillary forces, diffusion, vibration, shear forces, electrostatic and electromagnetic forces dominate the fluid behaviour. This is why research in space represents a powerful tool for scientific research in this field. Understanding how fluids work really matters, and so does measuring their properties accurately. Presently, a number of scientific laboratories, as is usual with multi-user instruments, are involved in fluid research on the ISS. The programme of fluid physics experiments on-board deals with capillary flows, diffusion, dynamics in complex fluids (foams, emulsions and granular matter), heat transfer processes with phase change, physics and physico-chemistry near or beyond the critical point, and it also extends to combustion physics. The top-level objectives of fluid research in space are as follows: (i) to investigate fluid behaviour in order to support the development of predictive models for the management of fluids and fluid mixtures on the ground as well as in space; (ii) to measure fluid properties that are either very difficult or not possible at all to measure on the ground and establish benchmarks; (iii) to exploit the absence of gravity forces to study new behaviours and implement new experimental configurations. Surely, all of you have seen movies about astronauts' work and life on the ISS. Here you will learn another approach to the ISS activity, through the eyes of an experienced scientist.
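For reference, the scaling alluded to above follows from the standard definition of the Rayleigh number (a textbook relation; the symbols β, ΔT, ν and κ, i.e. thermal expansion coefficient, temperature difference, kinematic viscosity and thermal diffusivity, are not given in the abstract):

```latex
\mathrm{Ra} \;=\; \frac{g\,\beta\,\Delta T\,L^{3}}{\nu\,\kappa}
\qquad\Longrightarrow\qquad
\mathrm{Ra} \;\propto\; g\,L^{3}
```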
Assessing equitable access to urban green space: the role of engineered water infrastructure.
Wendel, Heather E Wright; Downs, Joni A; Mihelcic, James R
2011-08-15
Urban green space and water features provide numerous social, environmental, and economic benefits, yet disparities often exist in their distribution and accessibility. This study examines the link between issues of environmental justice and urban water management to evaluate potential improvements in green space and surface water access through the revitalization of existing engineered water infrastructures, namely stormwater ponds. First, relative access to green space and water features were compared for residents of Tampa, Florida, and an inner-city community of Tampa (East Tampa). Although disparities were not found in overall accessibility between Tampa and East Tampa, inequalities were apparent when quality, diversity, and size of green spaces were considered. East Tampa residents had significantly less access to larger, more desirable spaces and water features. Second, this research explored approaches for improving accessibility to green space and natural water using three integrated stormwater management development scenarios. These scenarios highlighted the ability of enhanced water infrastructures to increase access equality at a variety of spatial scales. Ultimately, the "greening" of gray urban water infrastructures is advocated as a way to address environmental justice issues while also reconnecting residents with issues of urban water management.
The floral morphospace – a modern comparative approach to study angiosperm evolution
Chartier, Marion; Jabbour, Florian; Gerber, Sylvain; Mitteroecker, Philipp; Sauquet, Hervé; von Balthazar, Maria; Staedler, Yannick; Crane, Peter R.; Schönenberger, Jürg
2017-01-01
Summary Morphospaces are mathematical representations used for studying the evolution of morphological diversity and for the evaluation of evolved shapes among theoretically possible ones. Although widely used in zoology, they – with few exceptions – have been disregarded in plant science and in particular in the study of broad-scale patterns of floral structure and evolution. Here we provide basic information on the morphospace approach; we review earlier morphospace applications in plant science; and as a practical example, we construct and analyze a floral morphospace. Morphospaces are usually visualized with the help of ordination methods such as principal component analysis (PCA) or nonmetric multidimensional scaling (NMDS). The results of these analyses are then coupled with disparity indices that describe the spread of taxa in the space. We discuss these methods and apply modern statistical tools to the first and only angiosperm-wide floral morphospace published by Stebbins in 1951. Despite the incompleteness of Stebbins’ original dataset, our analyses highlight major, angiosperm-wide trends in the diversity of flower morphology and thereby demonstrate the power of this previously neglected approach in plant science. PMID:25539005
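The generic workflow the review describes, ordinating a taxon-by-trait matrix and summarizing the spread with a disparity index, can be sketched as follows; the trait matrix below is random placeholder data, not Stebbins' angiosperm dataset.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical taxon x floral-trait matrix (e.g., organ counts, fusion or
# symmetry scores): rows = 40 taxa, columns = 6 traits.
traits = rng.normal(size=(40, 6))

# Ordination: project the taxa into a 2D morphospace.
pca = PCA(n_components=2)
coords = pca.fit_transform(traits)
print("variance explained:", pca.explained_variance_ratio_)

# Disparity: mean pairwise distance between taxa in the morphospace,
# compared here between two (arbitrary) groups of taxa.
clade_a, clade_b = coords[:20], coords[20:]
for name, clade in [("A", clade_a), ("B", clade_b)]:
    print(f"clade {name} disparity: {pdist(clade).mean():.3f}")
```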
Corr, Philip J; Cooper, Andrew J
2016-11-01
We report the development and validation of a questionnaire measure of the revised reinforcement sensitivity theory (rRST) of personality. Starting with qualitative responses to defensive and approach scenarios modeled on typical rodent ethoexperimental situations, exploratory and confirmatory factor analyses (CFAs) revealed a robust 6-factor structure: 2 unitary defensive factors, fight-flight-freeze system (FFFS; related to fear) and the behavioral inhibition system (BIS; related to anxiety); and 4 behavioral approach system (BAS) factors (Reward Interest, Goal-Drive Persistence, Reward Reactivity, and Impulsivity). Theoretically motivated thematic facets were employed to sample the breadth of defensive space, comprising FFFS (Flight, Freeze, and Active Avoidance) and BIS (Motor Planning Interruption, Worry, Obsessive Thoughts, and Behavioral Disengagement). Based on theoretical considerations, and statistically confirmed, a separate scale for Defensive Fight was developed. Validation evidence for the 6-factor structure came from convergent and discriminant validity shown by correlations with existing personality scales. We offer the Reinforcement Sensitivity Theory of Personality Questionnaire to facilitate future research specifically on rRST and, more broadly, on approach-avoidance theories of personality. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Synchronicity in predictive modelling: a new view of data assimilation
NASA Astrophysics Data System (ADS)
Duane, G. S.; Tribbia, J. J.; Weiss, J. B.
2006-11-01
The problem of data assimilation can be viewed as one of synchronizing two dynamical systems, one representing "truth" and the other representing "model", with a unidirectional flow of information between the two. Synchronization of truth and model defines a general view of data assimilation, as machine perception, that is reminiscent of the Jung-Pauli notion of synchronicity between matter and mind. The dynamical systems paradigm of the synchronization of a pair of loosely coupled chaotic systems is expected to be useful because quasi-2D geophysical fluid models have been shown to synchronize when only medium-scale modes are coupled. The synchronization approach is equivalent to standard approaches based on least-squares optimization, including Kalman filtering, except in highly non-linear regions of state space where observational noise links regimes with qualitatively different dynamics. The synchronization approach is used to calculate covariance inflation factors from parameters describing the bimodality of a one-dimensional system. The factors agree in overall magnitude with those used in operational practice on an ad hoc basis. The calculation is robust against the introduction of stochastic model error arising from unresolved scales.
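The synchronization view can be demonstrated with the classic toy of two unidirectionally coupled Lorenz systems: the "model" is nudged toward one observed variable of the "truth", and the state error decays. This is only a low-dimensional stand-in, with an assumed nudging strength, for the quasi-2D geophysical fluid models discussed above.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, k = 0.005, 20.0                       # k: assumed nudging strength
truth = np.array([1.0, 1.0, 1.0])
model = np.array([8.0, -5.0, 30.0])       # badly initialized "model"

for step in range(8001):
    if step % 2000 == 0:
        print(step, np.abs(truth - model).max())  # error shrinks: sync
    truth = truth + dt * lorenz(truth)
    # Unidirectional information flow: the model is nudged toward the
    # observed x-component of the truth (truth is unaffected by the model).
    nudge = np.array([k * (truth[0] - model[0]), 0.0, 0.0])
    model = model + dt * (lorenz(model) + nudge)
```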
NASA Astrophysics Data System (ADS)
Danesh-Yazdi, Mohammad; Botter, Gianluca; Foufoula-Georgiou, Efi
2017-05-01
Lack of hydro-bio-chemical data at subcatchment scales necessitates adopting an aggregated system approach for estimating water and solute transport properties, such as residence and travel time distributions, at the catchment scale. In this work, we show that within-catchment spatial heterogeneity, as expressed in spatially variable discharge-storage relationships, can be appropriately encapsulated within a lumped time-varying stochastic Lagrangian formulation of transport. This time (variability) for space (heterogeneity) substitution yields mean travel times (MTTs) that are not significantly biased by the aggregation of spatial heterogeneity. Despite the significant variability of the MTT at small spatial scales, there exists a characteristic scale above which the MTT is not impacted by the aggregation of spatial heterogeneity. Extensive simulations of randomly generated river networks reveal that the ratio between the characteristic scale and the mean incremental area is, on average, independent of river network topology and the spatial arrangement of incremental areas.
Universal sequence map (USM) of arbitrary discrete sequences
2002-01-01
Background For over a decade the idea of representing biological sequences in a continuous coordinate space has maintained its appeal but not been fully realized. The basic idea is that any sequence of symbols may define trajectories in the continuous space conserving all its statistical properties. Ideally, such a representation would allow scale-independent sequence analysis – without the context of fixed memory length. A simple example would consist of being able to infer the homology between two sequences solely by comparing the coordinates of any two homologous units. Results We have successfully identified such an iterative function for the bijective mapping ψ of discrete sequences into objects of a continuous state space that enables scale-independent sequence analysis. The technique, named Universal Sequence Mapping (USM), is applicable to sequences with an arbitrary length and arbitrary number of unique units and generates a representation where map distance estimates sequence similarity. The novel USM procedure is based on earlier work by these and other authors on the properties of Chaos Game Representation (CGR). The latter enables the representation of 4-unit-type sequences (like DNA) as an order-free Markov chain transition table. The properties of USM are illustrated with test data and can be verified for other data by using the accompanying web-based tool: http://bioinformatics.musc.edu/~jonas/usm/. Conclusions USM is shown to enable a statistical mechanics approach to sequence analysis. The scale-independent representation frees sequence analysis from the need to assume a memory length in the investigation of syntactic rules. PMID:11895567
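The 4-letter special case that USM generalizes, CGR, is an iterated midpoint map and fits in a few lines. The sketch below uses one common corner assignment for DNA and demonstrates the key property: sequences sharing a suffix land at nearby coordinates regardless of what preceded it, which is what lets map distance estimate similarity without a fixed memory length.

```python
import numpy as np

# One common corner assignment for DNA (conventions vary).
CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def cgr(seq):
    """Chaos Game Representation: iterated midpoint map
    pos_{n+1} = (pos_n + corner(s_n)) / 2, starting from the center."""
    pos = np.array([0.5, 0.5])
    path = []
    for base in seq:
        pos = (pos + np.asarray(CORNERS[base], float)) / 2.0
        path.append(pos.copy())
    return np.array(path)

# Two sequences with different prefixes but the same 7-symbol suffix:
a = cgr("TTTTTGATTACA")[-1]
b = cgr("CCCCCGATTACA")[-1]
print(a, b, np.abs(a - b).max())   # < 2**-7, since the last 7 symbols agree
```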
NASA Technical Reports Server (NTRS)
Mankins, John C.; Mazanek, Daniel D.
2001-01-01
The safe, affordable and effective transfer of ever-larger payloads and eventually personnel beyond Low Earth Orbit (LEO) is a major challenge facing future commercial development and human exploration of space. Without reusable systems, sustained exploration or large scale development beyond LEO appears to be economically non-viable. However, reusable systems must be capable of both good fuel efficiency and "high utilization of capacity", or else economic costs will remain unacceptably high. Various options exist that can provide high fuel efficiency - for example, Solar Electric Propulsion Systems (SEPS) - but only at the cost of low thrust and concomitant long transit times. Chemical propulsion systems offer the potential for high thrust and short transit times - including both cryogenic and non-cryogenic options - but only at the cost of relatively low specific impulse (Isp). Nuclear thermal propulsion systems offer relatively good thrust-to-weight and Isp - but involve public concerns that may be insurmountable for all except the most-critical of public purposes. Fixed infrastructures have been suggested as one approach to solving this challenge; for example, rotating tether approaches. However, these systems tend to suffer from high initial costs or unacceptable operational constraints. A new concept has been identified - the Hybrid Propellant Module (HPM) - that integrates the best features of both chemical and solar electric transportation architectures. The HPM approach appears to hold promise of solving the issues associated with other approaches, opening a new family of capabilities for future space exploration and development of near-Earth space and beyond. This paper provides a summary overview of the challenge of Earth neighborhood transportation and discusses how various systems concepts might be applied to meet the needs of these architectures. The paper describes a new approach, the HPM, and illustrates the application of the concept for a typical mission concept. The paper concludes with a discussion of needed technologies and a possible timeline for the development and evolution of this class of systems concepts.
Mapping the landscape of metabolic goals of a cell
Zhao, Qi; Stettner, Arion I.; Reznik, Ed; ...
2016-05-23
Here, genome-scale flux balance models of metabolism provide testable predictions of all metabolic rates in an organism, by assuming that the cell is optimizing a metabolic goal known as the objective function. We introduce an efficient inverse flux balance analysis (invFBA) approach, based on linear programming duality, to characterize the space of possible objective functions compatible with measured fluxes. After testing our algorithm on simulated E. coli data and time-dependent S. oneidensis fluxes inferred from gene expression, we apply our inverse approach to flux measurements in long-term evolved E. coli strains, revealing objective functions that provide insight into metabolic adaptation trajectories.
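For context, the forward problem that invFBA inverts is a linear program: maximize an objective flux subject to steady-state stoichiometry and flux bounds. A toy forward FBA on a hypothetical three-reaction network (the inverse, duality-based step of the paper is not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass).
# Rows of S are metabolites A and B; columns are reactions.
# Steady state requires S v = 0.
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0, 10.0), (0, None), (0, None)]   # uptake capped at 10

# Forward FBA: maximize the biomass flux v3 (linprog minimizes, so negate).
res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)              # [10, 10, 10]

# invFBA asks the inverse question: given measured fluxes v*, which
# objective vectors c make v* optimal? That feasibility problem is posed
# through linear programming duality.
```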
A ground-based radar network to access the 3D structure of MLT winds
NASA Astrophysics Data System (ADS)
Stober, G.; Chau, J. L.; Wilhelm, S.; Jacobi, C.
2016-12-01
The mesosphere/lower thermosphere (MLT) is a highly variable atmospheric region driven by wave dynamics at various scales, including planetary waves, tides and gravity waves. Some of these propagate through the MLT into the thermosphere/ionosphere, carrying energy and momentum from the middle atmosphere into the upper atmosphere. To improve our understanding of the wave energetics and the momentum transfer during their dissipation, it is essential to characterize their space-time properties. During the last two years we have developed a new experimental approach to access the horizontal structure of wind fields in the MLT using a meteor radar network in Germany, which we call MMARIA (Multi-static, Multi-frequency Agile Radar for Investigation of the Atmosphere). The network combines classical backscatter meteor radars and passive forward-scatter radio links. We present our preliminary results using up to 7 different active and passive radio links to obtain horizontally resolved wind fields by applying a statistical inverse method. The wind fields are retrieved with 15-30 minute temporal resolution on a grid with 30x30 km horizontal spacing. Depending on the number of observed meteors, we are able to apply the wind field inversion at heights between 84-94 km. The horizontally resolved wind fields provide insights into the typical horizontal gravity wavelength and the energy cascade from large scales to small scales. We present first power spectra indicating the transition from the synoptic scale to the gravity wave scale.
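The core of a meteor radar wind retrieval is a least-squares inversion of many radial-velocity projections; the regularized, multi-static statistical inversion of the paper reduces, per grid cell, to something like the sketch below (all geometry and wind values are simulated placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
u_true, v_true, w_true = 40.0, -15.0, 0.5    # hypothetical MLT wind (m/s)

# Simulated meteor detections in one grid cell / height bin:
az = rng.uniform(0, 2 * np.pi, 60)                       # azimuth (rad)
el = rng.uniform(np.deg2rad(20), np.deg2rad(70), 60)     # elevation (rad)

# Each detection measures the wind projected along its line of sight:
#   vr = u sin(az) cos(el) + v cos(az) cos(el) + w sin(el)
A = np.column_stack([np.sin(az) * np.cos(el),
                     np.cos(az) * np.cos(el),
                     np.sin(el)])
vr = A @ np.array([u_true, v_true, w_true]) + rng.normal(0, 3.0, 60)  # + noise

wind, *_ = np.linalg.lstsq(A, vr, rcond=None)
print(wind)     # close to (40, -15, 0.5)
```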
Space Industrialization: The Mirage of Abundance.
ERIC Educational Resources Information Center
Deudney, Daniel
1982-01-01
Large-scale space industrialization is not a viable solution to the population, energy, and resource problems of Earth. The expense and technological difficulties involved in developing and maintaining space manufacturing facilities, space colonies, and large-scale satellites for solar power are discussed. (AM)
NASA Technical Reports Server (NTRS)
1999-01-01
This brief three-frame movie of the Moon was made from three Cassini narrow-angle images as the spacecraft passed by the Moon on the way to its closest approach with Earth on August 17, 1999. The purpose of this particular set of images was to calibrate the spectral response of the narrow-angle camera and to test its 'on-chip summing mode' data compression technique in flight. From left to right, they show the Moon in the green, blue and ultraviolet regions of the spectrum in 40, 60 and 80 millisecond exposures, respectively. All three images have been scaled so that the brightness of Crisium basin, the dark circular region in the upper right, is the same in each image. The spatial scale in the blue and ultraviolet images is 1.4 miles per pixel (2.3 kilometers). The original scale in the green image (which was captured in the usual manner and then reduced in size by 2x2 pixel summing within the camera system) was 2.8 miles per pixel (4.6 kilometers). It has been enlarged for display to the same scale as the other two. The imaging data were processed and released by the Cassini Imaging Central Laboratory for Operations (CICLOPS) at the University of Arizona's Lunar and Planetary Laboratory, Tucson, AZ. Photo Credit: NASA/JPL/Cassini Imaging Team/University of Arizona Cassini, launched in 1997, is a joint mission of NASA, the European Space Agency and Italian Space Agency. The mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Space Science, Washington DC. JPL is a division of the California Institute of Technology, Pasadena, CA.
Multi-scale Material Appearance
NASA Astrophysics Data System (ADS)
Wu, Hongzhi
Modeling and rendering the appearance of materials is important for a diverse range of computer graphics applications - from automobile design to movies and cultural heritage. The appearance of materials varies considerably across scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies on the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes large-scale appearance from small-scale details by exploiting the low-rank structure of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials from very sparse measurements. Finally, we present a bi-scale-inspired, high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.
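The low-rank structure the thesis exploits can be illustrated generically (the matrix below is synthetic, standing in for a discretized reflectance operator; this is not the thesis's data or code): replace a dense matrix by a truncated SVD so that downstream products operate on rank-k factors at O(k(m+n)) cost instead of O(mn):

```python
# Generic low-rank approximation via truncated SVD.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((512, 8)) @ rng.standard_normal((8, 512))  # rank-8 core
M += 1e-3 * rng.standard_normal((512, 512))                        # small noise

U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 8
M_k = (U[:, :k] * s[:k]) @ Vt[:k]        # rank-k factorized reconstruction
rel_err = np.linalg.norm(M - M_k) / np.linalg.norm(M)
print(f"rank-{k} relative error: {rel_err:.2e}")
```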
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ~1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
Prioritized System Science Targets for Heliophysics
NASA Astrophysics Data System (ADS)
Spann, J. F.; Christensen, A. B.; St Cyr, O. C.; Posner, A.; Giles, B. L.
2009-12-01
Heliophysics is a discipline that investigates the science at work from the interface of Earth and space, to the core of the Sun, and to the outer edge of our solar system. This solar-interplanetary-planetary system is vast and inherently coupled on many spatial, temporal and energy scales. The Sun's explosive energy output creates complicated field and plasma structures that, when coupled with our terrestrial magnetized space, generate an extraordinarily complex environment with practical implications for humanity as we become increasingly dependent on space-based assets. This immense volume of our cosmic neighborhood is the domain of heliophysics. Understanding this domain and the dominant mechanisms that control the transfer of mass and energy requires a system approach that addresses all aspects and regions of the system. The 2009 NASA Heliophysics Roadmap presents a science-focused strategic approach to advance the goals of heliophysics: why does the Sun vary; how do the Earth and heliosphere respond; and what are the impacts on humanity? This talk will present the top six prioritized science targets for understanding the coupled heliophysics system, as presented in the 2009 NASA Heliophysics Roadmap, and will discuss how each science target addresses outstanding questions in heliophysics.
NASA Astrophysics Data System (ADS)
Pan, Zhen; Anderes, Ethan; Knox, Lloyd
2018-05-01
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant sources of uncertainty in estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations that combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. The method is a pixel-space, all-order likelihood analysis of the quadratically delensed B modes; it builds on the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the computational cost of a single likelihood evaluation.
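As a toy illustration of profiling a likelihood over r (all spectra below are placeholder power laws, and this is a band-power caricature, not the paper's pixel-space all-order likelihood), one can evaluate the full-sky Gaussian field likelihood on a grid of r values:

```python
# Band-power likelihood sketch: model C_ell(r) = r * C_ell^tens + C_ell^res + N_ell,
# with the exact per-ell Gaussian field log-likelihood (full sky, no scatter
# added to the "observation" for clarity).
import numpy as np

ells = np.arange(30, 300)
C_tens = 1e-2 / ells**2        # placeholder tensor template (r = 1)
C_res  = 5e-3 / ells**2        # placeholder post-delensing lensing residual
N_ell  = 2e-3 / ells**2        # placeholder noise spectrum

r_true = 0.01
C_obs = r_true * C_tens + C_res + N_ell

def neg_log_like(r):
    C_model = r * C_tens + C_res + N_ell
    return np.sum((2 * ells + 1) / 2 * (C_obs / C_model + np.log(C_model)))

rs = np.linspace(0.0, 0.03, 301)
nll = np.array([neg_log_like(r) for r in rs])
print("max-likelihood r ~", rs[np.argmin(nll)])   # recovers r_true here
```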
Future System Science Mission Targets for Heliophysics
NASA Technical Reports Server (NTRS)
Spann, James; Christensen, Andrew B.; SaintCyr, O. C.; Giles, Barbara I.; Posner, Arik
2009-01-01
Heliophysics is a discipline that investigates the science at work from the interface of Earth and space, to the core of the Sun, and to the outer edge of our solar system. This solar-interplanetary-planetary system is vast and inherently coupled on many spatial, temporal and energy scales. The Sun's explosive energy output creates complicated field and plasma structures that, when coupled with our terrestrial magnetized space, generate an extraordinarily complex environment with practical implications for humanity as we become increasingly dependent on space-based assets. This immense volume of our cosmic neighborhood is the domain of heliophysics. Understanding this domain and the dominant mechanisms that control the transfer of mass and energy requires a system approach that addresses all aspects and regions of the system. The 2009 NASA Heliophysics Roadmap presents a science-focused strategic approach to advance the goals of heliophysics: why does the Sun vary; how do the Earth and heliosphere respond; and what are the impacts on humanity? This talk will present the top six prioritized science targets for understanding the coupled heliophysics system, as presented in the 2009 NASA Heliophysics Roadmap, and will discuss how each science target addresses outstanding questions in heliophysics.
Prioritized System Science Targets for Heliophysics
NASA Technical Reports Server (NTRS)
Spann, James Frederick; Christensen, Andrew B.; SaintCyr, Orville Chris; Posner, Arik; Giles, Barbara L.
2009-01-01
Heliophysics is a discipline that investigates the science at work from the interface of Earth and space, to the core of the Sun, and to the outer edge of our solar system. This solar-interplanetary-planetary system is vast and inherently coupled on many spatial, temporal and energy scales. The Sun's explosive energy output creates complicated field and plasma structures that, when coupled with our terrestrial magnetized space, generate an extraordinarily complex environment with practical implications for humanity as we become increasingly dependent on space-based assets. This immense volume of our cosmic neighborhood is the domain of heliophysics. Understanding this domain and the dominant mechanisms that control the transfer of mass and energy requires a system approach that addresses all aspects and regions of the system. The 2009 NASA Heliophysics Roadmap presents a science-focused strategic approach to advance the goals of heliophysics: why does the Sun vary; how do the Earth and heliosphere respond; and what are the impacts on humanity? This talk will present the top six prioritized science targets for understanding the coupled heliophysics system, as presented in the 2009 NASA Heliophysics Roadmap, and will discuss how each science target addresses outstanding questions in heliophysics.
NASA Technical Reports Server (NTRS)
Valinia, Azita; Moe, Rud; Seery, Bernard D.; Mankins, John C.
2013-01-01
We present a concept for an ISS-based optical system assembly demonstration designed to advance technologies for the deployment of future large in-space optical facilities, including space solar power collectors and large-aperture astronomical telescopes. The large solar power collector problem is not unlike the large astronomical telescope problem, but given the tolerances involved it should in principle be easier. We leverage heavily the work done on the NASA Optical Testbed Integration on ISS Experiment (OpTIIX) effort to erect a 1.5 m imaging telescope on the International Space Station (ISS). Specifically, we examine a robotic assembly sequence for constructing a large (meter-diameter) slightly aspheric or spherical primary reflector, composed of hexagonal mirror segments affixed to a lightweight rigidizing backplane structure. This approach, together with a structured robot assembler, is shown to be scalable to the areas and areal densities required for large-scale solar concentrator arrays.
Upper limits to submillimetre-range forces from extra space-time dimensions.
Long, Joshua C; Chan, Hilton W; Churnside, Allison B; Gulbis, Eric A; Varney, Michael C M; Price, John C
2003-02-27
String theory is the most promising approach to the long-sought unified description of the four forces of nature and the elementary particles, but direct evidence supporting it is lacking. The theory requires six extra spatial dimensions beyond the three that we observe; it is usually supposed that these extra dimensions are curled up into small spaces. This 'compactification' induces 'moduli' fields, which describe the size and shape of the compact dimensions at each point in space-time. These moduli fields generate forces with strengths comparable to gravity, which according to some recent predictions might be detected on length scales of about 100 μm. Here we report a search for gravitational-strength forces using planar oscillators separated by a gap of 108 μm. No new forces are observed, ruling out a substantial portion of the previously allowed parameter space for the strange and gluon moduli forces, and setting a new upper limit on the range of the string dilaton and radion forces.
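Searches of this kind are conventionally reported as limits on a Yukawa-type correction to the Newtonian potential. The sketch below evaluates that standard parameterization (the masses and the alpha = 1, lambda = 100 μm point are illustrative choices, not the experiment's analysis code):

```python
# Standard Yukawa parameterization used in short-range force searches:
# V(r) = -G m1 m2 / r * (1 + alpha * exp(-r / lambda)).
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def yukawa_potential(r, m1, m2, alpha, lam):
    """Newtonian potential with a Yukawa correction of strength alpha, range lam."""
    return -G * m1 * m2 / r * (1.0 + alpha * math.exp(-r / lam))

# Gravitational-strength (alpha = 1) force with a 100-micrometre range,
# evaluated at the ~108-micrometre gap of the experiment; masses are arbitrary.
r, m1, m2 = 108e-6, 1e-3, 1e-3
v_newton = yukawa_potential(r, m1, m2, 0.0, 100e-6)
v_yukawa = yukawa_potential(r, m1, m2, 1.0, 100e-6)
print(f"fractional change in V at the gap: {v_yukawa / v_newton - 1:.3f}")
```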
An efficient linear-scaling CCSD(T) method based on local natural orbitals.
Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály
2013-09-07
An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach, a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed, including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly efficient CCSD(T) code optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that our method is currently one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.
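A schematic of the two-level domain idea (deliberately a toy: the orbital centroids are random points and the per-domain solver is a stub, whereas the real method builds domains from localized molecular orbitals and solves CCSD(T) in each subspace) shows where the linear scaling comes from - each orbital's domain has bounded size, so total cost grows linearly with orbital count:

```python
# Toy domain construction by distance cutoff, with per-orbital contributions
# summed to a total - the structural skeleton of a linear-scaling local method.
import numpy as np

rng = np.random.default_rng(2)
centers = rng.uniform(0, 50.0, size=(40, 3))   # fake LMO centroids (angstrom)
CUTOFF = 8.0                                   # assumed distance threshold

def local_correlation_energy(domain_indices):
    # Stub for the per-domain CCSD(T) solve; its cost depends only on the
    # (bounded) domain size, which is what yields linear overall scaling.
    return -0.01 * len(domain_indices)

total = 0.0
for i, ci in enumerate(centers):
    dist = np.linalg.norm(centers - ci, axis=1)
    domain = np.flatnonzero(dist < CUTOFF)     # extended domain of orbital i
    total += local_correlation_energy(domain)

print(f"summed correlation energy (toy units): {total:.3f}")
```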
Giotto, Nina; Gerard, Jean-François; Ziv, Alon; Bouskila, Amos; Bar-David, Shirli
2015-01-01
The way in which animals move and use the landscape is influenced by the spatial distribution of resources and is important when considering species conservation. We aimed to explore how landscape-related factors affect a large herbivore's space-use patterns using a combined approach that integrates movement (displacement and recursion) and habitat selection analyses. We studied the endangered Asiatic wild ass (Equus hemionus) in the Negev Desert, Israel, using GPS monitoring and direct observation. We found that the main landscape-related factors affecting the species' space-use patterns, on a daily and seasonal basis, were vegetation cover, water sources and topography. Two main habitat types were selected: high-elevation sites during the day (a specific microclimate: windy on warm summer days) and streambed surroundings during the night (coupled with high vegetation when the animals were active in summer). The distribution of recursion times (the duration between visits to a site) revealed a 24-hour periodicity, a pattern that could be widespread among large herbivores. Characterizing frequently revisited sites suggested that recursion movements were driven mainly by a few landscape features (water sources, vegetation patches, high-elevation points), but also by social factors, such as territoriality, which should be explored further. This study provides complementary insights into the space-use patterns of E. hemionus. Understanding the species' space-use patterns at both large and fine spatial scales is required for developing appropriate conservation protocols. Our approach could be applied to studying the space-use patterns of other species in heterogeneous landscapes.
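A minimal sketch of the recursion analysis described above (the GPS track is synthetic, and the grid size, gap threshold, and daily-loop model are all assumptions): bin fixes onto a grid and collect the time elapsed between successive visits to each cell; a concentration near 24 h indicates daily revisitation:

```python
# Revisit-time (recursion) analysis on a synthetic daily-looping track.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 30 * 24, 0.5)                 # 30 days of half-hourly fixes (h)
x = 5 * np.cos(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)
y = 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)

cell = np.floor(np.column_stack([x, y]) / 2.0).astype(int)  # 2-unit grid cells
last_seen = {}   # cell -> time of the most recent fix in that cell
gaps = []
for ti, (cx, cy) in zip(t, cell):
    key = (int(cx), int(cy))
    if key in last_seen and ti - last_seen[key] > 2.0:  # skip same-visit fixes
        gaps.append(ti - last_seen[key])
    last_seen[key] = ti

print(f"median revisit time: {np.median(gaps):.1f} h")   # ~24 h expected here
```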