Sample records for large parameter space

  1. Dynamics of a neuron model in different two-dimensional parameter-spaces

    NASA Astrophysics Data System (ADS)

    Rech, Paulo C.

    2011-03-01

We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters, a typical scenario is preserved: for all choices of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that there exist regions close to these chaotic regions, separated by the comb teeth, that organize themselves in period-adding bifurcation cascades.

  2. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis.

    PubMed

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often become trapped in a local maximum.
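The local-maxima failure mode described above can be illustrated with a minimal sketch (the toy "performance" curve and all names are invented for illustration, not taken from the paper's pipeline): plain hill climbing stalls on the nearest peak, while a simple multi-start strategy, one of the cheapest ways to jump out of local maxima, recovers the global one.

```python
from math import exp

def performance(x):
    # Toy multimodal "segmentation quality" curve: a local maximum
    # near x = 2 and the global maximum near x = 8.
    return 0.6 * exp(-(x - 2.0) ** 2) + 1.0 * exp(-(x - 8.0) ** 2)

def hill_climb(x, step=0.1, iters=200):
    # Greedy ascent: accepts only uphill moves, so it stalls on
    # whichever peak is nearest to the starting point.
    for _ in range(iters):
        best = max((x - step, x, x + step), key=performance)
        if best == x:
            break
        x = best
    return x

def multi_start(starts, **kw):
    # Restarting from several points is a cheap escape from local maxima.
    return max((hill_climb(s, **kw) for s in starts), key=performance)

local = hill_climb(1.0)               # stalls near the x = 2 peak
best = multi_start([1.0, 5.0, 9.0])   # recovers the x = 8 peak
```

Genetic algorithms and coordinate descent with restarts address the same failure mode with more machinery; the sketch only shows why a purely greedy search is insufficient.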

  3. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    PubMed Central

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often become trapped in a local maximum. PMID:23766941

  4. Interpretation of plasma diagnostics package results in terms of large space structure plasma interactions

    NASA Technical Reports Server (NTRS)

    Kurth, William S.

    1991-01-01

    The Plasma Diagnostics Package (PDP) is a spacecraft which was designed and built at The University of Iowa and which contained several scientific instruments. These instruments were used for measuring Space Shuttle Orbiter environmental parameters and plasma parameters. The PDP flew on two Space Shuttle flights. The first flight of the PDP was on Space Shuttle Mission STS-3 and was a part of the NASA/Office of Space Science payload (OSS-1). The second flight of the PDP was on Space Shuttle Mission STS/51F and was a part of Spacelab 2. The interpretation of both the OSS-1 and Spacelab 2 PDP results in terms of large space structure plasma interactions is emphasized.

  5. Improving parallel I/O autotuning with performance modeling

    DOE PAGES

    Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...

    2014-01-01

Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique on tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving 54X I/O performance speedup.
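The model-based pruning idea, rank the full parameter space with a cheap predictive model and run the expensive benchmark only on a shortlist, can be sketched as follows (the cost functions and parameter names are hypothetical, not those of the paper's framework):

```python
from itertools import product

def benchmark(stripe_count, stripe_size_mb):
    # Toy "true" I/O cost in seconds for one configuration; a stand-in
    # for actually running the expensive parallel write benchmark.
    return abs(stripe_count - 8) * 2.0 + abs(stripe_size_mb - 64) * 0.05 + 1.0

def model(stripe_count, stripe_size_mb):
    # Cheap predictive model used only to *rank* configurations; here it
    # is simply a noiseless approximation of the benchmark.
    return abs(stripe_count - 8) * 2.0 + abs(stripe_size_mb - 64) * 0.05

grid = list(product([1, 2, 4, 8, 16, 32], [1, 4, 16, 64, 256]))

# Rank the full 30-point space with the model, benchmark only the top 10%.
shortlist = sorted(grid, key=lambda c: model(*c))[: max(1, len(grid) // 10)]
best = min(shortlist, key=lambda c: benchmark(*c))   # 3 runs instead of 30
```

The search-time reduction reported in the abstract comes from exactly this kind of shrinkage: the expensive evaluations are spent only where the model predicts good performance.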

  6. Exploring theory space with Monte Carlo reweighting

    DOE PAGES

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...

    2014-10-13

Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  7. Characterizing the Space Debris Environment with a Variety of SSA Sensors

    NASA Technical Reports Server (NTRS)

    Stansbery, Eugene G.

    2010-01-01

    Damaging space debris spans a wide range of sizes and altitudes. Therefore no single method or sensor can fully characterize the space debris environment. Space debris researchers use a variety of radars and optical telescopes to characterize the space debris environment in terms of number, altitude, and inclination distributions. Some sensors, such as phased array radars, are designed to search a large volume of the sky and can be instrumental in detecting new breakups and cataloging and precise tracking of relatively large debris. For smaller debris sizes more sensitivity is needed which can be provided, in part, by large antenna gains. Larger antenna gains, however, produce smaller fields of view. Statistical measurements of the debris environment with less precise orbital parameters result. At higher altitudes, optical telescopes become the more sensitive instrument and present their own measurement difficulties. Space Situational Awareness, or SSA, is concerned with more than the number and orbits of satellites. SSA also seeks to understand such parameters as the function, shape, and composition of operational satellites. Similarly, debris researchers are seeking to characterize similar parameters for space debris to improve our knowledge of the risks debris poses to operational satellites as well as determine sources of debris for future mitigation. This paper will discuss different sensor and sensor types and the role that each plays in fully characterizing the space debris environment.

  8. Effects of SO(10)-inspired scalar non-universality on the MSSM parameter space at large tanβ

    NASA Astrophysics Data System (ADS)

    Ramage, M. R.

    2005-08-01

We analyze the parameter space of the (μ>0, A_0=0) CMSSM at large tanβ with a small degree of non-universality originating from D-terms and Higgs-sfermion splitting inspired by SO(10) GUT models. The effects of such non-universalities on the sparticle spectrum and observables such as (g−2)_μ, B(b→X_sγ), the SUSY threshold corrections to the bottom mass, and Ω_χh² are examined in detail, and the consequences for the allowed parameter space of the model are investigated. We find that even small deviations from universality can result in large qualitative differences compared to the universal case; for certain values of the parameters we find, even at low m_0 and m_1/2, that radiative electroweak symmetry breaking fails as a consequence of either μ²<0 or m_A²<0. We find particularly large departures from the mSugra case for the neutralino relic density, which is sensitive to significant changes in the position and shape of the A resonance and a substantial increase in the Higgsino component of the LSP. However, we find that the corrections to the bottom mass are not sufficient to allow for Yukawa unification.

  9. Optimizing for Large Planar Fractures in Multistage Horizontal Wells in Enhanced Geothermal Systems Using a Coupled Fluid and Geomechanics Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiexiaomen; Tutuncu, Azra; Eustes, Alfred

Enhanced Geothermal Systems (EGS) could potentially use technological advancements in coupled implementation of horizontal drilling and multistage hydraulic fracturing techniques in tight oil and shale gas reservoirs, along with improvements in reservoir simulation techniques, to design and create EGS reservoirs. In this study, a commercial hydraulic fracture simulation package, Mangrove by Schlumberger, was used in an EGS model with largely distributed pre-existing natural fractures to model fracture propagation during the creation of a complex fracture network. The main goal of this study is to investigate optimum treatment parameters in creating multiple large, planar fractures to hydraulically connect a horizontal injection well and a horizontal production well that are 10,000 ft. deep and spaced 500 ft. apart from each other. A matrix of simulations for this study was carried out to determine the influence of reservoir and treatment parameters on preventing (or aiding) the creation of large planar fractures. The reservoir parameters investigated during the matrix simulations include the in-situ stress state and properties of the natural fracture set such as the primary and secondary fracture orientation, average fracture length, and average fracture spacing. The treatment parameters investigated during the simulations were fluid viscosity, proppant concentration, pump rate, and pump volume. A final simulation with optimized design parameters was performed. The optimized design simulation indicated that high fluid viscosity, high proppant concentration, large pump volume and high pump rate tend to minimize the complexity of the created fracture network. Additionally, a reservoir with 'friendly' formation characteristics such as large stress anisotropy, a natural fracture set parallel to the maximum horizontal principal stress (SHmax), and large natural fracture spacing also promotes the creation of large planar fractures while minimizing fracture complexity.

  10. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    NASA Astrophysics Data System (ADS)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to the large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in parameter precision is recovered if we can employ the prior on the rms tide amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained to an accuracy better than the CDM prediction if the effects up to larger wave numbers in the nonlinear regime can be included.

  11. Dynamics in the Parameter Space of a Neuron Model

    NASA Astrophysics Data System (ADS)

Rech, Paulo C.

    2012-06-01

Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that depending on the combination of parameters, a typical scenario can be preserved: for some choice of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.
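A parameter scan classified by the largest Lyapunov exponent, the tool used above, can be sketched on a simpler system; the Hénon map stands in for the Hindmarsh-Rose model here (an illustrative substitution, not the paper's model), since the method, evolve a tangent vector with the Jacobian, renormalize, and average the log growth, carries over directly.

```python
import math

def henon_lyapunov(a, b, n_transient=500, n_iter=3000):
    # Largest Lyapunov exponent of the Henon map
    #   x' = 1 - a*x^2 + y,  y' = b*x
    # estimated by pushing a tangent vector through the Jacobian
    # [[-2*a*x, 1], [b, 0]] and averaging the log of its growth.
    x, y = 0.1, 0.1
    vx, vy = 1.0, 0.0
    total = 0.0
    for i in range(n_transient + n_iter):
        vx, vy = -2.0 * a * x * vx + vy, b * vx   # tangent map at (x, y)
        x, y = 1.0 - a * x * x + y, b * x         # state map
        if abs(x) > 1e6:
            return float("nan")                   # orbit escaped
        norm = math.hypot(vx, vy)
        vx, vy = vx / norm, vy / norm             # renormalize each step
        if i >= n_transient:
            total += math.log(norm)
    return total / n_iter

# One line of a parameter-plane scan: classify a-values at fixed b = 0.3.
scan = {a: henon_lyapunov(a, 0.3) for a in [0.2, 0.6, 1.0, 1.4]}
chaotic = [a for a, lam in scan.items() if lam > 0]   # positive exponent
```

Sweeping both a and b on a grid and coloring by the sign of the exponent produces exactly the kind of chaotic-versus-periodic parameter-plane diagram the abstract describes.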

  12. SP_Ace: Stellar Parameters And Chemical abundances Estimator

    NASA Astrophysics Data System (ADS)

    Boeche, C.; Grebel, E. K.

    2018-05-01

    SP_Ace (Stellar Parameters And Chemical abundances Estimator) estimates the stellar parameters Teff, log g, [M/H], and elemental abundances. It employs 1D stellar atmosphere models in Local Thermodynamic Equilibrium (LTE). The code is highly automated and suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). A web service for calculating these values with the software is also available.

  13. Proceedings of the Workshop on Applications of Distributed System Theory to the Control of Large Space Structures

    NASA Technical Reports Server (NTRS)

    Rodriguez, G. (Editor)

    1983-01-01

    Two general themes in the control of large space structures are addressed: control theory for distributed parameter systems and distributed control for systems requiring spatially-distributed multipoint sensing and actuation. Topics include modeling and control, stabilization, and estimation and identification.

  14. Marginal space learning for efficient detection of 2D/3D anatomical structures in medical images.

    PubMed

    Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin

    2009-01-01

    Recently, marginal space learning (MSL) was proposed as a generic approach for automatic detection of 3D anatomical structures in many medical imaging modalities [1]. To accurately localize a 3D object, we need to estimate nine pose parameters (three for position, three for orientation, and three for anisotropic scaling). Instead of exhaustively searching the original nine-dimensional pose parameter space, only low-dimensional marginal spaces are searched in MSL to improve the detection speed. In this paper, we apply MSL to 2D object detection and perform a thorough comparison between MSL and the alternative full space learning (FSL) approach. Experiments on left ventricle detection in 2D MRI images show MSL outperforms FSL in both speed and accuracy. In addition, we propose two novel techniques, constrained MSL and nonrigid MSL, to further improve the efficiency and accuracy. In many real applications, a strong correlation may exist among pose parameters in the same marginal spaces. For example, a large object may have large scaling values along all directions. Constrained MSL exploits this correlation for further speed-up. The original MSL only estimates the rigid transformation of an object in the image, therefore cannot accurately localize a nonrigid object under a large deformation. The proposed nonrigid MSL directly estimates the nonrigid deformation parameters to improve the localization accuracy. The comparison experiments on liver detection in 226 abdominal CT volumes demonstrate the effectiveness of the proposed methods. Our system takes less than a second to accurately detect the liver in a volume.
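The speed advantage of searching marginal spaces first can be sketched in one dimension (the toy signal, score, and sizes are invented for illustration): rank candidate positions at a nominal scale, keep only a few, then search scale for those candidates, instead of scoring every (position, scale) pair.

```python
# Toy 1D "image": a bright object of width 12 starting at position 40.
signal = [0.0] * 100
for i in range(40, 52):
    signal[i] = 1.0

def score(pos, width):
    # Match score: mean intensity inside the candidate box minus the
    # mean intensity everywhere else (higher is better).
    inside = signal[pos : pos + width]
    if len(inside) < width:
        return -1.0                    # box sticks out of the image
    return sum(inside) / width - (sum(signal) - sum(inside)) / (len(signal) - width)

positions = range(0, 100, 2)
widths = range(4, 30, 2)

# Full space search: score every (position, width) pair.
full = max(((p, w) for p in positions for w in widths), key=lambda c: score(*c))

# Marginal search: rank positions at a nominal width first, keep the top
# five candidates, and search width only for those few.
top_pos = sorted(positions, key=lambda p: score(p, 8), reverse=True)[:5]
marginal = max(((p, w) for p in top_pos for w in widths), key=lambda c: score(*c))
```

Here the marginal search evaluates 50 + 5×13 boxes instead of 50×13, and still lands on the same detection; in nine-dimensional 3D pose spaces the same factorization is what makes exhaustive search unnecessary.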

  15. Variations of cosmic large-scale structure covariance matrices across parameter space

    NASA Astrophysics Data System (ADS)

    Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte

    2017-03-01

    The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles up to ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.

  16. An improved output feedback control of flexible large space structures

    NASA Technical Reports Server (NTRS)

    Lin, Y. H.; Lin, J. G.

    1980-01-01

    A special output feedback control design technique for flexible large space structures is proposed. It is shown that the technique will increase both the damping and frequency of selected modes for more effective control. It is also able to effect integrated control of elastic and rigid-body modes and, in particular, closed-loop system stability and robustness to modal truncation and parameter variation. The technique is seen as marking an improvement over previous work concerning large space structures output feedback control.

  17. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
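The curvature point above, plain steepest ascent is slowed by the inhomogeneous geometry of parameter space while a curvature-rescaled ("rectified") update is not, can be sketched on the smallest maximum entropy model: a single spin matched to an empirical mean. This is an illustration of the general idea only, not the paper's retina-scale algorithm.

```python
import math

# Spins in {-1, +1}; the maximum entropy model matching their mean is
# p(x) = exp(h*x) / Z, whose model mean is tanh(h).
data = [1, 1, 1, -1, 1, -1, 1, 1]
m_data = sum(data) / len(data)                 # empirical mean = 0.5

# Steepest ascent on the log-likelihood: gradient = m_data - tanh(h).
h, lr = 0.0, 0.5
for _ in range(200):
    h += lr * (m_data - math.tanh(h))

# Rectified update: divide the gradient by the local curvature
# (Fisher information = 1 - tanh(h)^2); far fewer iterations needed.
h2 = 0.0
for _ in range(10):
    fisher = 1.0 - math.tanh(h2) ** 2
    h2 += (m_data - math.tanh(h2)) / fisher
```

Both dynamics converge to the moment-matching solution h = atanh(0.5), but the rectified step gets there in a handful of iterations; in high-dimensional Ising models the curvature differs wildly across directions, which is exactly where the rescaling pays off.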

  18. Estimating free-body modal parameters from tests of a constrained structure

    NASA Technical Reports Server (NTRS)

    Cooley, Victor M.

    1993-01-01

    Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.

  19. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

Contents §1. Introduction Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter §2. The large deviation principle and logarithmic asymptotics of continual integrals §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method 3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I) 3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II) 3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176]) 3.4. Exact asymptotics of large deviations of Gaussian norms §4. The Laplace method for distributions of sums of independent random elements with values in Banach space 4.1. The case of a non-degenerate minimum point ([137], I) 4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II) §5. Further examples 5.1. The Laplace method for the local time functional of a Markov symmetric process ([217]) 5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116]) 5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm 5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41]) Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions §6. Pickands' method of double sums 6.1. General situations 6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process 6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process §7. Probabilities of large deviations of trajectories of Gaussian fields 7.1. Homogeneous fields and fields with constant dispersion 7.2. Finitely many maximum points of dispersion 7.3. Manifold of maximum points of dispersion 7.4. Asymptotics of distributions of maxima of Wiener fields §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space 8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1 8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \chi^2 8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74] 8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes 8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process Bibliography

  20. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction

    NASA Astrophysics Data System (ADS)

    Cui, Tiangang; Marzouk, Youssef; Willcox, Karen

    2016-06-01

Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
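The likelihood-informed splitting of the parameter space can be sketched in the linear-Gaussian case with a diagonal forward map (all numbers below are illustrative): directions where the forward sensitivity is large relative to the noise are informed by the data and end up with small posterior variance, while the complement stays essentially at the prior, so only the informed block needs expensive sampling.

```python
# Linear-Gaussian toy inverse problem y = G x + noise, with a diagonal
# forward map G whose sensitivities decay geometrically: only the first
# few parameter directions are actually informed by the likelihood.
n = 8
g = [2.0 ** (-i) for i in range(n)]     # "singular values" 1, 0.5, 0.25, ...
sigma = 0.1                             # observation noise std; prior is N(0, I)

# Conjugate Gaussian posterior variance per direction: 1 / (1 + g_i^2 / sigma^2).
post_var = [1.0 / (1.0 + (gi / sigma) ** 2) for gi in g]

# Likelihood-informed subspace: keep directions where the data shrink the
# prior variance appreciably, i.e. g_i^2 / sigma^2 above a threshold tau.
tau = 1.0
informed = [i for i, gi in enumerate(g) if (gi / sigma) ** 2 > tau]
complement_stays_prior = all(v > 0.5 for i, v in enumerate(post_var) if i not in informed)
```

In the general (non-diagonal, nonlinear) setting the same ranking is obtained from generalized eigenvalues of the prior-preconditioned Gauss-Newton Hessian, estimated from a limited number of forward and adjoint runs.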

  1. Free-decay time-domain modal identification for large space structures

    NASA Technical Reports Server (NTRS)

    Kim, Hyoung M.; Vanhorn, David A.; Doiron, Harold H.

    1992-01-01

    Concept definition studies for the Modal Identification Experiment (MIE), a proposed space flight experiment for the Space Station Freedom (SSF), have demonstrated advantages and compatibility of free-decay time-domain modal identification techniques with the on-orbit operational constraints of large space structures. Since practical experience with modal identification using actual free-decay responses of large space structures is very limited, several numerical and test data reduction studies were conducted. Major issues and solutions were addressed, including closely-spaced modes, wide frequency range of interest, data acquisition errors, sampling delay, excitation limitations, nonlinearities, and unknown disturbances during free-decay data acquisition. The data processing strategies developed in these studies were applied to numerical simulations of the MIE, test data from a deployable truss, and launch vehicle flight data. Results of these studies indicate free-decay time-domain modal identification methods can provide accurate modal parameters necessary to characterize the structural dynamics of large space structures.

  2. Learning Maximal Entropy Models from finite size datasets: a fast Data-Driven algorithm allows to sample from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

A maximum entropy model provides the least constrained probability distribution that reproduces experimental averages of a set of observables. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a ``rectified'' data-driven algorithm that is fast and, by sampling from the parameters' posterior, avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a grant from the Human Brain Project (HBP CLAP).

  3. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, Jon D.

    1990-01-01

    Uncertainty of frequency response using the fuzzy set method and on-orbit response prediction using laboratory test data to refine an analytical model are emphasized with respect to large space structures. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.

  4. The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU

    NASA Astrophysics Data System (ADS)

    Lara, A.; Niembro, T.

    2017-12-01

We introduce the seesaw space, an orthonormal space formed by the local and global fluctuations of any of the four basic solar wind parameters, velocity, density, magnetic field, and temperature, at any heliospheric distance. The fluctuations compare the standard deviation over a three-hour moving window against the running average of the parameter over a month (considered the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need for an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each basic parameter due to the arrival of coronal mass ejections, co-rotating interaction regions, and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences in the norms due to large-scale structures in each period.
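The local-fluctuation component of the seesaw construction can be sketched on a synthetic hourly series (the jump, window lengths, and all numbers are illustrative only, not the paper's WIND processing): a short-window standard deviation normalized by a long-running mean spikes when a large-scale structure arrives.

```python
import math
import statistics

# Synthetic hourly solar-wind speed series: slowly varying quiet wind plus
# a sudden 250 km/s jump at t = 500 mimicking an arriving large-scale structure.
speed = [400.0 + 5.0 * math.sin(t / 24.0) for t in range(1000)]
for t in range(500, 560):
    speed[t] += 250.0

def local_fluctuation(series, t, window=3, baseline=720):
    # Standard deviation over a short (~3 h) window, normalized by the
    # running mean over a long (~month) baseline ending at t.
    short = series[max(0, t - window) : t + 1]
    long_ = series[max(0, t - baseline) : t + 1]
    return statistics.pstdev(short) / statistics.fmean(long_)

quiet = local_fluctuation(speed, 300)     # ambient wind: tiny ratio
arrival = local_fluctuation(speed, 501)   # structure arrival: large spike
```

Pairing this local ratio with the analogous yearly (global) ratio gives the two orthonormal coordinates of the seesaw vector; thresholding its norm is what flags transient arrivals without a human observer.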

  5. Charming dark matter

    NASA Astrophysics Data System (ADS)

    Jubb, Thomas; Kirk, Matthew; Lenz, Alexander

    2017-12-01

    We have considered a model of Dark Minimal Flavour Violation (DMFV), in which a triplet of dark matter particles couples to right-handed up-type quarks via a heavy colour-charged scalar mediator. By studying a large spectrum of possible constraints, and assessing the entire parameter space using a Markov Chain Monte Carlo (MCMC), we can place strong restrictions on the allowed parameter space for dark matter models of this type.

  6. Human pose tracking from monocular video by traversing an image motion mapped body pose manifold

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2010-01-01

    Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within +/-4° of ground truth) to style variance.

  7. Large space telescope, phase A. Volume 3: Optical telescope assembly

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the optical telescope assembly for the Large Space Telescope are discussed. The systems considerations are based on mission-related parameters and optical equipment requirements. Information is included on: (1) structural design and analysis, (2) thermal design, (3) stabilization and control, (4) alignment, focus, and figure control, (5) electronic subsystem, and (6) scientific instrument design.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Jeff; Cornish, Neil J.; Reddinger, J. Lucas

    This work presents the first application of the method of genetic algorithms (GAs) to data analysis for the Laser Interferometer Space Antenna (LISA). In the low frequency regime of the LISA band there are expected to be tens of thousands of galactic binary systems that will be emitting gravitational waves detectable by LISA. The challenge of parameter extraction of such a large number of sources in the LISA data stream requires a search method that can efficiently explore the large parameter spaces involved. As signals of many of these sources will overlap, a global search method is desired. GAs represent such a global search method for parameter extraction of multiple overlapping sources in the LISA data stream. We find that GAs are able to correctly extract source parameters for overlapping sources. Several optimizations of a basic GA are presented with results derived from applications of the GA searches to simulated LISA data.
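
    As an illustration of the search strategy (not the authors' LISA pipeline), a minimal real-coded genetic algorithm with elitist selection, uniform crossover and occasional Gaussian mutation can recover the parameters of a toy two-parameter "source" by maximizing a fitness function; every name and constant below is an assumption for the sketch.

```python
import random

def genetic_search(fitness, bounds, pop=40, gens=60, mut=0.1, seed=1):
    """Minimal real-coded GA: truncation (elitist) selection, uniform
    crossover, occasional Gaussian mutation. Illustrative only."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fitness, reverse=True)[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < mut:                      # mutate one gene
                d = rng.randrange(dim)
                lo, hi = bounds[d]
                child[d] = min(hi, max(lo, child[d] + rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        P = elite + children
    return max(P, key=fitness)

# Toy problem: recover two "source parameters" (0.3, -0.7) by maximizing
# a fitness that peaks at the true values.
best = genetic_search(lambda p: -((p[0] - 0.3) ** 2 + (p[1] + 0.7) ** 2),
                      [(-1.0, 1.0), (-1.0, 1.0)])
```

Because selection acts only on the fitness ordering, overlapping-source searches can swap in an arbitrarily expensive matched-filter fitness without changing the search loop.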

  9. A self-organizing neural network for job scheduling in distributed systems

    NASA Astrophysics Data System (ADS)

    Newman, Harvey B.; Legrand, Iosif C.

    2001-08-01

    The aim of this work is to describe a possible approach for the optimization of job scheduling in large distributed systems, based on a self-organizing neural network. This dynamic scheduling system should be seen as adaptive middle-layer software, aware of currently available resources and making scheduling decisions using "past experience." It aims to optimize job-specific parameters as well as resource utilization. The scheduling system is able to dynamically learn and cluster information in a large-dimensional parameter space and at the same time to explore new regions in the parameter space. This self-organizing scheduling system may offer a possible solution to provide an effective use of resources for the off-line data processing jobs of future HEP experiments.

  10. Clustering fossils in solid inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhshik, Mohammad, E-mail: m.akhshik@ipm.ir

    In solid inflation the single-field non-Gaussianity consistency condition is violated. As a result, the long tensor perturbation induces observable clustering fossils in the form of a quadrupole anisotropy in the large scale structure power spectrum. In this work we revisit the bispectrum analysis for the scalar-scalar-scalar and tensor-scalar-scalar bispectra for the general parameter space of solid inflation. We consider the parameter space of the model in which the level of non-Gaussianity generated is consistent with the Planck constraints. Specializing to this allowed range of model parameters, we calculate the quadrupole anisotropy induced from the long tensor perturbations on the power spectrum of the scalar perturbations. We argue that the imprints of clustering fossils from primordial gravitational waves on large scale structures can be detected by future galaxy surveys.

  11. CosmoSIS: A system for MC parameter estimation

    DOE PAGES

    Bridle, S.; Dodelson, S.; Jennings, E.; ...

    2015-12-23

    CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
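
    The sampler/likelihood split that CosmoSIS formalizes can be caricatured in a few lines: a generic random-walk Metropolis "sampler" that only ever sees a user-supplied log-likelihood callable. This is a sketch of the idea, not CosmoSIS's actual API.

```python
import math
import random

def metropolis(loglike, start, step=0.3, n=20_000, seed=0):
    """Random-walk Metropolis: the sampler knows nothing about the
    physical model except the log-likelihood callable it is handed."""
    rng = random.Random(seed)
    x, lx = list(start), loglike(start)
    chain = []
    for _ in range(n):
        prop = [xi + rng.gauss(0.0, step) for xi in x]
        lp = loglike(prop)
        if lp > lx or rng.random() < math.exp(lp - lx):
            x, lx = prop, lp
        chain.append(list(x))
    return chain

# A toy "likelihood module": Gaussian centred on (1.0, -2.0). The chain
# mean should approach the centre.
chain = metropolis(lambda p: -0.5 * ((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2),
                   [0.0, 0.0])
mean0 = sum(c[0] for c in chain) / len(chain)
mean1 = sum(c[1] for c in chain) / len(chain)
```

Swapping the lambda for any other log-likelihood, cosmological or not, leaves the sampler untouched, which is the portability point the abstract makes.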

  12. Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony

    2009-01-01

    Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space, and complex, highly coupled nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. The data generated are automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours is used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.
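
    The n-factor combinatorial idea can be sketched for n = 2: instead of running the full factorial, greedily assemble test vectors until every pair of parameter values has appeared together at least once. This is an illustrative greedy construction, not the tool the authors used.

```python
import random
from itertools import combinations

def pairwise_suite(levels, candidates=200, seed=0):
    """Greedy 2-factor covering set: pick, from random candidate test
    vectors, the one covering the most not-yet-covered value pairs."""
    rng = random.Random(seed)
    k = len(levels)
    uncovered = {(i, j, a, b)
                 for i, j in combinations(range(k), 2)
                 for a in levels[i] for b in levels[j]}
    suite = []
    while uncovered:
        gain = lambda t: sum((i, j, t[i], t[j]) in uncovered
                             for i, j in combinations(range(k), 2))
        pool = [[rng.choice(l) for l in levels] for _ in range(candidates)]
        best = max(pool, key=gain)
        if gain(best) == 0:          # fallback: force in an uncovered pair
            i, j, a, b = next(iter(uncovered))
            best = [rng.choice(l) for l in levels]
            best[i], best[j] = a, b
        suite.append(best)
        for i, j in combinations(range(k), 2):
            uncovered.discard((i, j, best[i], best[j]))
    return suite

# Four parameters with 3 levels each: the full factorial is 81 runs,
# but far fewer suffice for pairwise (2-factor) coverage.
suite = pairwise_suite([[0, 1, 2]] * 4)
```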

  13. An optimal beam alignment method for large-scale distributed space surveillance radar system

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Wang, Dongya; Xia, Shuangzhi

    2018-06-01

    Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally implementing Transmitting/Receiving (T/R) beam alignment over a large volume with narrow beams poses a special and considerable technical challenge in the space surveillance area. Based on the common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using the direction angles, which can visually describe and assess the beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.

  14. Global fits of GUT-scale SUSY models with GAMBIT

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  15. Implications of improved Higgs mass calculations for supersymmetric models.

    PubMed

    Buchmueller, O; Dolan, M J; Ellis, J; Hahn, T; Heinemeyer, S; Hollik, W; Marrouche, J; Olive, K A; Rzehak, H; de Vries, K J; Weiglein, G

    We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, [Formula: see text], in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyse the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of [Formula: see text]and ATLAS searches for [Formula: see text] events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours [Formula: see text], though not in the NUHM1 or NUHM2.

  16. Space weather modeling using artificial neural network. (Slovak Title: Modelovanie kozmického počasia umelou neurónovou sietou)

    NASA Astrophysics Data System (ADS)

    Valach, F.; Revallo, M.; Hejda, P.; Bochníček, J.

    2010-12-01

    Our modern society with its advanced technology is becoming increasingly vulnerable to the Earth's system disorders originating in explosive processes on the Sun. Coronal mass ejections (CMEs) blasted into interplanetary space as gigantic clouds of ionized gas can hit Earth within a few hours or days and cause, among other effects, geomagnetic storms - perhaps the best known manifestation of solar wind interaction with Earth's magnetosphere. Solar energetic particles (SEP), accelerated to near relativistic energy during large solar storms, arrive at the Earth's orbit even in few minutes and pose serious risk to astronauts traveling through the interplanetary space. These and many other threats are the reason why experts pay increasing attention to space weather and its predictability. For research on space weather, it is typically necessary to examine a large number of parameters which are interrelated in a complex non-linear way. One way to cope with such a task is to use an artificial neural network for space weather modeling, a tool originally developed for artificial intelligence. In our contribution, we focus on practical aspects of the neural networks application to modeling and forecasting selected space weather parameters.
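
    As a concrete (and deliberately tiny) illustration of the approach, assuming nothing about the authors' architecture: a one-hidden-layer network trained by plain gradient descent to reproduce a smooth nonlinear input-output relation, standing in for a solar-wind-input to geomagnetic-index mapping.

```python
import numpy as np

def train_mlp(X, y, hidden=16, lr=0.5, epochs=4000, seed=0):
    """One-hidden-layer tanh network fitted by full-batch gradient
    descent on mean-squared error. Purely illustrative."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # hidden activations
        err = (h @ W2 + b2) - y             # output error
        dh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
        W2 -= lr * h.T @ err / len(X);  b2 -= lr * err.mean(0)
        W1 -= lr * X.T @ dh / len(X);   b1 -= lr * dh.mean(0)
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2

# Fit a smooth nonlinear relation (a stand-in for, e.g., solar-wind
# parameters -> geomagnetic index).
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(2.5 * X)
model = train_mlp(X, y)
```

Real space-weather networks differ mainly in scale and inputs (multiple solar-wind parameters, time-lagged histories), not in this basic training loop.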

  17. Towards physics responsible for large-scale Lyman-α forest bias parameters

    DOE PAGES

    Agnieszka M. Cieplak; Slosar, Anze

    2016-03-08

    Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large scale density (b δ) and velocity gradient (b η) biases of the Lyman-α forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real-space, and progressing through redshift-space with no thermal broadening, redshift-space with thermal broadening and hydrodynamically simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his b η formula is exact in the limit of no thermal broadening. Since introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of b η and the small scale flux PDF might be a sensitive probe of the thermal state of the IGM. Lastly, we find that large-scale biases derived from the smoothed total matter field are within 10–20% of those based on hydrodynamical quantities, in line with other measurements in the literature.

  18. Towards physics responsible for large-scale Lyman-α forest bias parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agnieszka M. Cieplak; Slosar, Anze

    Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large scale density (b δ) and velocity gradient (b η) biases of the Lyman-α forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real-space, and progressing through redshift-space with no thermal broadening, redshift-space with thermal broadening and hydrodynamically simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his b η formula is exact in the limit of no thermal broadening. Since introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of b η and the small scale flux PDF might be a sensitive probe of the thermal state of the IGM. Lastly, we find that large-scale biases derived from the smoothed total matter field are within 10–20% of those based on hydrodynamical quantities, in line with other measurements in the literature.

  19. SP_Ace: A new code to estimate Teff, log g, and elemental abundances

    NASA Astrophysics Data System (ADS)

    Boeche, C.

    2016-09-01

    SP_Ace is a FORTRAN95 code that derives stellar parameters and elemental abundances from stellar spectra. To derive these parameters, SP_Ace does not measure equivalent widths of lines, nor does it use templates of synthetic spectra; instead it employs a new method based on a library of General Curves-Of-Growth. To date SP_Ace works on the wavelength ranges 5212-6860 Å and 8400-8921 Å, and at spectral resolutions R=2000-20000. Extensions of these limits are possible. SP_Ace is a highly automated code suitable for application to large spectroscopic surveys. A web front end to this service is publicly available at http://dc.g-vo.org/SP_ACE together with the library and the binary code.

  20. Sensitivity study of Space Station Freedom operations cost and selected user resources

    NASA Technical Reports Server (NTRS)

    Accola, Anne; Fincannon, H. J.; Williams, Gregory J.; Meier, R. Timothy

    1990-01-01

    The results of sensitivity studies performed to estimate probable ranges for four key Space Station parameters using the Space Station Freedom's Model for Estimating Space Station Operations Cost (MESSOC) are discussed. The variables examined are grouped into five main categories: logistics, crew, design, space transportation system, and training. The modification of these variables implies programmatic decisions in areas such as orbital replacement unit (ORU) design, investment in repair capabilities, and crew operations policies. The model utilizes a wide range of algorithms and an extensive trial logistics data base to represent Space Station operations. The trial logistics data base consists largely of a collection of the ORUs that comprise the mature station, and their characteristics based on current engineering understanding of the Space Station. A nondimensional approach is used to examine the relative importance of variables on parameters.

  1. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    NASA Astrophysics Data System (ADS)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels), may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The state space is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
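
    The intervals-and-inequalities idea can be sketched as a simple "limits of acceptability" filter: a parameter set is behavioural only if every simulated signature falls inside its interval. The toy model and intervals below are made up; the paper itself populates the behavioural region with the Borg MOEA rather than random sampling.

```python
import random

def behavioural_sets(simulate, bounds, limits, n=5000, seed=0):
    """Keep only parameter sets whose every simulated signature falls
    inside its acceptability interval. Random sampling here is a
    stand-in for the paper's MOEA-based search."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        theta = [rng.uniform(lo, hi) for lo, hi in bounds]
        if all(lo <= s <= hi
               for s, (lo, hi) in zip(simulate(theta), limits)):
            kept.append(theta)
    return kept

# Toy "catchment model": two signatures computed from two parameters,
# each constrained to an interval reflecting data uncertainty.
simulate = lambda t: (t[0] + t[1], t[0] * t[1])
good = behavioural_sets(simulate, [(0.0, 2.0), (0.0, 2.0)],
                        [(1.5, 2.5), (0.5, 1.5)])
```

Each extra interval (e.g. a relative groundwater-level constraint) simply adds one more entry to `limits`, shrinking the behavioural region without changing the machinery.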

  2. Modelling the large-scale redshift-space 3-point correlation function of galaxies

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2017-08-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  3. Dynamical analysis of rendezvous and docking with very large space infrastructures in non-Keplerian orbits

    NASA Astrophysics Data System (ADS)

    Colagrossi, Andrea; Lavagna, Michèle

    2018-03-01

    A space station in the vicinity of the Moon can be exploited as a gateway for future human and robotic exploration of the solar system. The natural location for a space system of this kind is about one of the Earth-Moon libration points. The study addresses the dynamics during rendezvous and docking operations with a very large space infrastructure in an EML2 Halo orbit. The model takes into account the coupling effects between the orbital and the attitude motion in a circular restricted three-body problem environment. The flexibility of the system is included, and the interaction between the modes of the structure and those related with the orbital motion is investigated. A lumped parameter technique is used to represent the flexible dynamics. The parameters of the space station are kept as generic as possible, so as to delineate a global scenario of the mission. However, the developed model can be tuned and updated according to the information that will be available in the future, when the whole system is defined with a higher level of precision.

  4. Exploring the potential energy landscape over a large parameter-space

    NASA Astrophysics Data System (ADS)

    He, Yang-Hui; Mehta, Dhagash; Niemerg, Matthew; Rummel, Markus; Valeanu, Alexandru

    2013-07-01

    Large polynomial systems with coefficient parameters are ubiquitous and constitute an important class of problems. We demonstrate the computational power of two methods — a symbolic one called the Comprehensive Gröbner basis and a numerical one called coefficient-parameter polynomial continuation — applied to studying both potential energy landscapes and a variety of questions arising from geometry and phenomenology. Particular attention is paid to an example in flux compactification where important physical quantities such as the gravitino and moduli masses and the string coupling can be efficiently extracted.
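
    The numerical half of the pair can be conveyed by the crudest possible coefficient-parameter continuation: sweep the parameter, recompute the roots, and attach each new root to the nearest existing path. Real homotopy-continuation codes predict and correct along the path instead; this sketch, with a made-up example polynomial, only illustrates the idea of tracking solutions through parameter space.

```python
import numpy as np

def track_roots(coeff_fn, t_values):
    """Naive coefficient-parameter continuation: recompute the roots of
    a parametrised polynomial along a parameter path and attach each new
    root to the nearest existing path."""
    paths = None
    for t in t_values:
        roots = np.roots(coeff_fn(t))
        if paths is None:
            paths = [[z] for z in roots]
            continue
        used = set()
        for p in paths:
            k = min((i for i in range(len(roots)) if i not in used),
                    key=lambda i: abs(roots[i] - p[-1]))
            used.add(k)
            p.append(roots[k])
    return paths

# Made-up example: x**2 - t swept from t=1 to t=2; the two solution
# paths move from +/-1 to +/-sqrt(2).
paths = track_roots(lambda t: [1.0, 0.0, -t], np.linspace(1.0, 2.0, 11))
```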

  5. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

  6. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  7. Numerical Experimentation with Maximum Likelihood Identification in Static Distributed Systems

    NASA Technical Reports Server (NTRS)

    Scheid, R. E., Jr.; Rodriguez, G.

    1985-01-01

    Many important issues in the control of large space structures are intimately related to the fundamental problem of parameter identification. One might also ask how well this identification process can be carried out in the presence of noisy data since no sensor system is perfect. With these considerations in mind the algorithms herein are designed to treat both the case of uncertainties in the modeling and uncertainties in the data. The analytical aspects of maximum likelihood identification are considered in some detail in another paper. The questions relevant to the implementation of these schemes are dealt with, particularly as they apply to models of large space structures. The emphasis is on the influence of the infinite dimensional character of the problem on finite dimensional implementations of the algorithms. Those areas of current and future analysis are highlighted which indicate the interplay between error analysis and possible truncations of the state and parameter spaces.

  8. Proceedings of the Workshop on Identification and Control of Flexible Space Structures, Volume 3

    NASA Technical Reports Server (NTRS)

    Rodriguez, G. (Editor)

    1985-01-01

    The results of a workshop on identification and control of flexible space structures are reported. This volume deals mainly with control theory and methodologies as they apply to space stations and large antennas. Integration and dynamics and control experimental findings are reported. Among the areas of control theory discussed were feedback, optimization, and parameter identification.

  9. Model verification of large structural systems. [space shuttle model response

    NASA Technical Reports Server (NTRS)

    Lee, L. T.; Hasselman, T. K.

    1978-01-01

    A computer program for the application of parameter identification to the structural dynamic models of the space shuttle and other large models with hundreds of degrees of freedom is described. Finite element, dynamic, analytic, and modal models are used to represent the structural system. The interface with math models is such that output from any structural analysis program applied to any structural configuration can be used directly. Processed data from either sine-sweep tests or resonant dwell tests are directly usable. The program uses measured modal data to condition the prior analytic model so as to improve the frequency match between model and test. A Bayesian estimator generates an improved analytical model, and a linear estimator is used in an iterative fashion on highly nonlinear equations. Mass and stiffness scaling parameters are generated for an improved finite element model, and the optimum set of parameters is obtained in one step.
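
    The Bayesian conditioning step described above can be sketched as a single linear-Gaussian update of the scaling parameters, assuming a model linearized with sensitivity matrix H; the variable names and the toy example are assumptions for illustration, not the program's interface.

```python
import numpy as np

def bayes_update(theta, P, y, h, H, R):
    """One Bayesian (Kalman-style) update: combine a prior parameter
    estimate theta (covariance P) with measurements y of the linearized
    model h (sensitivity H, measurement covariance R)."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # gain
    theta_new = theta + K @ (y - h(theta))
    P_new = (np.eye(len(theta)) - K @ H) @ P
    return theta_new, P_new

# Toy example: one stiffness-scaling parameter, one measured modal
# quantity, equal prior and measurement confidence -> the posterior
# splits the difference between prior (1.0) and measurement (2.0).
theta, P = bayes_update(np.array([1.0]), np.eye(1), np.array([2.0]),
                        lambda t: t.copy(), np.eye(1), np.eye(1))
```

Iterating this update with a re-linearized h at each step is the standard way such a linear estimator is applied to the highly nonlinear frequency-matching equations.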

  10. Engineering Low Dimensional Materials with van der Waals Interaction

    NASA Astrophysics Data System (ADS)

    Jin, Chenhao

    Two-dimensional van der Waals materials have grown into a large and active field of condensed matter physics over the past decade. One particularly intriguing possibility is stacking different layers together as one wishes, like playing a Lego game, to create artificial structures that do not exist in nature. These new structures can enable rich new physics from interlayer interaction: the interaction is strong, because in low-dimensional materials electrons are exposed at the interface and are susceptible to other layers, and the screening of the interaction is less prominent. The consequences are rich, not only because of the extensive list of two-dimensional materials available nowadays, but also because of the freedom of interlayer configuration, such as displacement and twist angle, which creates a gigantic parameter space to play with. On the other hand, this huge parameter space can make it challenging to describe the systems consistently with a single picture. For example, the large periodicity, or even incommensurability, of van der Waals systems creates difficulty in using periodic boundary conditions. Worse still, the huge superlattice unit cell and the overwhelming computational effort involved have to some extent prevented the establishment of a simple physical picture for understanding the evolution of system properties in the parameter space of interlayer configuration. In the first part of the dissertation, I focus on classifying the huge parameter space into subspaces and introduce suitable theoretical approaches for each subspace. For each approach, I discuss its validity, limitations, general solution, and a specific example of application demonstrating how one can obtain the most important effects of interlayer interaction with little computational effort.
Combining all the approaches introduced provides an analytic solution covering the majority of the parameter space, which is very helpful for understanding the intuitive physical picture behind the consequences of interlayer interaction, as well as their systematic evolution across the parameter space. Experimentally, optical spectroscopy is a powerful tool for investigating material properties, owing to its insensitivity to extrinsic effects such as defects, its capability of obtaining information over a large spectral range, and its sensitivity not only to the density of states but also to the wavefunction through the transition matrix element. Following the classification of interlayer interaction, I present optical spectroscopy studies of three van der Waals systems: two-dimensional few-layer phosphorene, one-dimensional double-walled nanotubes, and two-dimensional graphene/hexagonal boron nitride heterostructures. The experimental results exhibit rich and distinctly different effects of interlayer interaction in these systems, demonstrating the colorful physics that emerges from the large parameter space. At the same time, all these cases are well described by the methods developed in the theory part, which explain the experimental results quantitatively through only a few parameters, each with a clear physical meaning. The formalism given here, from both theoretical and experimental perspectives, therefore offers a generally useful methodology to study, understand, and design van der Waals materials for both fascinating physics and novel applications.

  11. Distributed control of large space antennas

    NASA Technical Reports Server (NTRS)

    Cameron, J. M.; Hamidi, M.; Lin, Y. H.; Wang, S. J.

    1983-01-01

    A systematic way to choose control design parameters and to evaluate performance for large space antennas is presented. The structural dynamics and control properties of a Hoop and Column Antenna and a Wrap-Rib Antenna are characterized. Some results on the effects of model parameter uncertainties on stability, surface accuracy, and pointing errors are presented. Critical dynamics and control problems for these antenna configurations are identified and potential solutions are discussed. It was concluded that structural uncertainties and model error can cause serious performance deterioration and can even destabilize the controllers. For the hoop and column antenna, the large hoop, the long mast, and the lack of stiffness between the two substructures result in low structural frequencies; performance can be improved if this design is strengthened. The two-site control system is more robust than either single-site control system for the hoop and column antenna.

  12. SP_Ace: a new code to derive stellar parameters and elemental abundances

    NASA Astrophysics Data System (ADS)

    Boeche, C.; Grebel, E. K.

    2016-03-01

    Context. Ongoing and future massive spectroscopic surveys will collect large numbers (10^6-10^7) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims: We developed a new method of estimating the stellar parameters Teff, log g, [M/H], and elemental abundances. This method was implemented in a new code, SP_Ace (Stellar Parameters And Chemical abundances Estimator). This is a highly automated code suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). Methods: After the astrophysical calibration of the oscillator strengths of 4643 absorption lines covering the wavelength ranges 5212-6860 Å and 8400-8924 Å, we constructed a library that contains the equivalent widths (EW) of these lines for a grid of stellar parameters. The EWs of each line are fit by a polynomial function that describes the EW of the line as a function of the stellar parameters. The coefficients of these polynomial functions are stored in a library called the "GCOG library". SP_Ace, a code written in FORTRAN95, uses the GCOG library to compute the EWs of the lines, constructs models of spectra as a function of the stellar parameters and abundances, and searches for the model that minimizes the χ2 deviation when compared to the observed spectrum. The code has been tested on synthetic and real spectra for a wide range of signal-to-noise ratios and spectral resolutions. Results: SP_Ace derives stellar parameters such as Teff, log g, [M/H], and chemical abundances of up to ten elements for low to medium resolution spectra of FGK-type stars with precision comparable to that usually obtained with spectra of higher resolution. Systematic errors in stellar parameters and chemical abundances are presented and identified with tests on synthetic and real spectra. 
Stochastic errors are automatically estimated by the code for all the parameters. A simple Web front end of SP_Ace can be found at http://dc.g-vo.org/SP_ACE while the source code will be published soon. Full Tables D.1-D.3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/587/A2
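
The fitting loop described above can be sketched in a much-simplified form. This is our own illustration, not the FORTRAN95 code: it fits equivalent widths directly rather than full spectra, and the "GCOG-like" table of polynomial coefficients below is random, purely for demonstration:

```python
import numpy as np
from scipy.optimize import minimize

# Each line's EW is modeled as a polynomial in the stellar parameters
# (Teff in kK to keep scales comparable, log g, [M/H]); chi^2 between
# observed and modeled EWs is then minimized over those parameters.
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(20, 4))       # 20 lines x [const, Teff, log g, M/H]

def model_ews(params):
    teff, logg, mh = params
    return coeffs @ np.array([1.0, teff, logg, mh])

true_params = np.array([5.777, 4.44, 0.0])          # a Sun-like star
observed = model_ews(true_params) + rng.normal(scale=0.01, size=20)

def chi2(params):
    return np.sum(((observed - model_ews(params)) / 0.01) ** 2)

# search for the stellar parameters that minimize the chi^2 deviation
fit = minimize(chi2, x0=[5.0, 4.0, -0.5], method="Nelder-Mead")
```

With 20 lines constraining 3 parameters, the minimizer recovers the input parameters to well within the noise level.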

  13. Imprint of non-linear effects on HI intensity mapping on large scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umeh, Obinna, E-mail: umeobinna@gmail.com

    Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature in both real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift space distortion terms, modulates the power spectrum on large scales. This large-scale modulation may be understood as arising from an effective bias parameter and effective shot noise.

  14. Imprint of non-linear effects on HI intensity mapping on large scales

    NASA Astrophysics Data System (ADS)

    Umeh, Obinna

    2017-06-01

    Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature in both real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift space distortion terms, modulates the power spectrum on large scales. This large-scale modulation may be understood as arising from an effective bias parameter and effective shot noise.

  15. Electroweak supersymmetry in the NMSSM

    NASA Astrophysics Data System (ADS)

    Cheng, Taoli; Li, Tianjun

    2013-07-01

    To explain all the available experimental results, we previously proposed electroweak supersymmetry (EWSUSY), in which the squarks and/or gluino are heavy, around a few TeV, while the sleptons, sneutrinos, bino, winos, and/or Higgsinos are light, within 1 TeV. In the next-to-minimal supersymmetric Standard Model, we perform systematic χ2 analyses of parameter space scans for three EWSUSY scenarios: (I) R-parity conservation and one dark matter candidate, (II) R-parity conservation and multicomponent dark matter, and (III) R-parity violation. We obtain a minimal χ2/(degree of freedom) of 10.2/15, 9.6/14, and 9.2/14 for scenarios I, II, and III, respectively. Considering the constraints from the LHC neutralino/chargino and slepton searches, we find that the majority of the viable parameter space preferred by the muon anomalous magnetic moment has been excluded, except for the parameter space with moderate to large tan β (≳8). In particular, the most favorable parameter space has relatively large tan β, moderate λ, small μeff, heavy squarks/gluino, and a second-lightest CP-even neutral Higgs boson with mass around 125 GeV. In addition, if the left-handed smuon is nearly degenerate with or heavier than the wino, there is no definite bound on the wino mass; otherwise, winos with mass up to ~450 GeV have been excluded. Furthermore, we present several benchmark points for scenarios I and II, and briefly discuss the prospects for EWSUSY searches at the 14 TeV LHC and the ILC.

  16. Control technology development

    NASA Astrophysics Data System (ADS)

    Schaechter, D. B.

    1982-03-01

    The main objectives of the control technology development task are given in the slide below. The first is to develop control design techniques based on flexible structural models, rather than simple rigid-body models. Since large space structures are distributed parameter systems, a new degree of freedom, that of sensor/actuator placement, may be exercised for improving control system performance. Another characteristic of large space structures is numerous oscillatory modes within the control bandwidth. Reduced-order controller design models must be developed which produce stable closed-loop systems when combined with the full-order system. Since the date of an actual large-space-structure flight is rapidly approaching, it is vitally important that theoretical developments are tested in actual hardware. Experimental verification is a vital counterpart of all current theoretical developments.

  17. Signs and stability in higher-derivative gravity

    NASA Astrophysics Data System (ADS)

    Narain, Gaurav

    2018-02-01

    Perturbatively renormalizable higher-derivative gravity in four space-time dimensions with arbitrary signs of the couplings is considered. A systematic analysis of the action in Lorentzian flat space-time, requiring the absence of tachyons, fixes the signs. The Feynman +iε prescription for these signs further provides the necessary convergence in the path integral, suppressing field modes with large action. This also leads to a sensible Wick rotation under which quantum computations can be performed. Running couplings for this choice of signs render the massive tensor ghost innocuous, leading to a stable and ghost-free renormalizable theory in four space-time dimensions. The theory has a transition point, arising from the renormalization group (RG) equations, where the coefficient of R2 diverges without affecting the perturbative quantum field theory (QFT). Redefining this coefficient gives a better handle on the theory around the transition point. The flow equations push the parameters across the transition point. The flow beyond the transition point, analyzed using the one-loop RG equations, shows that this regime has unphysical properties: there are tachyons, the path integral loses positive definiteness, Newton's constant G becomes negative and large, and the perturbative parameters become large. These shortcomings indicate a lack of completeness of the theory beyond the transition point and the need for a nonperturbative treatment there.

  18. Just-in-time connectivity for large spiking networks.

    PubMed

    Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L

    2008-11-01

    The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
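
The JitEvent queue discipline described above can be sketched with a few lines of discrete-event code. This is our own toy, not NEURON's implementation; the hash-based connectivity rule, delays, and cell counts are invented:

```python
import heapq

def synapses(pre, n_cells=10):
    # Connectivity is regenerated on demand from a deterministic rule
    # (a toy hash here) instead of being stored: 3 targets per cell,
    # returned as (delay_ms, postsynaptic id) sorted by delay.
    return [(1.0 + 0.5 * k, (pre * 7 + k * 13) % n_cells) for k in range(3)]

delivered = []   # (delivery time, pre, post), recorded for inspection

def run(spikes):
    # Each queue entry is ONE pending callback per active presynaptic cell:
    # (time, pre, synapse index).  A spike posts a single event at the
    # shortest delay; each callback notifies one postsynaptic cell and
    # re-posts itself for the next delay.
    queue = []
    for t_spike, pre in spikes:
        heapq.heappush(queue, (t_spike + synapses(pre)[0][0], pre, 0))
    while queue:
        t, pre, i = heapq.heappop(queue)
        syns = synapses(pre)             # regenerated just in time
        delivered.append((t, pre, syns[i][1]))
        if i + 1 < len(syns):            # self-callback for the next delay
            heapq.heappush(queue, (t + syns[i + 1][0] - syns[i][0], pre, i + 1))

run([(0.0, 2), (1.0, 5)])
```

The queue never holds more than one event per spiking cell, which is the space saving relative to posting one event per postsynaptic target.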

  19. Just in time connectivity for large spiking networks

    PubMed Central

    Lytton, William W.; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L

    2008-01-01

    The scale of large neuronal network simulations is memory-limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically-relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed – just-in-time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON’s standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory-limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that only added items to the queue when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run. PMID:18533821

  20. Tool Support for Parametric Analysis of Large Software Simulation Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
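
The n-factor combinatorial idea can be illustrated with a small greedy 2-factor (pairwise) case generator. This is not the Trick-based tool itself, and the parameter names and levels below are invented:

```python
from itertools import combinations, product

# Hypothetical simulation parameters, each with a few discrete levels.
params = {
    "mass_margin":  [0.9, 1.0, 1.1],
    "sensor_noise": ["low", "high"],
    "thrust_bias":  [-1, 0, 1],
    "latency_ms":   [10, 50],
}
names = list(params)

# Every (factor, value, factor, value) pair that a 2-factor design must cover.
uncovered = set()
for (i, a), (j, b) in combinations(enumerate(names), 2):
    for va, vb in product(params[a], params[b]):
        uncovered.add((i, va, j, vb))

cases = []
while uncovered:
    # Greedily pick the full-factorial candidate covering the most new pairs.
    best = max(product(*params.values()),
               key=lambda c: sum((i, c[i], j, c[j]) in uncovered
                                 for i, j in combinations(range(len(names)), 2)))
    cases.append(best)
    for i, j in combinations(range(len(names)), 2):
        uncovered.discard((i, best[i], j, best[j]))
```

All two-way parameter interactions are exercised with far fewer cases than the 36-point full factorial, which is the point of n-factor combinatorial variation.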

  1. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.

  2. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed continuous-simulation hydrologic models have a large number of parameters that can potentially be adjusted during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, or RMSE - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration: many solutions perform similarly, yet their parameter sets vary grossly. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions, which evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but with parameter sets that also maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) method within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
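
The inclusion of expert knowledge in the objective vector might look like the following sketch. The parameter names and plausible ranges are invented, and the real SNOW-17/SAC-SMA objectives are more elaborate:

```python
import numpy as np

def nse(sim, obs):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean.
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Expert-judged plausible ranges (illustrative values, not calibrated ones).
expert_range = {"melt_factor": (2.0, 4.0), "uztwm": (25.0, 125.0)}

def expert_penalty(params):
    # 0 when every parameter sits inside its plausible range, growing
    # linearly with the normalized distance outside it.
    pen = 0.0
    for name, (lo, hi) in expert_range.items():
        x = params[name]
        if x < lo:
            pen += (lo - x) / (hi - lo)
        elif x > hi:
            pen += (x - hi) / (hi - lo)
    return pen

def objectives(params, sim, obs):
    # Multi-objective vector: maximize NSE, minimize the expert penalty,
    # so equally well-fitting but implausible parameter sets are dominated.
    return nse(sim, obs), expert_penalty(params)

obs = [1.0, 2.0, 4.0, 3.0]
good = objectives({"melt_factor": 3.0, "uztwm": 60.0}, [1.1, 2.1, 3.8, 3.0], obs)
odd = objectives({"melt_factor": 9.0, "uztwm": 60.0}, [1.1, 2.1, 3.8, 3.0], obs)
```

Two parameter sets producing the same hydrograph get identical NSE, but the implausible one is dominated on the expert-knowledge objective.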

  3. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to their optimization and space-searching capabilities, genetic algorithms were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
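
A minimal version of GA-based input selection can be sketched as follows. The data are a synthetic stand-in for the SSME measurements (the target depends only on inputs 0 and 3, by construction), and a linear fit replaces the neural network to keep the example short:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))                        # 8 candidate inputs
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)

def fitness(mask):
    # Score an input subset by its prediction error, with a small
    # per-input penalty that favors shorter parameter lists.
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return -np.inf
    beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return -np.mean((X[:, cols] @ beta - y) ** 2) - 0.01 * cols.size

pop = rng.integers(0, 2, size=(20, 8))               # population of bitmasks
for _ in range(30):                                  # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    cut = rng.integers(1, 8, size=10)
    kids = np.array([np.concatenate([parents[i][:c], parents[-1 - i][c:]])
                     for i, c in enumerate(cut)])    # one-point crossover
    flips = rng.random(kids.shape) < 0.05            # bit-flip mutation
    kids = np.where(flips, 1 - kids, kids)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
```

The evolved mask retains the two informative inputs, mirroring the paper's observation that the GA finds high-quality parameter lists without domain knowledge.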

  4. Analytical Verifications in Cryogenic Testing of NGST Advanced Mirror System Demonstrators

    NASA Technical Reports Server (NTRS)

    Cummings, Ramona; Levine, Marie; VanBuren, Dave; Kegley, Jeff; Green, Joseph; Hadaway, James; Presson, Joan; Cline, Todd; Stahl, H. Philip (Technical Monitor)

    2002-01-01

    Ground-based testing is a critical and costly part of component, assembly, and system verifications of large space telescopes. At such tests, however, with integral teamwork by planners, analysts, and test personnel, segments can be included to validate specific analytical parameters and algorithms at relatively low additional cost. This paper opens with the strategy of analytical verification segments added to vacuum cryogenic testing of Advanced Mirror System Demonstrator (AMSD) assemblies. These AMSD assemblies incorporate material and architecture concepts being considered in the Next Generation Space Telescope (NGST) design. The test segments for workmanship testing, cold survivability, and cold operation optical throughput are supplemented by segments for analytical verification of specific structural, thermal, and optical parameters. Utilizing integrated modeling and separate materials testing, the paper continues with a support plan for analyses, data, and observation requirements during the AMSD testing, currently slated for late calendar year 2002 to mid calendar year 2003. The paper includes anomaly resolution as gleaned by the authors from similar analytical verification support of a previous large space telescope, then closes with a draft of plans for parameter extrapolations, to form a well-verified portion of the integrated modeling being done for NGST performance predictions.

  5. Combining states without scale hierarchies with ordered parton showers

    DOE PAGES

    Fischer, Nadine; Prestel, Stefan

    2017-09-12

    Here, we present a parameter-free scheme to combine fixed-order multi-jet results with parton-shower evolution. The scheme produces jet cross sections with leading-order accuracy in the complete phase space of multiple emissions, resumming large logarithms when appropriate, while not arbitrarily enforcing ordering on momentum configurations beyond the reach of the parton-shower evolution equation. This then requires the development of a matrix-element correction scheme for complex phase-spaces including ordering conditions as well as a systematic scale-setting procedure for unordered phase-space points. Our algorithm does not require a merging-scale parameter. We implement the new method in the Vincia framework and compare to LHC data.

  6. MAPPING GROWTH AND GRAVITY WITH ROBUST REDSHIFT SPACE DISTORTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwan, Juliana; Lewis, Geraint F.; Linder, Eric V.

    2012-04-01

    Redshift space distortions (RSDs) caused by galaxy peculiar velocities provide a window onto the growth rate of large-scale structure and a method for testing general relativity. We investigate, through a comparison of N-body simulations to various extensions of perturbation theory beyond the linear regime, the robustness of cosmological parameter extraction, including the gravitational growth index γ. We find that the Kaiser formula and some perturbation theory approaches bias the growth rate by 1σ or more relative to the fiducial value at scales as large as k > 0.07 h Mpc⁻¹. This bias propagates to estimates of the gravitational growth index as well as Ωm and the equation-of-state parameter, and presents a significant challenge to modeling RSDs. We also determine an accurate fitting function for a combination of line-of-sight damping and higher order angular dependence that allows robust modeling of the redshift space power spectrum to substantially higher k.
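
For reference, the linear Kaiser formula mentioned above is P_s(k, μ) = (b + f μ²)² P(k), with bias b and growth rate f. Its standard monopole and quadrupole moments can be written down and checked numerically (the b and f values below are illustrative):

```python
import numpy as np

def kaiser_multipoles(b, f):
    # Analytic monopole and quadrupole of (b + f mu^2)^2, in units of
    # the real-space power spectrum P(k), with beta = f / b.
    beta = f / b
    p0 = b**2 * (1 + 2 * beta / 3 + beta**2 / 5)
    p2 = b**2 * (4 * beta / 3 + 4 * beta**2 / 7)
    return p0, p2

def trapezoid(y, x):
    # small version-proof trapezoid rule (numpy 2 renamed np.trapz)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

b, f = 2.0, 0.5
mu = np.linspace(-1.0, 1.0, 20001)
kernel = (b + f * mu**2) ** 2

# Project onto Legendre polynomials L0 = 1 and L2 = (3 mu^2 - 1)/2.
p0_num = 0.5 * trapezoid(kernel, mu)
p2_num = 2.5 * trapezoid(kernel * 0.5 * (3 * mu**2 - 1), mu)
p0, p2 = kaiser_multipoles(b, f)
```

It is precisely the departures from this linear prediction, at surprisingly large scales, that the abstract quantifies.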

  7. Preliminary results on the dynamics of large and flexible space structures in Halo orbits

    NASA Astrophysics Data System (ADS)

    Colagrossi, Andrea; Lavagna, Michèle

    2017-05-01

    The global exploration roadmap suggests, among other ambitious future space programmes, a possible manned outpost in lunar vicinity to support surface operations and further astronaut training for longer and deeper space missions and transfers. In particular, a Lagrangian point orbit location in the Earth-Moon system is suggested for a manned cis-lunar infrastructure, a proposal which opens an interesting field of study from the astrodynamics perspective. The literature offers a wide body of research on orbital dynamics under the Three-Body Problem modelling approach, while less of it also includes attitude dynamics modelling. However, whenever a large space structure (ISS-like) is considered, not only should the coupled orbit-attitude dynamics be modelled to run more accurate analyses, but the structural flexibility should be included too. The paper, starting from the well-known Circular Restricted Three-Body Problem formulation, presents some preliminary results obtained by adding a coupled orbit-attitude dynamical model and the effects due to the flexibility of the large structure. In addition, the most relevant perturbing phenomena, such as the Solar Radiation Pressure (SRP) and the fourth-body (Sun) gravity, are included in the model as well. A multi-body approach has been preferred to represent possible configurations of the large cis-lunar infrastructure: interconnected simple structural elements, such as beams, rods, or lumped masses linked by springs, build up the space segment. To better investigate the relevance of the flexibility effects, the lumped-parameter approach is compared with a distributed-parameter semi-analytical technique. A sensitivity analysis of the system dynamics with respect to different configurations and mechanical properties of the extended structure is also presented, in order to highlight drivers for the lunar outpost design. 
Furthermore, a case study for a large and flexible space structure in Halo orbits around one of the Earth-Moon collinear Lagrangian points, L1 or L2, is discussed to point out some relevant outcomes for the potential implementation of such a mission.

  8. Z boson mediated dark matter beyond the effective theory

    DOE PAGES

    Kearney, John; Orlofsky, Nicholas; Pierce, Aaron

    2017-02-17

    Here, direct detection bounds are beginning to constrain a very simple model of weakly interacting dark matter—a Majorana fermion with a coupling to the Z boson. In a particularly straightforward gauge-invariant realization, this coupling is introduced via a higher-dimensional operator. While attractive in its simplicity, this model generically induces a large ρ parameter. An ultraviolet completion that avoids an overly large contribution to ρ is the singlet-doublet model. We revisit this model, focusing on the Higgs blind spot region of parameter space where spin-independent interactions are absent. This model successfully reproduces dark matter with direct detection mediated by the Z boson but whose cosmology may depend on additional couplings and states. Future direct detection experiments should effectively probe a significant portion of this parameter space, aside from a small coannihilating region. As such, Z-mediated thermal dark matter as realized in the singlet-doublet model represents an interesting target for future searches.

  9. Trap configuration and spacing influences parameter estimates in spatial capture-recapture models

    USGS Publications Warehouse

    Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew

    2014-01-01

    An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
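
The trap-spacing heuristic can be made concrete with the half-normal detection function commonly used in spatial capture-recapture models; the baseline detection probability p0 and scale σ below are illustrative, not values from the simulations:

```python
import numpy as np

def detection_prob(d, p0=0.2, sigma=1.0):
    # Half-normal detection model: p(d) = p0 * exp(-d^2 / (2 sigma^2)),
    # where d is the distance from an individual's activity center to a
    # trap and sigma is the spatial scale parameter tied to home range size.
    return p0 * np.exp(-d**2 / (2 * sigma**2))

# With traps spaced 2*sigma apart (the recommended maximum), an activity
# center midway between traps sits at d = sigma from each of them; with
# spacing 6*sigma, the midpoint is at d = 3*sigma and rarely detected.
p_recommended = detection_prob(1.0)
p_too_sparse = detection_prob(3.0)
```

The drop of more than an order of magnitude between the two spacings illustrates why individuals between widely spaced traps go undetected, degrading estimates of σ and population size.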

  10. Analysis and Ground Testing for Validation of the Inflatable Sunshield in Space (ISIS) Experiment

    NASA Technical Reports Server (NTRS)

    Lienard, Sebastien; Johnston, John; Adams, Mike; Stanley, Diane; Alfano, Jean-Pierre; Romanacci, Paolo

    2000-01-01

    The Next Generation Space Telescope (NGST) design requires a large sunshield to protect the large aperture mirror and instrument module from constant solar exposure at its L2 orbit. The structural dynamics of the sunshield must be modeled in order to predict disturbances to the observatory attitude control system and gauge effects on the line of site jitter. Models of large, non-linear membrane systems are not well understood and have not been successfully demonstrated. To answer questions about sunshield dynamic behavior and demonstrate controlled deployment, the NGST project is flying a Pathfinder experiment, the Inflatable Sunshield in Space (ISIS). This paper discusses in detail the modeling and ground-testing efforts performed at the Goddard Space Flight Center to: validate analytical tools for characterizing the dynamic behavior of the deployed sunshield, qualify the experiment for the Space Shuttle, and verify the functionality of the system. Included in the discussion will be test parameters, test setups, problems encountered, and test results.

  11. Improving the Fitness of High-Dimensional Biomechanical Models via Data-Driven Stochastic Exploration

    PubMed Central

    Bustamante, Carlos D.; Valero-Cuevas, Francisco J.

    2010-01-01

    The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility and, to our knowledge, the first use of an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a “truth model” of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian “measurement noise.” Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima, but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
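    A minimal random-walk Metropolis-Hastings sampler of the kind described above can be sketched as follows; the toy 3-D Gaussian target and all tuning constants are hypothetical stand-ins for the thumb model's posterior:

```python
import math
import random

def metropolis_hastings(log_post, x0, step, n_steps, rng):
    """Random-walk Metropolis-Hastings: propose Gaussian moves and
    accept with probability min(1, posterior ratio)."""
    chain = [x0]
    x, lp = x0, log_post(x0)
    for _ in range(n_steps):
        prop = [xi + rng.gauss(0.0, step) for xi in x]
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy target: independent standard normals in 3 dimensions
def log_post(x):
    return -0.5 * sum(xi * xi for xi in x)

rng = random.Random(0)
chain = metropolis_hastings(log_post, [5.0, 5.0, 5.0], step=0.8, n_steps=5000, rng=rng)
burned = chain[1000:]                      # discard burn-in before summarizing
mean0 = sum(x[0] for x in burned) / len(burned)
```

    Running several such chains from dispersed starting points, as the authors do, is a standard convergence diagnostic.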

  12. Precision Parameter Estimation and Machine Learning

    NASA Astrophysics Data System (ADS)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of ``Acceleration by Parallel Precomputation and Learning'' (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines distributed computing with machine learning and Markov chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution or χ2-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  13. A BRDF statistical model applying to space target materials modeling

    NASA Astrophysics Data System (ADS)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    To address the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters that can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, clearly showing the strength of the optical scattering of different materials and demonstrating the refined model's ability to characterize materials.
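    A bare-bones genetic algorithm for parameter inversion of this kind might look like the following sketch; the two-parameter toy model is a hypothetical stand-in for the six-parameter BRDF model, which the abstract does not specify:

```python
import random

def fitness(params, xs, ys, model):
    """Negative squared fitting error: higher is better."""
    return -sum((model(x, params) - y) ** 2 for x, y in zip(xs, ys))

def genetic_search(model, xs, ys, n_params, rng, pop=60, gens=80):
    """Minimal GA: keep an elite quarter, blend-crossover pairs of elites,
    and add small Gaussian mutations."""
    P = [[rng.uniform(-2, 2) for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=lambda p: fitness(p, xs, ys, model), reverse=True)
        elite = scored[: pop // 4]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            children.append([w * ai + (1 - w) * bi + rng.gauss(0, 0.05)
                             for ai, bi in zip(a, b)])
        P = elite + children
    return max(P, key=lambda p: fitness(p, xs, ys, model))

# Hypothetical two-parameter model standing in for a BRDF fit
model = lambda x, p: p[0] * x + p[1] * x * x
rng = random.Random(3)
xs = [i / 10 for i in range(11)]
ys = [model(x, [0.7, -0.3]) for x in xs]       # synthetic "measured" data
best = genetic_search(model, xs, ys, n_params=2, rng=rng)
```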

  14. On the generation of magnetized collisionless shocks in the large plasma device

    NASA Astrophysics Data System (ADS)

    Schaeffer, D. B.; Winske, D.; Larson, D. J.; Cowee, M. M.; Constantin, C. G.; Bondarenko, A. S.; Clark, S. E.; Niemann, C.

    2017-04-01

    Collisionless shocks are common phenomena in space and astrophysical systems, and in many cases, the shocks can be modeled as the result of the expansion of a magnetic piston through a magnetized ambient plasma. Only recently, however, have laser facilities and diagnostic capabilities evolved sufficiently to allow the detailed study in the laboratory of the microphysics of piston-driven shocks. We review experiments on collisionless shocks driven by a laser-produced magnetic piston undertaken with the Phoenix laser laboratory and the Large Plasma Device at the University of California, Los Angeles. The experiments span a large parameter space in laser energy, background magnetic field, and ambient plasma properties that allow us to probe the physics of piston-ambient energy coupling, the launching of magnetosonic solitons, and the formation of subcritical shocks. The results indicate that piston-driven magnetized collisionless shocks in the laboratory can be characterized with a small set of dimensionless formation parameters that place the formation process in an organized and predictive framework.

  15. Not-so-well-tempered neutralino

    NASA Astrophysics Data System (ADS)

    Profumo, Stefano; Stefaniak, Tim; Stephenson-Haskins, Laurel

    2017-09-01

    Light electroweakinos, the neutral and charged fermionic supersymmetric partners of the standard model SU(2)×U(1) gauge bosons and of the two SU(2) Higgs doublets, are an important target for searches for new physics with the Large Hadron Collider (LHC). However, if the lightest neutralino is the dark matter, constraints from direct dark matter detection experiments rule out large swaths of the parameter space accessible to the LHC, including in large part the so-called "well-tempered" neutralinos. We focus on the minimal supersymmetric standard model (MSSM) and explore in detail which regions of parameter space are not excluded by null results from direct dark matter detection, assuming exclusive thermal production of neutralinos in the early universe, and illustrate the complementarity with current and future LHC searches for electroweak gauginos. We consider both bino-Higgsino and bino-wino "not-so-well-tempered" neutralinos, i.e. we include models where the lightest neutralino constitutes only part of the cosmological dark matter, with the consequent suppression of the constraints from direct and indirect dark matter searches.

  16. On the generation of magnetized collisionless shocks in the large plasma device

    DOE PAGES

    Schaeffer, D. B.; Winske, D.; Larson, D. J.; ...

    2017-03-22

    Collisionless shocks are common phenomena in space and astrophysical systems, and in many cases, the shocks can be modeled as the result of the expansion of a magnetic piston through a magnetized ambient plasma. Only recently, however, have laser facilities and diagnostic capabilities evolved sufficiently to allow the detailed study in the laboratory of the microphysics of piston-driven shocks. We review experiments on collisionless shocks driven by a laser-produced magnetic piston undertaken with the Phoenix laser laboratory and the Large Plasma Device at the University of California, Los Angeles. The experiments span a large parameter space in laser energy, background magnetic field, and ambient plasma properties that allow us to probe the physics of piston-ambient energy coupling, the launching of magnetosonic solitons, and the formation of subcritical shocks. Here, the results indicate that piston-driven magnetized collisionless shocks in the laboratory can be characterized with a small set of dimensionless formation parameters that place the formation process in an organized and predictive framework.

  17. Laboratory development and testing of spacecraft diagnostics

    NASA Astrophysics Data System (ADS)

    Amatucci, William; Tejero, Erik; Blackwell, Dave; Walker, Dave; Gatling, George; Enloe, Lon; Gillman, Eric

    2017-10-01

    The Naval Research Laboratory's Space Chamber experiment is a large-scale laboratory device dedicated to the creation of large-volume plasmas with parameters scaled to realistic space plasmas. Such devices make valuable contributions to the investigation of space plasma phenomena under controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. However, in addition to investigations such as plasma wave and instability studies, such devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this talk, we will describe how the laboratory simulation of space plasmas made this development path possible. Work sponsored by the US Naval Research Laboratory Base Program.

  18. Robust constraints and novel gamma-ray signatures of dark matter that interacts strongly with nucleons

    NASA Astrophysics Data System (ADS)

    Hooper, Dan; McDermott, Samuel D.

    2018-06-01

    Due to shielding, direct detection experiments are in some cases insensitive to dark matter candidates with very large scattering cross sections with nucleons. In this paper, we revisit this class of models and derive a simple analytic criterion for conservative but robust direct detection limits. While large spin-independent cross sections seem to be ruled out, we identify potentially viable parameter space for dark matter with a spin-dependent cross section with nucleons in the range of 10⁻²⁷ cm² ≲ σDM-p ≲ 10⁻²⁴ cm². With these parameters, cosmic-ray scattering with dark matter in the extended halo of the Milky Way could generate a novel and distinctive gamma-ray signal at high galactic latitudes. Such a signal could be observable by Fermi or future space-based gamma-ray telescopes.

  19. Probabilistic failure assessment with application to solid rocket motors

    NASA Technical Reports Server (NTRS)

    Jan, Darrell L.; Davidson, Barry D.; Moore, Nicholas R.

    1990-01-01

    A quantitative methodology is being developed for assessment of risk of failure of solid rocket motors. This probabilistic methodology employs best available engineering models and available information in a stochastic framework. The framework accounts for incomplete knowledge of governing parameters, intrinsic variability, and failure model specification error. Earlier case studies have been conducted on several failure modes of the Space Shuttle Main Engine. Work in progress on application of this probabilistic approach to large solid rocket boosters such as the Advanced Solid Rocket Motor for the Space Shuttle is described. Failure due to debonding has been selected as the first case study for large solid rocket motors (SRMs) since it accounts for a significant number of historical SRM failures. Impact of incomplete knowledge of governing parameters and failure model specification errors is expected to be important.
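    The core of such a probabilistic failure assessment is a Monte Carlo estimate of failure probability under parameter uncertainty. A minimal sketch, with an entirely hypothetical capacity-versus-demand limit state (not the actual SRM debonding model), is:

```python
import random

def failure_probability(limit_state, sample_params, n, rng):
    """Crude Monte Carlo estimate of P(failure): draw uncertain parameters
    and count how often the limit-state function goes negative."""
    fails = sum(1 for _ in range(n) if limit_state(sample_params(rng)) < 0.0)
    return fails / n

# Hypothetical bond-strength margin: capacity minus demand, both uncertain
def sample_params(rng):
    capacity = rng.gauss(10.0, 1.0)   # assumed strength distribution
    demand = rng.gauss(7.0, 1.5)      # assumed load distribution
    return capacity, demand

limit_state = lambda p: p[0] - p[1]   # failure when demand exceeds capacity
rng = random.Random(7)
p_fail = failure_probability(limit_state, sample_params, n=100_000, rng=rng)
```

    Model specification error, which the abstract emphasizes, would enter by making the limit-state function itself uncertain.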

  20. Emulating Simulations of Cosmic Dawn for 21 cm Power Spectrum Constraints on Cosmology, Reionization, and X-Ray Heating

    NASA Astrophysics Data System (ADS)

    Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley

    2017-10-01

    Current and upcoming radio interferometric experiments are aiming to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds quick enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on σ₈. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
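    The emulator idea, i.e. training a fast surrogate on a limited number of expensive simulator runs and then querying the surrogate inside an MCMC loop, can be illustrated with a deliberately simple piecewise-linear interpolator; the toy "simulator" below is a hypothetical stand-in, not the 21 cm code, and a real emulator would use e.g. Gaussian processes or neural networks:

```python
import bisect
import math

def expensive_simulator(theta):
    """Stand-in for a costly simulation (hypothetical toy model)."""
    return math.sin(2.0 * theta) + 0.5 * theta

def build_emulator(thetas, outputs):
    """Piecewise-linear interpolation emulator trained on a coarse grid."""
    def emulate(theta):
        i = bisect.bisect_left(thetas, theta)
        i = min(max(i, 1), len(thetas) - 1)
        t0, t1 = thetas[i - 1], thetas[i]
        w = (theta - t0) / (t1 - t0)
        return (1 - w) * outputs[i - 1] + w * outputs[i]
    return emulate

train = [i * 0.1 for i in range(31)]            # 31 "expensive" evaluations
emu = build_emulator(train, [expensive_simulator(t) for t in train])
# The cheap emulator can now replace the simulator inside an MCMC likelihood
err = max(abs(emu(t) - expensive_simulator(t)) for t in [0.05, 1.23, 2.71])
```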

  1. Rapid Computation of Thermodynamic Properties over Multidimensional Nonbonded Parameter Spaces Using Adaptive Multistate Reweighting.

    PubMed

    Naden, Levi N; Shirts, Michael R

    2016-04-12

    We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost of estimating thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. Regions of poor configuration space overlap are detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating, with high precision, the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water. We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states, and we use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
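    The reweighting idea underlying MBAR can be illustrated in its simplest one-reference-state form: samples drawn at one parameter set are reweighted by the Boltzmann factor of the energy difference to estimate an observable at an unsampled parameter set. The harmonic toy system below is illustrative only, not the Lennard-Jones plus Coulomb system of the paper:

```python
import math
import random

def reweight_average(samples, u_ref, u_target, beta, observable):
    """Single-state exponential reweighting (the one-state limit of MBAR):
    estimate <A> at an unsampled parameter set from reference-state samples."""
    logw = [-beta * (u_target(x) - u_ref(x)) for x in samples]
    m = max(logw)
    w = [math.exp(l - m) for l in logw]          # numerically stabilized weights
    wsum = sum(w)
    return sum(wi * observable(xi) for wi, xi in zip(w, samples)) / wsum

# Toy harmonic states: reference spring k=1, unsampled target spring k=1.2
beta = 1.0
u_ref = lambda x: 0.5 * x * x
u_target = lambda x: 0.5 * 1.2 * x * x
rng = random.Random(5)
samples = [rng.gauss(0.0, 1.0) for _ in range(200_000)]  # Boltzmann for u_ref
est = reweight_average(samples, u_ref, u_target, beta, observable=lambda x: x * x)
# Exact <x^2> for the target harmonic state is 1/1.2
```

    The estimator degrades when the two states' configuration spaces overlap poorly, which is exactly what the paper's adaptive sampling scheme guards against.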

  2. On the Singularity Structure of WKB Solution of the Boosted Whittaker Equation: its Relevance to Resurgent Functions with Essential Singularities

    NASA Astrophysics Data System (ADS)

    Kamimoto, Shingo; Kawai, Takahiro; Koike, Tatsuya

    2016-12-01

    Inspired by the symbol calculus of linear differential operators of infinite order applied to the Borel transformed WKB solutions of simple-pole type equation [Kamimoto et al. (RIMS Kôkyûroku Bessatsu B 52:127-146, 2014)], which is summarized in Section 1, we introduce in Section 2 the space of simple resurgent functions depending on a parameter with an infra-exponential type growth order, and then we define the assigning operator A which acts on the space and produces resurgent functions with essential singularities. In Section 3, we apply the operator A to the Borel transforms of the Voros coefficient and its exponentiation for the Whittaker equation with a large parameter so that we may find the Borel transforms of the Voros coefficient and its exponentiation for the boosted Whittaker equation with a large parameter. In Section 4, we use these results to find the explicit form of the alien derivatives of the Borel transformed WKB solutions of the boosted Whittaker equation with a large parameter. The results in this paper manifest the importance of resurgent functions with essential singularities in developing the exact WKB analysis, the WKB analysis based on the resurgent function theory. It is also worth emphasizing that the concrete form of essential singularities we encounter is expressed by the linear differential operators of infinite order.

  3. Adaptive/learning control of large space structures - System identification techniques. [for multi-configuration flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Thau, F. E.; Montgomery, R. C.

    1980-01-01

    Techniques developed for the control of aircraft under changing operating conditions are used to develop a learning control system structure for a multi-configuration, flexible space vehicle. A configuration identification subsystem that is to be used with a learning algorithm and a memory and control process subsystem is developed. Adaptive gain adjustments can be achieved by this learning approach without prestoring of large blocks of parameter data and without dither signal inputs which will be suppressed during operations for which they are not compatible. The Space Shuttle Solar Electric Propulsion (SEP) experiment is used as a sample problem for the testing of adaptive/learning control system algorithms.

  4. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has elaborated spatial descriptions and hydrological behaviors, but this trend is accompanied by increasing model complexity and numbers of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE) has been widely used in uncertainty analysis for hydrological models, combining Monte Carlo sampling with Bayesian estimation. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
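    The baseline GLUE procedure with stochastic prior sampling can be sketched as follows; the linear toy "rainfall-runoff" model, the Nash-Sutcliffe-style likelihood, and the behavioral threshold are illustrative assumptions, not the study's actual models:

```python
import random

def glue_behavioral(simulate, observed, prior_sampler, n, threshold, rng):
    """GLUE with random prior sampling: keep parameter sets whose
    likelihood (here a Nash-Sutcliffe-style score) exceeds a threshold."""
    obs_mean = sum(observed) / len(observed)
    denom = sum((o - obs_mean) ** 2 for o in observed)
    behavioral = []
    for _ in range(n):
        theta = prior_sampler(rng)
        sim = simulate(theta)
        nse = 1.0 - sum((s - o) ** 2 for s, o in zip(sim, observed)) / denom
        if nse > threshold:
            behavioral.append((theta, nse))
    return behavioral

# Hypothetical 2-parameter linear toy model with known "true" parameters
xs = list(range(10))
true_theta = (0.8, 2.0)
simulate = lambda th: [th[0] * x + th[1] for x in xs]
observed = simulate(true_theta)
prior = lambda rng: (rng.uniform(0, 2), rng.uniform(0, 4))
rng = random.Random(11)
kept = glue_behavioral(simulate, observed, prior, n=5000, threshold=0.9, rng=rng)
```

    The study's improvement replaces the uniform random draws above with candidates generated by evolutionary optimizers, concentrating samples in high-likelihood regions.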

  5. Proceedings of the Workshop on Identification and Control of Flexible Space Structures, Volume 2

    NASA Technical Reports Server (NTRS)

    Rodriguez, G. (Editor)

    1985-01-01

    The results of a workshop on identification and control of flexible space structures held in San Diego, CA, July 4 to 6, 1984 are discussed. The main objectives of the workshop were to provide a forum for exchanging ideas on applying the most advanced modeling, estimation, identification, and control methodologies to flexible space structures. The workshop responded to the rapidly growing interest within NASA in large space systems (space station, platforms, antennas, flight experiments) currently under design. Dynamic structural analysis, control theory, structural vibration and stability, and distributed parameter systems are discussed.

  6. Population Synthesis of Radio and γ-ray Millisecond Pulsars Using Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Gonthier, Peter L.; Billman, C.; Harding, A. K.

    2013-04-01

    We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and γ-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of ten radio surveys and by Fermi, predicting the MSP birth rate in the Galaxy. We follow a similar set of assumptions that we have used in previous, more constrained Monte Carlo simulations. The parameters associated with the birth distributions such as those for the accretion rate, magnetic field and period distributions are also free to vary. With the large set of free parameters, we employ Markov Chain Monte Carlo simulations to explore the large and small worlds of the parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and γ-ray pulsar characteristics. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.

  7. Systematic simulations of modified gravity: chameleon models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation that describes a large class of models with only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  8. Dynamics of large-scale brain activity in normal arousal states and epileptic seizures

    NASA Astrophysics Data System (ADS)

    Robinson, P. A.; Rennie, C. J.; Rowe, D. L.

    2002-04-01

    Links between electroencephalograms (EEGs) and underlying aspects of neurophysiology and anatomy are poorly understood. Here a nonlinear continuum model of large-scale brain electrical activity is used to analyze arousal states and their stability and nonlinear dynamics for physiologically realistic parameters. A simple ordered arousal sequence in a reduced parameter space is inferred and found to be consistent with experimentally determined parameters of waking states. Instabilities arise at spectral peaks of the major clinically observed EEG rhythms (mainly slow wave, delta, theta, alpha, and sleep spindle), with each instability zone lying near its most common experimental precursor arousal states in the reduced space. Theta, alpha, and spindle instabilities evolve toward low-dimensional nonlinear limit cycles that correspond closely to EEGs of petit mal seizures for theta instability, and grand mal seizures for the other types. Nonlinear stimulus-induced entrainment and seizures are also seen, EEG spectra and potentials evoked by stimuli are reproduced, and numerous other points of experimental agreement are found. Inverse modeling enables physiological parameters underlying observed EEGs to be determined by a new, noninvasive route. This model thus provides a single, powerful framework for quantitative understanding of a wide variety of brain phenomena.

  9. A Tool for Parameter-space Explorations

    NASA Astrophysics Data System (ADS)

    Murase, Yohsuke; Uchitane, Takeshi; Ito, Nobuyasu

    A software framework for managing simulation jobs and results, named "OACIS", is presented. It controls a large number of simulation jobs executed on various remote servers, keeps the results in an organized way, and manages the analyses on these results. The software has a web browser front end, and users can easily submit various jobs to appropriate remote hosts from a web browser. After these jobs are finished, all the result files are automatically downloaded from the computational hosts and stored in a traceable way together with logs of the date, host, and elapsed time of the jobs. Some visualization functions are also provided so that users can easily grasp an overview of results distributed in a high-dimensional parameter space. Thus, OACIS is especially beneficial for complex simulation models with many parameters, for which extensive parameter searches are required. Using the OACIS API, it is easy to write code that automates parameter selection depending on the previous simulation results. A few examples of such automated parameter selection are also demonstrated.
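    An automated parameter-selection loop of the kind the abstract describes might be structured as below. Note that the function names here are hypothetical and do not reflect the real OACIS API; in practice run_simulation would submit a job to a remote host through OACIS and wait for its result:

```python
def automated_search(run_simulation, candidates, n_rounds):
    """Generic greedy loop: evaluate a candidate, then pick the next
    untried parameter closest to the best result seen so far."""
    best_param, best_score = None, float("-inf")
    history = []
    for _ in range(n_rounds):
        tried = [h[0] for h in history]
        pool = [c for c in candidates if c not in tried]
        if not pool:
            break
        if best_param is None:
            nxt = pool[0]
        else:
            nxt = min(pool, key=lambda c: abs(c - best_param))
        score = run_simulation(nxt)      # would be a remote job via OACIS
        history.append((nxt, score))
        if score > best_score:
            best_param, best_score = nxt, score
    return best_param, history

# Stub "simulation": peaked response at parameter value 0.6
run_simulation = lambda p: -(p - 0.6) ** 2
cands = [i / 10 for i in range(11)]
best, hist = automated_search(run_simulation, cands, n_rounds=11)
```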

  10. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to a factor of 1000 for the MRF-fast imaging with steady-state precession sequence and more than a factor of 15 for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
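    The randomized-SVD idea of compressing a large dictionary into a low-rank factorization can be sketched in pure Python; the tiny rank-2 test matrix is illustrative only, and a real MRF dictionary would of course be handled with optimized linear algebra libraries:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def gram_schmidt(cols):
    """Orthonormalize a list of vectors (classical Gram-Schmidt)."""
    Q = []
    for v in cols:
        w = v[:]
        for q in Q:
            dot = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - dot * qi for wi, qi in zip(w, q)]
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > 1e-12:
            Q.append([wi / norm for wi in w])
    return Q

def randomized_lowrank(A, k, rng):
    """Randomized range finder: sample the range of A with k random
    directions, orthonormalize, and form the rank-k approximation Q(Q^T A)."""
    n = len(A[0])
    Omega = [[rng.gauss(0, 1) for _ in range(k)] for _ in range(n)]
    Y = matmul(A, Omega)                           # m x k sketch of the range
    Q = gram_schmidt([list(c) for c in zip(*Y)])   # k orthonormal basis vectors
    Qm = transpose(Q)                              # m x k
    B = matmul(transpose(Qm), A)                   # k x n compressed "dictionary"
    return matmul(Qm, B)                           # rank-k reconstruction

# Exactly rank-2 test matrix built from two outer products
rng = random.Random(2)
u1 = [rng.gauss(0, 1) for _ in range(8)]
u2 = [rng.gauss(0, 1) for _ in range(8)]
v1 = [rng.gauss(0, 1) for _ in range(6)]
v2 = [rng.gauss(0, 1) for _ in range(6)]
A = [[u1[i] * v1[j] + u2[i] * v2[j] for j in range(6)] for i in range(8)]
A2 = randomized_lowrank(A, k=2, rng=rng)
err = max(abs(A[i][j] - A2[i][j]) for i in range(8) for j in range(6))
```

    Only Q and B need to be stored, which is the source of the memory savings when the dictionary has many more rows than its effective rank.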

  11. Application of physical parameter identification to finite-element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1987-01-01

    The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electromagnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electromotive force (EMF) effects. Uncertainties in both estimated physical parameters and modal behavior variables are given.

  12. Underexpanded Screeching Jets From Circular, Rectangular, and Elliptic Nozzles

    NASA Technical Reports Server (NTRS)

    Panda, J.; Raman, G.; Zaman, K. B. M. Q.

    2004-01-01

    The screech frequency and amplitude, the shock spacing, the hydrodynamic-acoustic standing wave spacing, and the convective velocity of large organized structures are measured in the nominal Mach number range of 1.1 ≤ Mj ≤ 1.9 for supersonic, underexpanded jets exhausting from a circular, a rectangular, and an elliptic nozzle. This provides a carefully measured data set useful in comparing the importance of various physical parameters in the screech generation process. The hydrodynamic-acoustic standing wave is formed between the potential pressure field of large turbulent structures and the acoustic pressure field of the screech sound. It has been demonstrated earlier that, in the currently available screech frequency prediction models, replacement of the shock spacing by the standing wave spacing provides an exact expression. In view of this newly found evidence, a comparison is made between the average standing wavelength and the average shock spacing. It is found that there exists a small, yet important, difference, which is dependent on the azimuthal screech mode. For example, in the flapping modes of circular, rectangular, and elliptic jets, the standing wavelength is slightly longer than the shock spacing, while for the helical screech mode in a circular jet the opposite is true. This difference accounts for the departure of the existing models from predicting the exact screech frequency. Another important parameter, necessary in screech prediction, is the convective velocity of the large organized structures. It is demonstrated that the presence of the hydrodynamic-acoustic standing wave, even inside the jet shear layer, becomes a significant source of error in the convective velocity data obtained using conventional methods. However, a new relationship using the standing wavelength and screech frequency is shown to provide more accurate results.

  13. Evaluation of powertrain solutions for future tactical truck vehicle systems

    NASA Astrophysics Data System (ADS)

    Pisu, Pierluigi; Cantemir, Codrin-Gruie; Dembski, Nicholas; Rizzoni, Giorgio; Serrao, Lorenzo; Josephson, John R.; Russell, James

    2006-05-01

    The article presents the results of a large-scale design space exploration for the hybridization of two off-road vehicles, part of the Future Tactical Truck System (FTTS) family: the Maneuver Sustainment Vehicle (MSV) and the Utility Vehicle (UV). Series hybrid architectures are examined. The objective of the paper is to illustrate a novel design methodology that allows for the choice of optimal values for several vehicle parameters. The methodology consists of an extensive design space exploration, which involves running a large number of computer simulations with systematically varied vehicle design parameters, where each variant is run through several different mission profiles and multiple attributes of performance are measured. The resulting designs are filtered to choose the design tradeoffs that best satisfy the performance and fuel economy requirements. In the end, a few promising vehicle configurations are selected for additional detailed investigation, including previously neglected metrics such as ride and drivability. Several powertrain architectures have been simulated. The design parameters include the number of axles in the vehicle (2 or 3), the number of electric motors per axle (1 or 2), the type of internal combustion engine, and the type and quantity of energy storage devices (batteries, electrochemical capacitors, or both together). An energy management control strategy has also been developed to provide efficiency and performance. The control parameters are tunable and have been included in the design space exploration. The results show that the internal combustion engine and the energy storage devices are extremely important for vehicle performance.
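The sweep-then-filter workflow described above can be sketched in a few lines. The parameter grids, the requirement thresholds, and the surrogate "simulation" below are illustrative assumptions, not the authors' actual FTTS vehicle models.

```python
from itertools import product

def simulate(design, mission):
    """Toy surrogate returning (fuel_use, acceleration_margin) for one mission."""
    axles, motors, engine_kw, storage = design
    power = axles * motors * 100 + engine_kw          # crude total power [kW]
    fuel = 1000.0 / (engine_kw + 0.2 * power) * mission["load"]
    margin = power / mission["load"] - 1.0            # >0 means requirement met
    return fuel, margin

# Full-factorial design space: every combination of the discrete choices.
designs = product((2, 3),                 # number of axles
                  (1, 2),                 # electric motors per axle
                  (200, 300),             # engine rated power [kW]
                  ("battery", "ultracap", "hybrid"))
missions = [{"load": 250}, {"load": 400}] # two mission profiles

# Filter: keep only designs meeting performance on every mission profile,
# then rank the survivors by worst-case fuel use.
feasible = []
for d in designs:
    results = [simulate(d, m) for m in missions]
    if all(margin > 0 for _, margin in results):
        worst_fuel = max(fuel for fuel, _ in results)
        feasible.append((worst_fuel, d))

feasible.sort()  # best (lowest) worst-case fuel use first
```

The same structure scales to the real problem by swapping the surrogate for a full powertrain simulation and adding the tunable control-strategy parameters to the grid.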

  14. Parameter estimation methods for gene circuit modeling from time-series mRNA data: a comparative study.

    PubMed

    Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin

    2015-11-01

    Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to a level on par with the best solution obtained from the population-based methods, while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the parameter search space vast. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
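The hybrid strategy described above (a cheap global stage followed by local refinement) can be illustrated on a toy problem. This is a sketch under stated assumptions, not the paper's code: a one-gene decay model m(t) = a·e^(−bt) fit to synthetic time-series data, with random sampling standing in for a population-based method and coordinate descent as the local refinement.

```python
import math
import random

random.seed(0)
a_true, b_true = 5.0, 0.8
ts = [0.5 * i for i in range(10)]
data = [a_true * math.exp(-b_true * t) for t in ts]   # synthetic mRNA series

def sse(a, b):
    """Sum of squared errors of the decay model against the data."""
    return sum((a * math.exp(-b * t) - y) ** 2 for t, y in zip(ts, data))

# Stage 1: global, population-style search (pure random sampling here).
pop = [(random.uniform(0, 10), random.uniform(0, 2)) for _ in range(500)]
a0, b0 = min(pop, key=lambda p: sse(*p))

# Stage 2: local refinement by coordinate descent with shrinking steps.
a, b, step = a0, b0, 0.1
for _ in range(300):
    improved = False
    for da, db in ((step, 0), (-step, 0), (0, step), (0, -step)):
        if sse(a + da, b + db) < sse(a, b):
            a, b = a + da, b + db
            improved = True
    if not improved:
        step /= 2    # refine once no single-coordinate move helps
```

The refinement stage only ever accepts strict improvements, so the final estimate is never worse than the global stage's best candidate.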

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seljak, Uroš, E-mail: useljak@berkeley.edu

    On large scales a nonlinear transformation of the matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale dependent bias on large scales. We derive analytic expressions for the large scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale dependent bias from primordial nongaussianity, for a general nonlinear transformation. These biases can be expressed entirely in terms of the one point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large scale bias different from unity and the primordial nongaussianity bias as a consequence of converting higher order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large scale clustering limit. We apply the results to the ionizing equilibrium model of the Lyman-α forest, in which the Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. Velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict a velocity bias of −0.1, compared to the observed value of −0.13 ± 0.03. Bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations. Matching to the observed values, we predict the ratio of primordial nongaussianity bias to bias to have the opposite sign and lower magnitude than the corresponding values for highly biased galaxies, but this depends on the model parameters and can also vanish or change sign.

  16. Impact of Ice Morphology on Design Space of Pharmaceutical Freeze-Drying.

    PubMed

    Goshima, Hiroshika; Do, Gabsoo; Nakagawa, Kyuya

    2016-06-01

    It is known that the sublimation kinetics of a freeze-dried product are affected by its internal ice crystal microstructure. This article demonstrates the impact of the ice morphologies of a frozen formulation in a vial on the design space for the primary drying of a pharmaceutical freeze-drying process. Cross-sectional images of frozen sucrose-bovine serum albumin aqueous solutions were optically observed and digital pictures were acquired. Binary images were obtained from the optical data to extract the geometrical parameters (i.e., ice crystal size and tortuosity) that relate to the mass-transfer resistance of water vapor during the primary drying step. A mathematical model was used to simulate the primary drying kinetics and provided the design space for the process. The simulation results predicted that the geometrical parameters of frozen solutions significantly affect the design space, with large and less tortuous ice morphologies resulting in wide design spaces and vice versa. The optimal applicable drying conditions are influenced by the ice morphologies. Therefore, owing to the spatial distributions of the geometrical parameters of a product, the boundary curves of the design space are variable and could be tuned by controlling the ice morphologies. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  17. Image degradation characteristics and restoration based on regularization for diffractive imaging

    NASA Astrophysics Data System (ADS)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development direction for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods remain relatively unexplored. In this paper, a model of image quality degradation for the diffractive imaging system is first deduced mathematically from diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. A solution approach for the resulting equation, which combines multiple norms and multiple regularization (prior) parameters, is then presented. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, using a block-wise treatment based on isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, producing satisfactory visual quality. This provides a scientific basis for future space applications of diffractive membrane imaging technology.

  18. Cosmology and accelerator tests of strongly interacting dark matter

    DOE PAGES

    Berlin, Asher; Blinov, Nikita; Gori, Stefania; ...

    2018-03-23

    A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. We also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.

  19. Cosmology and accelerator tests of strongly interacting dark matter

    NASA Astrophysics Data System (ADS)

    Berlin, Asher; Blinov, Nikita; Gori, Stefania; Schuster, Philip; Toro, Natalia

    2018-03-01

    A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. We also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.

  20. Cosmology and accelerator tests of strongly interacting dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlin, Asher; Blinov, Nikita; Gori, Stefania

    A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. We also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.

  1. Robust Constraints and Novel Gamma-Ray Signatures of Dark Matter That Interacts Strongly With Nucleons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooper, Dan; McDermott, Samuel D.

    Due to shielding, direct detection experiments are in some cases insensitive to dark matter candidates with very large scattering cross sections with nucleons. In this paper, we revisit this class of models, and derive a simple analytic criterion for conservative but robust direct detection limits. While large spin-independent cross sections seem to be ruled out, we identify potentially viable parameter space for dark matter with a spin-dependent cross section with nucleons in the range of $10^{-27}\,\mathrm{cm}^2 < \sigma_{\mathrm{DM}-p} < 10^{-24}\,\mathrm{cm}^2$. With these parameters, cosmic-ray scattering with dark matter in the extended halo of the Milky Way could generate a novel and distinctive gamma-ray signal at high galactic latitudes. Such a signal could be observable by Fermi or future space-based gamma-ray telescopes.

  2. Modeling space-time correlations of velocity fluctuations in wind farms

    NASA Astrophysics Data System (ADS)

    Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael

    2018-07-01

    An analytical model for the streamwise velocity space-time correlations in turbulent flows is derived and applied to the special case of velocity fluctuations in large wind farms. The model is based on the Kraichnan-Tennekes random sweeping hypothesis, capturing the decorrelation in time while including a mean wind velocity in the streamwise direction. In the resulting model, the streamwise velocity space-time correlation is expressed as a convolution of the pure space correlation with an analytical temporal decorrelation kernel. Hence, the spatio-temporal structure of velocity fluctuations in wind farms can be derived from the spatial correlations only. We then explore the applicability of the model to predict spatio-temporal correlations in turbulent flows in wind farms. Comparisons of the model with data from a large eddy simulation of flow in a large, spatially periodic wind farm are performed, where needed model parameters such as spatial and temporal integral scales and spatial correlations are determined from the large eddy simulation. Good agreement is obtained between the model and large eddy simulation data showing that spatial data may be used to model the full temporal structure of fluctuations in wind farms.
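The model structure described above (a pure space correlation convolved with a temporal decorrelation kernel and advected by the mean wind) can be sketched numerically. This is a minimal illustration under assumed forms, not the paper's exact formulation: the spatial correlation and the sweeping kernel are both taken as Gaussians, with made-up values for the integral scale, mean velocity, and sweeping velocity.

```python
import numpy as np

L = 1.0        # spatial integral scale (assumed)
U = 10.0       # mean streamwise velocity (assumed)
v_rms = 2.0    # rms sweeping velocity, sets the kernel width (assumed)

def corr_space(r):
    """Model spatial correlation; a simple Gaussian for illustration."""
    return np.exp(-r**2 / (2 * L**2))

def corr_spacetime(r, tau):
    """Space-time correlation: space correlation convolved with the
    Gaussian sweeping kernel, shifted by the mean advection U*tau."""
    if tau == 0.0:
        return float(corr_space(r))        # kernel degenerates to a delta
    s = np.linspace(-6, 6, 2001) * (v_rms * tau)
    ds = s[1] - s[0]
    kernel = np.exp(-s**2 / (2 * (v_rms * tau)**2))
    kernel /= kernel.sum() * ds            # normalize to unit area
    return float(np.sum(corr_space(r - U * tau - s) * kernel) * ds)
```

As in the model, the correlation peak travels downstream at the mean velocity while the sweeping kernel slowly erodes its amplitude, so the full space-time structure follows from spatial information alone.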

  3. Singlet-catalyzed electroweak phase transitions and precision Higgs boson studies

    NASA Astrophysics Data System (ADS)

    Profumo, Stefano; Ramsey-Musolf, Michael J.; Wainwright, Carroll L.; Winslow, Peter

    2015-02-01

    We update the phenomenology of gauge-singlet extensions of the Standard Model scalar sector and their implications for the electroweak phase transition. Considering the introduction of one real scalar singlet to the scalar potential, we analyze present constraints on the potential parameters from Higgs coupling measurements at the Large Hadron Collider (LHC) and electroweak precision observables for the kinematic regime in which no new scalar decay modes arise. We then show how future precision measurements of Higgs boson signal strengths and the Higgs self-coupling could probe the scalar potential parameter space associated with a strong first-order electroweak phase transition. We illustrate using benchmark precision for several future collider options, including the high-luminosity LHC, the International Linear Collider, Triple-Large Electron-Positron collider, the China Electron-Positron Collider, and a 100 TeV proton-proton collider, such as the Very High Energy LHC or the Super Proton-Proton Collider. For the regions of parameter space leading to a strong first-order electroweak phase transition, we find that there exists considerable potential for observable deviations from purely Standard Model Higgs properties at these prospective future colliders.

  4. Estimation of primordial spectrum with post-WMAP 3-year data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Souradeep, Tarun

    2008-07-15

    In this paper we implement an improved (error-sensitive) Richardson-Lucy deconvolution algorithm on the angular power spectrum measured from the Wilkinson Microwave Anisotropy Probe (WMAP) 3-year data to determine the primordial power spectrum, assuming different points in the cosmological parameter space for a flat ΛCDM cosmological model. We also present preliminary results of cosmological parameter estimation assuming a free form of the primordial spectrum, for a reasonably large volume of the parameter space. The recovered spectrum for a considerably large number of points in the cosmological parameter space has a likelihood far better than a "best fit" power-law spectrum, up to Δχ²_eff ≈ −30. We use the discrete wavelet transform (DWT) for smoothing the raw recovered spectrum from the binned data. The results obtained here reconfirm and sharpen the conclusions drawn from our previous analysis of the WMAP first-year data. A sharp cutoff around the horizon scale and a bump after the horizon scale seem to be a common feature of all of these reconstructed primordial spectra. We have shown that although the WMAP 3-year data prefer a lower value of matter density for a power-law form of the primordial spectrum, for a free form of the spectrum we can get a very good likelihood to the data for higher values of matter density. We have also shown that even a flat cold dark matter model, allowing a free form of the primordial spectrum, can give a very high likelihood fit to the data. Theoretical interpretation of the results is open to the cosmology community. However, this work provides strong evidence that the data retain discriminatory power in the cosmological parameter space even when there is full freedom in choosing the primordial spectrum.
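The core of the procedure is the Richardson-Lucy iteration for a positive linear kernel. The sketch below shows the generic multiplicative update on a toy problem; the Gaussian blur kernel and the "primordial" spectrum are assumptions for illustration, not the actual CMB radiative transport kernel or WMAP data.

```python
import numpy as np

n = 80
x = np.arange(n)

# Toy positive kernel G[l, k] (a Gaussian blur), rows normalized.
G = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)
G /= G.sum(axis=1, keepdims=True)

# Toy "primordial" spectrum: flat plus a bump; "observed" spectrum c = G p.
p_true = 1.0 + 0.5 * np.exp(-0.5 * ((x - 40) / 5.0) ** 2)
c_obs = G @ p_true

# Richardson-Lucy: start from a flat guess and iterate the
# multiplicative update p <- p * G^T(c_obs / Gp) / G^T(1).
p = np.ones(n)
col_norm = G.sum(axis=0)
for _ in range(500):
    c_model = G @ p
    p *= (G.T @ (c_obs / c_model)) / col_norm
```

The update preserves positivity by construction, which is one reason the method suits power-spectrum deconvolution.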

  5. Resonant tidal excitation of planetary atmospheres and an explanation for the jets on Jupiter and Saturn

    NASA Astrophysics Data System (ADS)

    Tyler, R.

    2017-12-01

    Resonant tidal excitation of an atmosphere arises in predictable situations where there is a match in form and frequency between the tidal forces and the atmosphere's eigenmodes of oscillation. The resonant response is typically several orders of magnitude more energetic than in non-resonant configurations involving only slight differences in parameters, and the behavior can be quite different because different oscillation modes are favored in each. The work presented first provides a generic description of these resonant states by demonstrating the behavior of solutions within the very large parameter space of potential scenarios. This generic description of the range of atmospheric tidal response scenarios is further used to create a taxonomy for organizing and understanding various tidally driven dynamic regimes. The resonances are easily identified by associated peaks in the power. But because these peaks may be relatively narrow, millions of solutions can be required to complete the description of the solution's dependence over the range of parameter values. (Construction of these large solution spaces is performed using a fast, semi-analytical method that solves the forced, dissipative Laplace Tidal Equations subject to the constraint of dynamical consistency, through a separation constant, with solutions describing the vertical structure.) Filling in the solution space in this way is used not only to locate the parameter coordinates of resonant scenarios but also to study allowed migration paths through this space. It is suggested that resonant scenarios do not arise through happenstance, but rather because secular variations in parameters move the configuration into the resonant scenario, with associated feedbacks either accelerating or halting the configuration's migration. These results are then used to show strong support for the hypothesis by R. Lindzen that the regular banding (belts/zones/jets) on Jupiter and Saturn is driven by tides. The results also provide important, though less specific, support for a second hypothesis that the inflated atmospheres inferred for a number of giant extra-solar planets are due to thermal or gravitational tides.

  6. Exploring the hyperchargeless Higgs triplet model up to the Planck scale

    NASA Astrophysics Data System (ADS)

    Khan, Najimuddin

    2018-04-01

    We examine an extension of the SM Higgs sector by a Higgs triplet taking into consideration the discovery of a Higgs-like particle at the LHC with mass around 125 GeV. We evaluate the bounds on the scalar potential through the unitarity of the scattering matrix. Considering the cases with and without Z_2-symmetry of the extra triplet, we derive constraints on the parameter space. We identify the region of the parameter space that corresponds to the stability and metastability of the electroweak vacuum. We also show that at large field values the scalar potential of this model is suitable to explain inflation.

  7. Curvature perturbation and waterfall dynamics in hybrid inflation

    NASA Astrophysics Data System (ADS)

    Akbar Abolhasani, Ali; Firouzjahi, Hassan; Sasaki, Misao

    2011-10-01

    We investigate the parameter space of the hybrid inflation model with special attention paid to the dynamics of the waterfall field and the curvature perturbations induced by its quantum fluctuations. Depending on the inflaton field value at the time of the phase transition and the sharpness of the phase transition, inflation can have multiple extended stages. We find that for models with a mild phase transition the induced curvature perturbation from the waterfall field is too large to satisfy the COBE normalization. We investigate the region of model parameter space where the curvature perturbations from the waterfall quantum fluctuations vary between the results of standard hybrid inflation and the results obtained here.

  8. Industrial laser welding evaluation study

    NASA Technical Reports Server (NTRS)

    Hella, R.; Locke, E.; Ream, S.

    1974-01-01

    High power laser welding was evaluated for fabricating space vehicle boosters. This evaluation was made for 1/4 in. and 1/2 in. aluminum (2219) and 1/4 in. and 1/2 in. D6AC steel. The Avco HPL 10 kW industrial laser was used to perform the evaluation. The objective has been achieved through the completion of the following technical tasks: (1) parameter study to optimize welding and material parameters; (2) preparation of welded panels for MSFC evaluation; and (3) demonstration of the repeatability of laser welding equipment. In addition, the design concept for a laser welding system capable of welding large space vehicle boosters has been developed.

  9. The photo-philic QCD axion

    DOE PAGES

    Farina, Marco; Pappadopulo, Duccio; Rompineve, Fabrizio; ...

    2017-01-23

    Here, we propose a framework in which the QCD axion has an exponentially large coupling to photons, relying on the “clockwork” mechanism. We discuss the impact of present and future axion experiments on the parameter space of the model. In addition to the axion, the model predicts a large number of pseudoscalars which can be light and observable at the LHC. In the most favorable scenario, axion Dark Matter will give a signal in multiple axion detection experiments and the pseudo-scalars will be discovered at the LHC, allowing us to determine most of the parameters of the model.

  10. Experimental evaluation and basis function optimization of the spatially variant image-space PSF on the Ingenuity PET/MR scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva

    2014-06-15

    Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm), while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as an MR compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall the PET subsystem exhibits excellent resolution characteristics, mainly due to the fact that the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and with basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.

  11. Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.

    PubMed

    Cabrera, M E; Casas, J A; Delgado, A

    2012-01-13

    The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space, using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles; therefore an upper bound on the Higgs mass translates into an upper bound on the masses of superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, large regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tan β, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.

  12. Analysis and trade-off studies of large lightweight mirror structures. [large space telescope

    NASA Technical Reports Server (NTRS)

    Soosaar, K.; Grin, R.; Ayer, F.

    1975-01-01

    A candidate mirror, hexagonally lightweighted, is analyzed under various loadings using as complete a procedure as possible. Successive simplifications are introduced and compared to an original analysis. A model which is a reasonable compromise between accuracy and cost is found and is used for making trade-off studies of the various structural parameters of the lightweighted mirror.

  13. Scale Effects on Magnet Systems of Heliotron-Type Reactors

    NASA Astrophysics Data System (ADS)

    Imagawa, S.; Sagara, A.

    2005-02-01

    Heliotron-type reactors have attractive advantages for power plants, such as no current disruptions, no current drive, and wide space between the helical coils for the maintenance of in-vessel components. One disadvantage, however, is that the major radius has to be large enough to obtain a large Q-value or to provide sufficient space for blankets. Although a larger radius is expected to increase the construction cost, its influence has not yet been clearly understood. Scale effects on superconducting magnet systems have been estimated under the conditions of a constant energy confinement time and similar geometrical parameters. Since the necessary magnetic field becomes lower at a larger major radius, the weight of the coil support increases with the major radius at a rate less than the square root. The necessary major radius will be determined mainly by the blanket space. The appropriate major radius will be around 13 m for a reactor similar to the Large Helical Device (LHD).

  14. Population Coding of Visual Space: Modeling

    PubMed Central

    Lehky, Sidney R.; Sereno, Anne B.

    2011-01-01

    We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
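The unlabeled-population analysis described above can be sketched compactly: Gaussian receptive fields respond to stimulus positions, and classical multidimensional scaling recovers the relative spatial layout from firing rates alone, never seeing the RF centers. All numbers below (stimulus range, RF count, width) are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
stims = np.linspace(-5, 5, 21)          # 1D stimulus positions
centers = rng.uniform(-8, 8, 200)        # RF centers (never given to MDS)
width = 5.0                              # large, overlapping RFs

# rates[i, j]: response of neuron j to stimulus i.
rates = np.exp(-0.5 * ((stims[:, None] - centers[None, :]) / width) ** 2)

# Pairwise squared distances between population response vectors.
d2 = ((rates[:, None, :] - rates[None, :, :]) ** 2).sum(-1)

# Classical MDS: double-center the squared distances, eigendecompose,
# and take the leading dimension as the recovered spatial axis.
n = len(stims)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ d2 @ J
vals, vecs = np.linalg.eigh(B)           # ascending eigenvalues
coords = vecs[:, -1] * np.sqrt(vals[-1]) # leading 1D embedding
```

Because the distances carry no RF labels, the embedding is intrinsic: it recovers relative positions up to sign and scale, which is exactly the sense of "intrinsic coding" used above.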

  15. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information]

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
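    The linear cost scaling that makes MOAT attractive (a handful of trajectories, each costing N+1 model runs, versus exponential growth for a grid) can be sketched in a few lines. The toy model, grid levels, and step size below are illustrative assumptions, not CAM settings:

    ```python
    import random
    import statistics

    def morris_screening(f, n_params, n_traj=20, delta=0.5, seed=0):
        """Morris one-at-a-time (MOAT) elementary-effects screening.

        Cost is n_traj * (n_params + 1) model runs -- linear in the number
        of parameters N, unlike grid sampling, which grows exponentially.
        Parameters are assumed scaled to the unit hypercube [0, 1]^N.
        """
        rng = random.Random(seed)
        effects = [[] for _ in range(n_params)]
        for _ in range(n_traj):
            # random base point on a coarse grid, leaving room for +delta
            x = [rng.choice([0.0, 0.25, 0.5]) for _ in range(n_params)]
            fx = f(x)
            # one-at-a-time perturbations, visited in random order
            for i in rng.sample(range(n_params), n_params):
                x[i] += delta
                fx_next = f(x)
                effects[i].append((fx_next - fx) / delta)
                fx = fx_next
        # mu* (mean absolute effect) ranks overall influence;
        # sigma flags nonlinearity and parameter interactions
        mu_star = [statistics.mean(abs(e) for e in es) for es in effects]
        sigma = [statistics.pstdev(es) for es in effects]
        return mu_star, sigma

    # toy model: x0 strong and linear, x1 nonlinear, x2 nearly inert
    model = lambda x: 10 * x[0] + 5 * x[1] ** 2 + 0.01 * x[2]
    mu_star, sigma = morris_screening(model, 3)
    ```

    Ranking by mu_star would prune x2, while a purely local EOAT sweep at a single base point could understate the x1 sensitivity that sigma exposes here.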

  16. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Abstract Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  17. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  18. Trends in Array Antenna Research,

    DTIC Science & Technology

    1977-06-01

    design, because it is possible to record this single mode parameter and still account for all of the subtleties that occur at the array face. 2.5...waveguide field, but did properly account for the full spatial harmonic series (grating lobe series) in the free space half space. Some earlier...described some approximate procedures to account for coupling in large arrays where the numerical evaluation of all the higher order terms would

  19. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  20. pypet: A Python Toolkit for Data Management of Parameter Explorations

    PubMed Central

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080

  1. pypet: A Python Toolkit for Data Management of Parameter Explorations.

    PubMed

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.
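    As a concrete sketch of the workflow described above (parameters and results collected in one HDF5 file, exploration of a parameter grid), the following mirrors pypet's basic documented usage pattern; exact method names may differ slightly between toolkit versions, so treat this as an approximation rather than version-exact code:

    ```python
    import os
    import tempfile

    from pypet import Environment, cartesian_product

    def multiply(traj):
        # each run sees exactly one point of the explored parameter space
        traj.f_add_result('z', traj.x * traj.y)

    # all parameters and results end up in a single HDF5 file
    env = Environment(trajectory='example',
                      filename=os.path.join(tempfile.mkdtemp(), 'example.hdf5'))
    traj = env.trajectory
    traj.f_add_parameter('x', 1.0)
    traj.f_add_parameter('y', 1.0)
    # here a full 2x2 Cartesian grid; arbitrary trajectories are possible too
    traj.f_explore(cartesian_product({'x': [1.0, 2.0], 'y': [3.0, 4.0]}))
    env.run(multiply)
    ```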

  2. Testing general relativity's no-hair theorem with x-ray observations of black holes

    NASA Astrophysics Data System (ADS)

    Hoormann, Janie K.; Beheshtipour, Banafsheh; Krawczynski, Henric

    2016-02-01

    Despite its success in the weak gravity regime, general relativity (GR) has yet to be verified in the regime of strong gravity. In this paper, we present the results of detailed ray-tracing simulations aiming at clarifying if the combined information from x-ray spectroscopy, timing, and polarization observations of stellar mass and supermassive black holes can be used to test GR's no-hair theorem. The latter states that stationary astrophysical black holes are described by the Kerr family of metrics, with the black hole mass and spin being the only free parameters. We use four "non-Kerr metrics," some phenomenological in nature and others motivated by alternative theories of gravity, and study the observational signatures of deviations from the Kerr metric. Particular attention is given to the case when all the metrics are set to give the same innermost stable circular orbit in quasi-Boyer-Lindquist coordinates. We give a detailed discussion of similarities and differences of the observational signatures predicted for black holes in the Kerr metric and the non-Kerr metrics. We emphasize that even though some regions of the parameter space are nearly degenerate even when combining the information from all observational channels, x-ray observations of very rapidly spinning black holes can be used to exclude large regions of the parameter space of the alternative metrics. Although it proves difficult to distinguish between the Kerr and non-Kerr metrics for some portions of the parameter space, the observations of very rapidly spinning black holes like Cyg X-1 can be used to rule out large regions for several black hole metrics.

  3. A transformed path integral approach for solution of the Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Subramaniam, Gnana M.; Vedula, Prakash

    2017-10-01

    A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using the Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
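    The error bound mentioned above follows from Chebyshev's inequality, P(|X − μ| ≥ kσ) ≤ 1/k²: a computational domain spanning k = 1/√ε standard deviations either side of the mean misses at most ε of the probability mass, for any finite-variance distribution. A minimal illustration (the function name is ours, not the paper's):

    ```python
    def chebyshev_halfwidth(sigma, eps):
        """Half-width k*sigma of a symmetric domain about the mean that,
        by Chebyshev's inequality P(|X - mu| >= k*sigma) <= 1/k**2,
        captures all but eps of the probability mass of *any*
        distribution with standard deviation sigma."""
        k = (1.0 / eps) ** 0.5
        return k * sigma

    # to lose at most 1% of the mass, span 10 standard deviations each side
    halfwidth = chebyshev_halfwidth(2.0, 0.01)
    ```

    The bound is distribution-free, which is what makes it usable even when the evolving PDF is strongly non-Gaussian.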

  4. PC Software graphics tool for conceptual design of space/planetary electrical power systems

    NASA Technical Reports Server (NTRS)

    Truong, Long V.

    1995-01-01

    This paper describes the Decision Support System (DSS), a personal computer software graphics tool for designing conceptual space and/or planetary electrical power systems. By using the DSS, users can obtain desirable system design and operating parameters, such as system weight, electrical distribution efficiency, and bus power. With this tool, a large-scale specific power system was designed in a matter of days. It is an excellent tool to help designers make tradeoffs between system components, hardware architectures, and operation parameters in the early stages of the design cycle. The DSS is a user-friendly, menu-driven tool with online help and a custom graphical user interface. An example design and results are illustrated for a typical space power system with multiple types of power sources, frequencies, energy storage systems, and loads.

  5. Seminar presentation on the economic evaluation of the space shuttle system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The proceedings of a seminar on the economic aspects of the space shuttle system are presented. Emphasis was placed on the problems of economic analysis of large scale public investments, the state of the art of cost estimation, the statistical data base for estimating costs of new technological systems, and the role of the main economic parameters affecting the results of the analyses. An explanation of the system components of a space program and the present choice of launch vehicles, spacecraft, and instruments was also provided.

  6. Conceptual spacecraft systems design and synthesis

    NASA Technical Reports Server (NTRS)

    Wright, R. L.; Deryder, D. D.; Ferebee, M. J., Jr.

    1984-01-01

    An interactive systems design and synthesis is performed on future spacecraft concepts using the Interactive Design and Evaluation of Advanced Systems (IDEAS) computer-aided design and analysis system. The capabilities and advantages of the systems-oriented interactive computer-aided design and analysis system are described. The synthesis of both large antenna and space station concepts, and space station evolutionary growth designs is demonstrated. The IDEAS program provides the user with both an interactive graphics and an interactive computing capability which consists of over 40 multidisciplinary synthesis and analysis modules. Thus, the user can create, analyze, and conduct parametric studies and modify earth-orbiting spacecraft designs (space stations, large antennas or platforms, and technologically advanced spacecraft) at an interactive terminal with relative ease. The IDEAS approach is useful during the conceptual design phase of advanced space missions when a multiplicity of parameters and concepts must be analyzed and evaluated in a cost-effective and timely manner.

  7. Interactive systems design and synthesis of future spacecraft concepts

    NASA Technical Reports Server (NTRS)

    Wright, R. L.; Deryder, D. D.; Ferebee, M. J., Jr.

    1984-01-01

    An interactive systems design and synthesis is performed on future spacecraft concepts using the Interactive Design and Evaluation of Advanced spacecraft (IDEAS) computer-aided design and analysis system. The capabilities and advantages of the systems-oriented interactive computer-aided design and analysis system are described. The synthesis of both large antenna and space station concepts, and space station evolutionary growth is demonstrated. The IDEAS program provides the user with both an interactive graphics and an interactive computing capability which consists of over 40 multidisciplinary synthesis and analysis modules. Thus, the user can create, analyze and conduct parametric studies and modify Earth-orbiting spacecraft designs (space stations, large antennas or platforms, and technologically advanced spacecraft) at an interactive terminal with relative ease. The IDEAS approach is useful during the conceptual design phase of advanced space missions when a multiplicity of parameters and concepts must be analyzed and evaluated in a cost-effective and timely manner.

  8. Parameter reduction in nonlinear state-space identification of hysteresis

    NASA Astrophysics Data System (ADS)

    Fakhrizadeh Esfahani, Alireza; Dreesen, Philippe; Tiels, Koen; Noël, Jean-Philippe; Schoukens, Johan

    2018-05-01

    Recent work on black-box polynomial nonlinear state-space modeling for hysteresis identification has provided promising results, but struggles with a large number of parameters due to the use of multivariate polynomials. This drawback is tackled in the current paper by applying a decoupling approach that results in a more parsimonious representation involving univariate polynomials. This work is carried out numerically on input-output data generated by a Bouc-Wen hysteretic model and follows up on earlier work of the authors. The current article discusses the polynomial decoupling approach and explores the selection of the number of univariate polynomials with the polynomial degree. We have found that the presented decoupling approach is able to reduce the number of parameters of the full nonlinear model up to about 50%, while maintaining a comparable output error level.
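    The roughly 50% parameter reduction reported above comes from replacing one multivariate polynomial with a few univariate ones; a quick coefficient count shows why the savings can be large. The model sizes below are hypothetical, not the Bouc-Wen benchmark's:

    ```python
    from math import comb

    def multivariate_params(n_inputs, degree):
        """Coefficients of a full multivariate polynomial in n_inputs
        variables with all monomials of degree 2..degree (the linear part
        of the state-space model is counted separately)."""
        return sum(comb(n_inputs + d - 1, d) for d in range(2, degree + 1))

    def decoupled_params(n_inputs, n_outputs, degree, r_branches):
        """Decoupled form W g(V^T x): per branch, one column of V
        (n_inputs entries), one column of W (n_outputs entries), and
        degree-1 coefficients of a univariate polynomial whose terms
        run over degrees 2..degree."""
        return r_branches * (n_inputs + n_outputs + degree - 1)

    full = multivariate_params(n_inputs=4, degree=7)                  # 325
    dec = decoupled_params(n_inputs=4, n_outputs=3, degree=7, r_branches=5)
    ```

    The multivariate count grows combinatorially with the number of inputs and the degree, while the decoupled count grows only linearly in each, which is why a modest number of branches can cut the parameter count sharply.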

  9. Numerical simulation of the geodynamo reaches Earth's core dynamical regime

    NASA Astrophysics Data System (ADS)

    Aubert, J.; Gastine, T.; Fournier, A.

    2016-12-01

    Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E=10-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. 
The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.

  10. Linking Retinal Microvasculature Features With Severity of Diabetic Retinopathy Using Optical Coherence Tomography Angiography.

    PubMed

    Bhanushali, Devanshi; Anegondi, Neha; Gadde, Santosh G K; Srinivasan, Priya; Chidambara, Lavanya; Yadav, Naresh Kumar; Sinha Roy, Abhijit

    2016-07-01

    To correlate retinal vascular features with severity and systemic indicators of diabetic retinopathy (DR) using optical coherence tomography angiography (OCTA). A total of 209 eyes of 122 type 2 diabetes mellitus patients with DR and 60 eyes of 31 normal Indian subjects underwent OCTA imaging. The diabetic retinopathy patients were graded as having either nonproliferative diabetic retinopathy (NPDR: mild, moderate, and severe NPDR using Early Treatment Diabetic Retinopathy Study classification) or proliferative diabetic retinopathy (PDR). Local fractal analysis was applied to the superficial and deep retinal OCTA images. Foveal avascular zone area (FAZ in mm2); vessel density (%); spacing between large vessels (%); and spacing between small vessels (%) were analyzed. Sensitivity and specificity of vascular parameters were assessed with receiver operating characteristics (ROC) curve. Normal eyes had a significantly lower FAZ area, higher vessel density, and lower spacing between large and small vessels compared with DR grades (P < 0.001). In the superficial layer, PDR and severe NPDR had higher spacing between large vessels than mild and moderate NPDR (P = 0.04). However, mild NPDR had higher spacing between the small vessels (P < 0.001). Spacing between the large vessels in the superficial retinal layer correlated positively with HbA1c (r = 0.25, P = 0.03); fasting (r = 0.23, P = 0.02); and postprandial (r = 0.26, P = 0.03) blood sugar. The same spacing in the deep retinal vascular plexus had the highest area under the ROC curve (0.99 ± 0.01) and was uniformly elevated in all diabetic eyes (P > 0.05). Spacing between the large vessels in the superficial and deep retinal layers had superior diagnostic performance than overall vessel density.

  11. Parallel optimization of signal detection in active magnetospheric signal injection experiments

    NASA Astrophysics Data System (ADS)

    Gowanlock, Michael; Li, Justin D.; Rude, Cody M.; Pankratius, Victor

    2018-05-01

    Signal detection and extraction requires substantial manual parameter tuning at different stages in the processing pipeline. Time-series data depends on domain-specific signal properties, necessitating unique parameter selection for a given problem. The large potential search space makes this parameter selection process time-consuming and subject to variability. We introduce a technique to search and prune such parameter search spaces in parallel and select parameters for time series filters using breadth- and depth-first search strategies to increase the likelihood of detecting signals of interest in the field of magnetospheric physics. We focus on studying geomagnetic activity in the extremely and very low frequency ranges (ELF/VLF) using ELF/VLF transmissions from Siple Station, Antarctica, received at Québec, Canada. Our technique successfully detects amplified transmissions and achieves substantial speedup performance gains as compared to an exhaustive parameter search. We present examples where our algorithmic approach reduces the search from hundreds of seconds down to less than 1 s, with a ranked signal detection in the top 99th percentile, thus making it valuable for real-time monitoring. We also present empirical performance models quantifying the trade-off between the quality of signal recovered and the algorithm response time required for signal extraction. In the future, improved signal extraction in scenarios like the Siple experiment will enable better real-time diagnostics of conditions of the Earth's magnetosphere for monitoring space weather activity.
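    A hypothetical miniature of the breadth- and depth-first strategy described above: sweep a coarse grid once, keep only the top candidates, and refine those with a shrinking local search, pruning the rest of the space without further evaluation. The parameter names and scoring function are invented for illustration:

    ```python
    import itertools

    def pruned_search(score, axes, keep=3, depth=3):
        """Two-stage tuning sketch: a breadth-first sweep scores every
        point of a coarse grid, then only the `keep` best candidates are
        refined depth-first with a step-halving local search; the rest
        of the space is pruned without further evaluation."""
        coarse = [dict(zip(axes, vals))
                  for vals in itertools.product(*axes.values())]
        survivors = sorted(coarse, key=score, reverse=True)[:keep]
        # initial refinement step: the coarse grid spacing per axis
        step = {k: (max(v) - min(v)) / (len(v) - 1) for k, v in axes.items()}
        best = survivors[0]
        for point in survivors:
            h = dict(step)
            for _ in range(depth):
                for k in point:
                    trials = [dict(point, **{k: point[k] + d})
                              for d in (-h[k], h[k])]
                    point = max(trials + [point], key=score)
                    h[k] /= 2
            if score(point) > score(best):
                best = point
        return best

    # invented tuning problem: optimum near threshold=0.37, window=5.0
    score = lambda p: (-(p['threshold'] - 0.37) ** 2
                       - 0.1 * (p['window'] - 5.0) ** 2)
    best = pruned_search(score, {'threshold': [0.0, 0.5, 1.0],
                                 'window': [1.0, 5.0, 9.0]})
    ```

    The breadth-first stage costs one evaluation per coarse grid point; the depth-first stage spends its budget only where the signal score is already promising, which is the source of the speedup over an exhaustive sweep.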

  12. Development of space stable thermal control coatings for use on large space vehicles. [effects of ultraviolet radiation

    NASA Technical Reports Server (NTRS)

    Gilligan, J. E.; Harada, Y.

    1974-01-01

    The development of a large scale manufacturing method for the production of a stable zinc orthotitanate pigment is discussed. Major emphasis was placed on the evaluation of ultraviolet radiation stability tests of pigments derived from coprecipitated and individually precipitated oxalates. Emphasis was also placed on an investigation of the conditions (time and temperature) leading to high reflectance and high optical stability. Paints were formulated in OI-650 and in OI-650G vehicles from pigments which were prepared at various temperatures. Analyses of ultraviolet irradiation test data were conducted regarding optimum pigment preparation parameters and treatment conditions.

  13. Bone tissue phantoms for optical flowmeters at large interoptode spacing generated by 3D-stereolithography

    PubMed Central

    Binzoni, Tiziano; Torricelli, Alessandro; Giust, Remo; Sanguinetti, Bruno; Bernhard, Paul; Spinelli, Lorenzo

    2014-01-01

    A bone tissue phantom prototype that allows testing of optical flowmeters at large interoptode spacings in general, such as laser-Doppler flowmetry or diffuse correlation spectroscopy, has been developed by a 3D-stereolithography technique. It has been demonstrated that complex tissue vascular systems of any geometrical shape can be conceived. The absorption coefficient, reduced scattering coefficient, and refractive index of the optical phantom have been measured to ensure that the optical parameters reasonably reproduce real human bone tissue in vivo. An experimental demonstration of a possible use of the optical phantom, utilizing a laser-Doppler flowmeter, is also presented. PMID:25136496

  14. Solar array electrical performance assessment for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Smith, Bryan K.; Brisco, Holly

    1993-01-01

    Electrical power for Space Station Freedom will be generated by large Photovoltaic arrays with a beginning of life power requirement of 30.8 kW per array. The solar arrays will operate in a Low Earth Orbit (LEO) over a design life of fifteen years. This paper provides an analysis of the predicted solar array electrical performance over the design life and presents a summary of supporting analysis and test data for the assigned model parameters and performance loss factors. Each model parameter and loss factor is assessed based upon program requirements, component analysis, and test data to date. A description of the LMSC performance model, future test plans, and predicted performance ranges are also given.

  15. Solar array electrical performance assessment for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Smith, Bryan K.; Brisco, Holly

    1993-01-01

    Electrical power for Space Station Freedom will be generated by large photovoltaic arrays with a beginning of life power requirement of 30.8 kW per array. The solar arrays will operate in a Low Earth Orbit (LEO) over a design life of fifteen years. This paper provides an analysis of the predicted solar array electrical performance over the design life and presents a summary of supporting analysis and test data for the assigned model parameters and performance loss factors. Each model parameter and loss factor is assessed based upon program requirements, component analysis, and test data to date. A description of the LMSC performance model, future test plans, and predicted performance ranges are also given.

  16. Catchment Tomography - Joint Estimation of Surface Roughness and Hydraulic Conductivity with the EnKF

    NASA Astrophysics Data System (ADS)

    Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.

    2017-12-01

    Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. A moving transmitter-receiver concept to estimate spatially distributed hydrological parameters, termed catchment tomography, is presented. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. As a response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high resolution, radar based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events. Every precipitation event constrains the possible parameter space. In the approach, forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model. ParFlow is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach.
A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show up to which degrees of spatial heterogeneity, and of uncertainty in the subsurface flow parameters, the Manning's coefficient and hydraulic conductivity can still be estimated efficiently.
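A scalar toy version of the joint state-parameter Ensemble Kalman Filter update: observing only the water level h nevertheless corrects the roughness parameter through the ensemble cross-covariance, with no adjoint model needed. The linear forward model and all numbers are illustrative; the actual study couples ParFlow to a parallel data assimilation framework:

```python
import random
import statistics

def enkf_joint_update(ensemble, h_obs, obs_var, rng):
    """One stochastic-EnKF analysis step on a joint (state, parameter)
    vector per ensemble member. The first component is the observed
    state h; every other component is updated through its ensemble
    cross-covariance with h."""
    hs = [m[0] for m in ensemble]
    h_mean = statistics.mean(hs)
    var_h = statistics.pvariance(hs)
    gains = []                             # Kalman gain per component
    for j in range(len(ensemble[0])):
        xj = [m[j] for m in ensemble]
        xj_mean = statistics.mean(xj)
        cov_xh = statistics.mean((a - xj_mean) * (b - h_mean)
                                 for a, b in zip(xj, hs))
        gains.append(cov_xh / (var_h + obs_var))
    updated = []
    for member in ensemble:
        y = h_obs + rng.gauss(0.0, obs_var ** 0.5)   # perturbed observation
        updated.append(tuple(x + g * (y - member[0])
                             for x, g in zip(member, gains)))
    return updated

# toy forward model: water level h = 20 * roughness + noise
rng = random.Random(1)
n_true = 0.05
members = [rng.gauss(0.08, 0.02) for _ in range(200)]   # biased prior for n
prior = [(20.0 * n + rng.gauss(0.0, 0.05), n) for n in members]
post = enkf_joint_update(prior, h_obs=20.0 * n_true,
                         obs_var=0.05 ** 2, rng=rng)
```

Each assimilated observation shrinks the parameter ensemble toward values consistent with the measured response, which is how successive precipitation events progressively constrain the parameter space.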

  17. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces per-chain sampling burden, enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
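    The Differential Evolution Monte Carlo stage can be sketched compactly: each chain proposes a jump along the difference of two other randomly chosen chains, so the ensemble adapts its own proposal scale, and chains interact only through those difference vectors. The toy 1-D Gaussian target below stands in for an expensive posterior; this is not the SAChES code:

    ```python
    import math
    import random

    def demc_sweep(chains, log_post, rng, eps=1e-4):
        """One Differential Evolution Monte Carlo sweep over all chains.
        Each chain's proposal is a scaled difference of two other chains
        plus small jitter, accepted or rejected by Metropolis."""
        d = len(chains[0])
        gamma = 2.38 / math.sqrt(2 * d)       # standard DE-MC scaling
        new = []
        for i, x in enumerate(chains):
            r1, r2 = rng.sample([j for j in range(len(chains)) if j != i], 2)
            prop = tuple(x[k] + gamma * (chains[r1][k] - chains[r2][k])
                         + rng.gauss(0.0, eps) for k in range(d))
            if math.log(rng.random()) < log_post(prop) - log_post(x):
                x = prop
            new.append(x)
        return new

    # toy target: standard normal; many chains, few parameters
    log_post = lambda x: -0.5 * x[0] ** 2
    rng = random.Random(7)
    chains = [(rng.uniform(-5.0, 5.0),) for _ in range(30)]
    samples = []
    for t in range(600):
        chains = demc_sweep(chains, log_post, rng)
        if t >= 300:                           # discard burn-in
            samples.extend(x[0] for x in chains)
    ```

    Because each chain only reads the current positions of two peers, the sweeps tolerate loose (largely asynchronous) coupling, and a corrupted chain affects at most the proposals that happen to reference it.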

  18. Harnessing Orbital Debris to Sense the Space Environment

    NASA Astrophysics Data System (ADS)

    Mutschler, S.; Axelrad, P.; Matsuo, T.

    A key requirement for accurate space situational awareness (SSA) is knowledge of the non-conservative forces that act on space objects. These effects vary temporally and spatially, driven by the dynamical behavior of space weather. Existing SSA algorithms adjust space weather models based on observations of calibration satellites. However, lack of sufficient data and mismodeling of non-conservative forces cause inaccuracies in space object motion prediction. The uncontrolled nature of debris makes it particularly sensitive to the variations in space weather. Our research takes advantage of this behavior by inverting observations of debris objects to infer the space environment parameters causing their motion. In addition, this research will produce more accurate predictions of the motion of debris objects. The hypothesis of this research is that it is possible to utilize a "cluster" of debris objects, objects within relatively close proximity of each other, to sense their local environment. We focus on deriving parameters of an atmospheric density model to more precisely predict the drag force on LEO objects. An Ensemble Kalman Filter (EnKF) is used for assimilation; the prior ensemble is transformed into the posterior ensemble during the measurement update in a manner that does not require inversion of large matrices. A prior ensemble is utilized to empirically determine the nonlinear relationship between measurements and density parameters. The filter estimates an extended state that includes position and velocity of the debris object, and atmospheric density parameters. The density is parameterized as a grid of values, distributed by latitude and local sidereal time over a spherical shell encompassing Earth. This research focuses on LEO object motion, but it can also be extended to additional orbital regimes for observation and refinement of magnetic field and solar radiation models. An observability analysis of the proposed approach is presented in terms of the measurement cadence necessary to estimate the local space environment.
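
    The matrix-inversion-free measurement update mentioned above can be illustrated with a generic stochastic EnKF: the only matrix ever factorized has the (small) dimension of the observation space, never that of the state. The sketch below is a standard textbook formulation with invented names and a toy state, not the authors' filter.

```python
import numpy as np

def enkf_update(X, y_obs, hx, obs_var, rng=None):
    """Stochastic EnKF measurement update.

    X       : (n_ens, n_state) prior ensemble of state vectors
    y_obs   : (n_obs,) observation vector
    hx      : callable mapping one state vector to predicted observations
    obs_var : scalar observation-error variance
    Only an (n_obs x n_obs) matrix is inverted, so the update never
    touches a matrix the size of the state.  Illustrative sketch only.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_ens, n_obs = X.shape[0], len(y_obs)
    HX = np.array([hx(x) for x in X])            # (n_ens, n_obs)
    Xa = X - X.mean(axis=0)                      # state anomalies
    Ya = HX - HX.mean(axis=0)                    # obs-space anomalies
    Pxy = Xa.T @ Ya / (n_ens - 1)                # cross covariance
    Pyy = Ya.T @ Ya / (n_ens - 1) + obs_var * np.eye(n_obs)
    K = Pxy @ np.linalg.inv(Pyy)                 # gain: small solve only
    # perturbed observations give the correct posterior spread
    perturbed = y_obs + np.sqrt(obs_var) * rng.standard_normal((n_ens, n_obs))
    return X + (perturbed - HX) @ K.T

# toy check: directly observe the first of three state components
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3)) * 2.0 + 5.0    # prior centered near 5
X_post = enkf_update(X, np.array([0.0]), lambda x: x[:1], 0.1, rng)
```

    The observed component is pulled toward the measurement and its spread shrinks, while unobserved, uncorrelated components are left nearly unchanged.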

  19. Visual exploration of parameter influence on phylogenetic trees.

    PubMed

    Hess, Martin; Bremm, Sebastian; Weissgraeber, Stephanie; Hamacher, Kay; Goesele, Michael; Wiemeyer, Josef; von Landesberger, Tatiana

    2014-01-01

    Evolutionary relationships between organisms are frequently derived as phylogenetic trees inferred from multiple sequence alignments (MSAs). The MSA parameter space is exponentially large, so tens of thousands of potential trees can emerge for each dataset. A proposed visual-analytics approach can reveal the parameters' impact on the trees. Given input trees created with different parameter settings, it hierarchically clusters the trees according to their structural similarity. The most important clusters of similar trees are shown together with their parameters. This view offers interactive parameter exploration and automatic identification of relevant parameters. Biologists applied this approach to real data of 16S ribosomal RNA and protein sequences of ion channels. It revealed which parameters affected the tree structures. This led to a more reliable selection of the best trees.
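
    The core of such an approach, grouping trees by structural similarity, can be sketched with a Robinson-Foulds-style distance on clade sets plus a tiny agglomerative step. This is a generic illustration with hypothetical names, not the authors' system, which additionally links the clusters back to MSA parameter settings interactively.

```python
from itertools import combinations

def rf_distance(tree_a, tree_b):
    """Robinson-Foulds-style distance between two trees, each given as
    a set of clades (frozensets of leaf names): the number of clades
    present in one tree but not the other."""
    return len(tree_a ^ tree_b)

def single_linkage(items, dist, threshold):
    """Tiny agglomerative clustering: repeatedly merge two clusters
    while the closest pair of items across them is within threshold."""
    clusters = [{i} for i in range(len(items))]
    merged = True
    while merged:
        merged = False
        for ca, cb in combinations(clusters, 2):
            if min(dist(items[i], items[j]) for i in ca for j in cb) <= threshold:
                clusters.remove(ca)
                clusters.remove(cb)
                clusters.append(ca | cb)
                merged = True
                break
    return clusters

# three toy 4-leaf trees as clade sets; t1 and t2 share a topology
t1 = {frozenset({"A", "B"}), frozenset({"C", "D"})}
t2 = {frozenset({"A", "B"}), frozenset({"C", "D"})}
t3 = {frozenset({"A", "C"}), frozenset({"B", "D"})}
clusters = single_linkage([t1, t2, t3], rf_distance, threshold=0)
```

    With thousands of parameter-induced trees, the resulting clusters (here: {t1, t2} vs {t3}) are what gets summarized together with the parameter settings that produced them.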

  20. Searching for dark absorption with direct detection experiments

    DOE PAGES

    Bloch, Itay M.; Essig, Rouven; Tobioka, Kohsaku; ...

    2017-06-16

    We consider the absorption by bound electrons of dark matter in the form of dark photons and axion-like particles, as well as of dark photons from the Sun, in current and next-generation direct detection experiments. Experiments sensitive to electron recoils can detect such particles with masses between a few eV and more than 10 keV. For dark photon dark matter, we update a previous bound based on XENON10 data and derive new bounds based on data from XENON100 and CDMSlite. We find these experiments to disfavor previously allowed parameter space. Moreover, we derive sensitivity projections for SuperCDMS at SNOLAB for silicon and germanium targets, as well as for various possible experiments with scintillating targets (cesium iodide, sodium iodide, and gallium arsenide). The projected sensitivity can probe large new regions of parameter space. For axion-like particles, the same current direct-detection data improves on previously known direct-detection constraints but does not bound new parameter space beyond known stellar cooling bounds. However, projected sensitivities of the upcoming SuperCDMS SNOLAB using germanium can go beyond these and even probe parameter space consistent with possible hints from the white dwarf luminosity function. We find similar results for dark photons from the Sun. For all cases, direct-detection experiments can have unprecedented sensitivity to dark-sector particles.

  2. Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.

    PubMed

    Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang

    2017-01-01

    Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of the simulations are multi-resolution spatio-temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs at different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs, are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plot that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatio-temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, on real-world use cases from our collaborators in computational and predictive science.

  3. Looking for the WIMP next door

    NASA Astrophysics Data System (ADS)

    Evans, Jared A.; Gori, Stefania; Shelton, Jessie

    2018-02-01

    We comprehensively study experimental constraints and prospects for a class of minimal hidden sector dark matter (DM) models, highlighting how the cosmological history of these models informs the experimental signals. We study simple `secluded' models, where the DM freezes out into unstable dark mediator states, and consider the minimal cosmic history of this dark sector, where coupling of the dark mediator to the SM was sufficient to keep the two sectors in thermal equilibrium at early times. In the well-motivated case where the dark mediators couple to the Standard Model (SM) via renormalizable interactions, the requirement of thermal equilibrium provides a minimal, UV-insensitive, and predictive cosmology for hidden sector dark matter. We call DM that freezes out of a dark radiation bath in thermal equilibrium with the SM a WIMP next door, and demonstrate that the parameter space for such WIMPs next door is sharply defined, bounded, and in large part potentially accessible. This parameter space, and the corresponding signals, depend on the leading interaction between the SM and the dark mediator; we establish it for both Higgs and vector portal interactions. In particular, there is a cosmological lower bound on the portal coupling strength necessary to thermalize the two sectors in the early universe. We determine this thermalization floor as a function of equilibration temperature for the first time. We demonstrate that direct detection experiments are currently probing this cosmological lower bound in some regions of parameter space, while indirect detection signals and terrestrial searches for the mediator cut further into the viable parameter space. We present regions of interest for both direct detection and dark mediator searches, including motivated parameter space for the direct detection of sub-GeV DM.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaeffer, D. B.; Winske, D.; Larson, D. J.

    Collisionless shocks are common phenomena in space and astrophysical systems, and in many cases the shocks can be modeled as the result of the expansion of a magnetic piston through a magnetized ambient plasma. Only recently, however, have laser facilities and diagnostic capabilities evolved sufficiently to allow detailed laboratory study of the microphysics of piston-driven shocks. We review experiments on collisionless shocks driven by a laser-produced magnetic piston undertaken with the Phoenix laser laboratory and the Large Plasma Device at the University of California, Los Angeles. The experiments span a large parameter space in laser energy, background magnetic field, and ambient plasma properties that allows us to probe the physics of piston-ambient energy coupling, the launching of magnetosonic solitons, and the formation of subcritical shocks. The results indicate that piston-driven magnetized collisionless shocks in the laboratory can be characterized with a small set of dimensionless formation parameters that place the formation process in an organized and predictive framework.

  5. Calculating the jet quenching parameter in the plasma of noncommutative Yang-Mills theory from gauge/gravity duality

    NASA Astrophysics Data System (ADS)

    Chakraborty, Somdeb; Roy, Shibaji

    2012-02-01

    A particular decoupling limit of the nonextremal (D1, D3) brane bound state system of type IIB string theory is known to give the gravity dual of space-space noncommutative Yang-Mills theory at finite temperature. We use a string probe in this background to compute the jet quenching parameter in a strongly coupled plasma of hot noncommutative Yang-Mills theory in (3+1) dimensions from gauge/gravity duality. We give expressions for the jet quenching parameter for both small and large noncommutativity. For small noncommutativity, we find that the value of the jet quenching parameter is reduced from its commutative value. The reduction is enhanced with temperature as T^7 for fixed noncommutativity and fixed 't Hooft coupling. We also give an estimate of the correction due to noncommutativity at present collider energies, such as at RHIC or the LHC, and find it too small to be detected. We further generalize the results to noncommutative Yang-Mills theories in diverse dimensions.

  6. A phase transition in energy-filtered RNA secondary structures.

    PubMed

    Han, Hillary S W; Reidys, Christian M

    2012-10-01

    In this article we study the effect of energy parameters on minimum free energy (mfe) RNA secondary structures. Employing a simplified combinatorial energy model that is only dependent on the diagram representation and is not sequence-specific, we prove the following dichotomy result. Mfe structures derived via the Turner energy parameters contain only finitely many complex irreducible substructures, and just minor parameter changes produce a class of mfe structures that contain a large number of small irreducibles. We localize the exact point at which the distribution of irreducibles experiences this phase transition from a discrete limit to a central limit distribution and, subsequently, put our result into the context of quantifying the effect of sparsification of the folding of these respective mfe structures. We show that the sparsification of realistic mfe structures leads to a constant time and space reduction, and that the sparsification of the folding of structures with modified parameters leads to a linear time and space reduction. We, furthermore, identify the limit distribution at the phase transition as a Rayleigh distribution.

  7. Weldability of an iron meteorite by Friction Stir Spot Welding: A contribution to in-space manufacturing

    NASA Astrophysics Data System (ADS)

    Evans, William Todd; Neely, Kelsay E.; Strauss, Alvin M.; Cook, George E.

    2017-11-01

    Friction Stir Welding has been proposed as an efficient and appropriate method for in-space welding, with the potential to serve as a viable option for assembling large-scale space structures. Such large structures will require the use of natural in-space materials such as those available from iron meteorites. Impurities present in most iron meteorites limit their weldability by other space welding techniques such as electron beam or laser welding. This study investigates the ability to weld pieces of in situ Campo del Cielo meteorite by Friction Stir Spot Welding. Due to the rarity of the material, low carbon steel was used as a model material to determine welding parameters. Welded samples of low carbon steel, invar, and Campo del Cielo meteorite were compared and found to behave in similar ways. This study shows that meteorites can be Friction Stir Spot Welded and that they exhibit properties analogous to those of FSSW low carbon steel welds. Thus, iron meteorites can be regarded as another viable option for in-space or Martian construction.

  8. PASP Plus: An experiment to measure space-environment effects on photovoltaic power subsystems

    NASA Technical Reports Server (NTRS)

    Guidice, Donald A.

    1992-01-01

    The Photovoltaic Array Space Power Plus Diagnostic experiment (PASP Plus) was accepted as part of the APEX Mission payload aboard a Pegastar satellite to be orbited by a Pegasus launch vehicle in late 1992. The mission's elliptical orbit will allow us to investigate both space plasma and space radiation effects. PASP Plus will have eleven types of solar arrays and a full complement of environmental and interactions diagnostic sensors. Measurements of space-plasma interactions on the various solar arrays will be made at large negative voltages (to investigate arcing parameters) and at large positive voltages (to investigate leakage currents) by biasing the arrays to various levels up to -500 and +500 volts. The long-term deterioration in solar array performance caused by exposure to space radiation will also be investigated; radiation dosage will be measured by an electron/proton dosimeter included in the environmental sensor complement. Experimental results from PASP Plus will help establish cause-and-effect relationships and lead to improved design guidelines and test standards for new-technology solar arrays.

  9. Transient and Steady-state Tests of the Space Power Research Engine with Resistive and Motor Loads

    NASA Technical Reports Server (NTRS)

    Rauch, Jeffrey S.; Kankam, M. David

    1995-01-01

    The NASA Lewis Research Center (LeRC) has been testing free-piston Stirling engine/linear alternators (FPSE/LA) to develop advanced power convertors for space-based electrical power generation. Tests reported herein were performed to evaluate the interaction and transient behavior of FPSE/LA-based power systems with typical user loads. Both resistive and small induction motor loads were tested with the space power research engine (SPRE) power system. Tests showed that the control system could maintain constant long term voltage and stable periodic operation over a large range of engine operating parameters and loads. Modest resistive load changes were shown to cause relatively large voltage and, therefore, piston and displacer amplitude excursions. Starting a typical small induction motor was shown to cause large and, in some cases, deleterious voltage transients. The tests identified the need for more effective controls, if FPSE/LAs are to be used for stand-alone power systems. The tests also generated a large body of transient dynamic data useful for analysis code validation.

  11. Implications of dune pattern analysis for Titan's surface history

    NASA Astrophysics Data System (ADS)

    Savage, Christopher J.; Radebaugh, Jani; Christiansen, Eric H.; Lorenz, Ralph D.

    2014-02-01

    Analysis of large-scale morphological parameters can reveal the reaction of dunes to changes in atmospheric and sedimentary conditions. Over 7000 dune width and 7000 dune spacing measurements were obtained for linear dunes in regions across Saturn's moon Titan from images T21, T23, T28, T44 and T48 collected by the Synthetic Aperture RADAR (SAR) aboard the Cassini spacecraft in order to reconstruct the aeolian surface history of Titan. Dunes in the five study areas are all linear in form, with a mean width of 1.3 km and mean crest spacing of 2.7 km, similar to dunes in the African Saharan and Namib deserts on Earth. At the resolution of Cassini SAR, the dunes have the morphology of large linear dunes, and they lack evidence for features of compound or complex dunes. The large size, spacing and uniform morphology are all indicators that Titan's dunes are mature features, in that they have grown toward a steady state for a long period of time. Dune width decreases to the north, perhaps from increased sediment stabilization caused by a net transport of moisture from south to north, or from increased maturity in dunes to the south. Cumulative probability plots of dune parameters measured at different locations across Titan indicate there is a single population of intermediate-to-large-sized dunes on Titan. This suggests that, unlike analogous dunes in the Namib and Agneitir Sand Seas, dune-forming conditions that generated the current set of dunes were stable and active long enough to erase any evidence of past conditions.

  12. Controlling the column spacing in isothermal magnetic advection to enable tunable heat and mass transfer.

    DOE PAGES

    Solis, Kyle Jameson; Martin, James E.

    2012-11-01

    Isothermal magnetic advection is a recently discovered method of inducing highly organized, non-contact flow lattices in suspensions of magnetic particles, using only uniform ac magnetic fields of modest strength. The initiation of these vigorous flows requires neither a thermal gradient nor a gravitational field, so the method can be used to transfer heat and mass in circumstances where natural convection does not occur. These advection lattices consist of a square lattice of antiparallel flow columns. If the column spacing is sufficiently large compared to the column length, and the flow rate within the columns is sufficiently large, then one would expect efficient transfer of both heat and mass. Otherwise, the flow lattice could act as a countercurrent heat exchanger and only mass will be transferred efficiently. Although this latter case might be useful for feeding a reaction front without extracting heat, it is likely that most interest will focus on using IMA for heat transfer. In this paper we explore the various experimental parameters of IMA to determine which of them can be used to control the column spacing. These parameters include the field frequency, strength, and phase relation between the two field components, the liquid viscosity, and the particle volume fraction. We find that the column spacing can easily be tuned over a wide range, enabling careful control of heat and mass transfer.

  13. Models of H II regions - Heavy element opacity, variation of temperature

    NASA Technical Reports Server (NTRS)

    Rubin, R. H.

    1985-01-01

    A detailed set of H II region models that use the same physics and self-consistent input has been computed and is used to examine where in parameter space the effects of heavy element opacity are important. The models are briefly described, and tabular data for the input parameters and resulting properties of the models are presented. It is found that the opacities of C, Ne, O, and to a lesser extent N play a vital role over a large region of parameter space, while S and Ar opacities are negligible. The variation of the average electron temperature T(e) of the models with metal abundance, density, and T(eff) is investigated. It is concluded that by far the most important determinant of T(e) is metal abundance; a difference of almost 7000 K is expected over the factor-of-10 range from the highest to the lowest abundances considered.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrage, Clare; Copeland, Edmund J.; Hinds, E.A., E-mail: Clare.Burrage@nottingham.ac.uk, E-mail: Edmund.Copeland@nottingham.ac.uk, E-mail: Ed.Hinds@imperial.ac.uk

    Theories of dark energy require a screening mechanism to explain why the associated scalar fields do not mediate observable long range fifth forces. The archetype of this is the chameleon field. Here we show that individual atoms are too small to screen the chameleon field inside a large high-vacuum chamber, and therefore can detect the field with high sensitivity. We derive new limits on the chameleon parameters from existing experiments, and show that most of the remaining chameleon parameter space is readily accessible using atom interferometry.

  15. Space processing of crystalline materials: A study of known methods of electrical characterization of semiconductors

    NASA Technical Reports Server (NTRS)

    Castle, J. G.

    1976-01-01

    A literature survey is presented covering nondestructive methods of electrical characterization of semiconductors. A synopsis of each technique deals with the applicability of the techniques to various device parameters and to potential in-flight use before, during, and after growth experiments on space flights. It is concluded that the very recent surge in the commercial production of large scale integrated circuitry and other semiconductor arrays requiring uniformity on the scale of a few microns, involves nondestructive test procedures which could well be useful to NASA for in-flight use in space processing.

  16. An analysis of the massless planet approximation in transit light curve models

    NASA Astrophysics Data System (ADS)

    Millholland, Sarah; Ruch, Gerry

    2015-08-01

    Many extrasolar planet transit light curve models use the approximation of a massless planet. They approximate the planet as orbiting elliptically with the host star at the orbit's focus instead of depicting the planet and star as both orbiting around a common center of mass. This approximation should generally be very good because the transit is a small fraction of the full-phase curve and the planet to stellar mass ratio is typically very small. However, to fully examine the legitimacy of this approximation, it is useful to perform a robust, all-parameter space-encompassing statistical comparison between the massless planet model and the more accurate model. Towards this goal, we establish two questions: (1) In what parameter domain is the approximation invalid? (2) If characterizing an exoplanetary system in this domain, what is the error of the parameter estimates when using the simplified model? We first address question (1). Given each parameter vector in a finite space, we can generate the simplified and more complete model curves. Associated with these model curves is a measure of the deviation between them, such as the root mean square (RMS). We use Gibbs sampling to generate a sample that is distributed according to the RMS surface. The high-density regions in the sample correspond to a large deviation between the models. To determine the domains of these high-density areas, we first employ the Ordering Points to Identify the Clustering Structure (OPTICS) algorithm. We then characterize the subclusters by performing the Patient Rule Induction Method (PRIM) on the transformed Principal Component spaces of each cluster. This process yields descriptors of the parameter domains with large discrepancies between the models. To consider question (2), we start by generating synthetic transit curve observations in the domains specified by the above analysis. We then derive the best-fit parameters of these synthetic light curves according to each model and examine the quality of agreement between the estimated parameters. Taken as a whole, these steps allow for a thorough analysis of the validity of the massless planet approximation.
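
    The idea of drawing a sample distributed according to the RMS discrepancy surface can be illustrated with a random-walk Metropolis sampler whose target density is proportional to the RMS itself, so that samples pile up where the two models disagree most. The abstract uses Gibbs sampling; the sketch below substitutes plain Metropolis, and the two-parameter toy discrepancy function and all names are our assumptions.

```python
import numpy as np

def sample_rms_surface(rms, x0, n_steps, step=0.2, rng=None):
    """Random-walk Metropolis whose target is proportional to rms(theta),
    so the sample concentrates in parameter regions where two models
    deviate most.  Illustrative sketch, not the authors' pipeline."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    out = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        # accept with probability min(1, rms(prop) / rms(x))
        if rng.uniform() < rms(prop) / max(rms(x), 1e-12):
            x = prop
        out.append(x.copy())
    return np.array(out)

# toy discrepancy surface: the models deviate most near theta = (1, -1)
rms = lambda th: np.exp(-np.sum((th - np.array([1.0, -1.0])) ** 2))
samples = sample_rms_surface(rms, [0.0, 0.0], 5000,
                             rng=np.random.default_rng(2))
```

    High-density regions of `samples` mark the parameter domains where the simplified and complete models differ, which is what the clustering stages (OPTICS, PRIM) then characterize.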

  17. The dynamics of blood biochemical parameters in cosmonauts during long-term space flights

    NASA Astrophysics Data System (ADS)

    Markin, Andrei; Strogonova, Lubov; Balashov, Oleg; Polyakov, Valery; Tigner, Timoty

    Most previously obtained data on cosmonauts' metabolic state concerned particular stages of the postflight period, so conclusions about metabolic peculiarities during space flight were largely probabilistic. The purpose of this work was to study the metabolic characteristics of cosmonauts directly during long-term space flights. In capillary blood samples taken from a finger, the activities of GOT, GPT, CK, gamma-GT, and total and pancreatic amylase, as well as the concentrations of hemoglobin, glucose, total bilirubin, uric acid, urea, creatinine, total, HDL- and LDL-cholesterol, and triglycerides, were determined with a "Reflotron IV" biochemical analyzer (Boehringer Mannheim GmbH, Germany) adapted to weightlessness. The HDL/LDL-cholesterol ratio was also computed. The crews of six main missions to the "Mir" orbital station, 17 cosmonauts in total, were examined. Biochemical tests were carried out 30-60 days before launch and at different stages of the flights between the 25th and 423rd days. During space flight, cosmonauts showed a tendency, relative to basal levels, toward increased GOT, GPT, and total amylase activity and increased glucose and total cholesterol concentrations, and toward decreased CK activity, hemoglobin and HDL-cholesterol concentrations, and HDL/LDL-cholesterol ratio. No definite trends were found in the other biochemical parameters determined. That the same trends were observed in the majority of the cosmonauts tested suggests a connection between these metabolic alterations and the influence of space-flight conditions on the body; variations in the other blood biochemical parameters studied probably depend on purely individual causes.

  18. On the tidal effects in the motion of earth satellites and the love parameters of the earth

    NASA Technical Reports Server (NTRS)

    Musen, P.; Estes, R.

    1972-01-01

    The tidal effects in the motion of artificial satellites are studied to determine the elastic properties of the earth as they are observed from extraterrestrial space. Considering Love numbers, the disturbing potential is obtained as the analytical continuation of the tidal potential from the surface of the earth into outer space, with parameters that characterize the earth's elastic response to tidal attraction by the moon and the sun. It is concluded that the tidal effects represent a superposition of a large number of periodic terms, and that the rotation of the lunar orbital plane produces a term with an 18-year period in the tidal perturbations of the ascending node of the satellite's orbit.

  19. Expanding (3+1)-dimensional universe from a lorentzian matrix model for superstring theory in (9+1) dimensions.

    PubMed

    Kim, Sang-Woo; Nishimura, Jun; Tsuchiya, Asato

    2012-01-06

    We reconsider the matrix model formulation of type IIB superstring theory in (9+1)-dimensional space-time. Unlike the previous works in which the Wick rotation was used to make the model well defined, we regularize the Lorentzian model by introducing infrared cutoffs in both the spatial and temporal directions. Monte Carlo studies reveal that the two cutoffs can be removed in the large-N limit and that the theory thus obtained has no parameters other than one scale parameter. Moreover, we find that three out of nine spatial directions start to expand at some "critical time," after which the space has SO(3) symmetry instead of SO(9).

  20. A proposed experimental search for chameleons using asymmetric parallel plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrage, Clare; Copeland, Edmund J.; Stevenson, James A., E-mail: Clare.Burrage@nottingham.ac.uk, E-mail: ed.copeland@nottingham.ac.uk, E-mail: james.stevenson@nottingham.ac.uk

    2016-08-01

    Light scalar fields coupled to matter are a common consequence of theories of dark energy and of attempts to solve the cosmological constant problem. The chameleon screening mechanism is commonly invoked in order to suppress the fifth forces mediated by these scalars sufficiently to avoid current experimental constraints, without fine tuning. The force is suppressed dynamically by allowing the mass of the scalar to vary with the local density. Recently it has been shown that near-future cold atom experiments using atom interferometry have the ability to access a large proportion of the chameleon parameter space. In this work we demonstrate how experiments utilising asymmetric parallel plates can push deeper into the remaining parameter space available to the chameleon.

  1. Efficient geostatistical inversion of transient groundwater flow using preconditioned nonlinear conjugate gradients

    NASA Astrophysics Data System (ADS)

    Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf

    2017-04-01

    In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors with O(10^4-10^7) elements. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow. We present a Preconditioned Conjugate Gradient method for the geostatistical inverse problem, in which a single adjoint equation needs to be solved to obtain the gradient of the objective function. Using the autocovariance matrix of the parameters as preconditioning matrix, expensive multiplications with its inverse can be avoided, and the number of iterations is significantly reduced. We use a randomized spectral decomposition of the posterior covariance matrix of the parameters to perform a linearized uncertainty quantification of the parameter estimate. The feasibility of the method is tested by virtual examples of head observations in steady-state and transient groundwater flow. These synthetic tests demonstrate that transient data can reduce both parameter uncertainty and time spent conducting experiments, while the presented methods are able to handle the resulting large number of measurements.
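
    The core numerical idea above can be sketched in a few lines. The toy system, the stand-in prior covariance Q, and all names below are illustrative, not the authors' implementation:

```python
# Minimal sketch of a preconditioned conjugate-gradient (PCG) solver in the
# spirit of the abstract: the prior autocovariance matrix Q is applied as the
# preconditioner by plain multiplication, so its inverse is never needed.
# Matrices and vectors are plain Python lists; the 2x2 system is a toy example.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def axpy(a, x, y):  # a*x + y, elementwise
    return [a * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def pcg(A, b, Q, tol=1e-10, maxit=100):
    """Solve A x = b with Q acting as the (inverse-free) preconditioner."""
    x = [0.0] * len(b)
    r = b[:]                       # residual b - A x for x = 0
    z = matvec(Q, r)               # preconditioning: only a product with Q
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = axpy(alpha, p, x)
        r = axpy(-alpha, Ap, r)
        if dot(r, r) ** 0.5 < tol:
            break
        z = matvec(Q, r)
        rz_new = dot(r, z)
        p = axpy(rz_new / rz, p, z)
        rz = rz_new
    return x

# Symmetric positive-definite toy system and a stand-in "prior covariance"
A = [[4.0, 1.0], [1.0, 3.0]]
Q = [[0.3, -0.1], [-0.1, 0.4]]
b = [1.0, 2.0]
x = pcg(A, b, Q)
```

    Because the preconditioner enters only as a multiplication with Q, no factorization or inverse of the covariance matrix is ever formed, which is exactly the efficiency argument the abstract makes.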

  2. Universal dynamical properties preclude standard clustering in a large class of biochemical data.

    PubMed

    Gomez, Florian; Stoop, Ralph L; Stoop, Ruedi

    2014-09-01

    Clustering of chemical and biochemical data based on observed features is a central cognitive step in the analysis of chemical substances, in particular in combinatorial chemistry, or of complex biochemical reaction networks. Often, for reasons unknown to the researcher, this step produces disappointing results. Once the sources of the problem are known, improved clustering methods might revitalize the statistical approach of compound and reaction search and analysis. Here, we present a generic mechanism that may be at the origin of many clustering difficulties. The variety of dynamical behaviors that can be exhibited by complex biochemical reactions on variation of the system parameters are fundamental system fingerprints. In parameter space, shrimp-like or swallow-tail structures separate parameter sets that lead to stable periodic dynamical behavior from those leading to irregular behavior. We work out the genericity of this phenomenon and demonstrate novel examples for their occurrence in realistic models of biophysics. Although we elucidate the phenomenon by considering the emergence of periodicity in dependence on system parameters in a low-dimensional parameter space, the conclusions from our simple setting are shown to continue to be valid for features in a higher-dimensional feature space, as long as the feature-generating mechanism is not too extreme and the dimension of this space is not too high compared with the amount of available data. For online versions of super-paramagnetic clustering see http://stoop.ini.uzh.ch/research/clustering. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
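
    The periodic-versus-irregular partition of parameter space that underlies shrimp structures can be illustrated with a far simpler system than the paper's biochemical models. The sketch below scans one parameter of the logistic map (an assumption of this illustration, not the paper's model) and classifies each setting:

```python
# Illustrative parameter-space scan: classify each parameter value of the
# logistic map x -> r*x*(1-x) as periodic (return the period) or irregular
# (return 0), the same periodic/chaotic partition shrimp structures trace in 2-D.

def attractor_period(r, max_period=16, transient=1000):
    """Detected period of the logistic-map attractor at parameter r, or 0."""
    x = 0.5
    for _ in range(transient):          # discard the transient
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(4 * max_period):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):  # smallest p with orbit[i+p] == orbit[i]
        if all(abs(orbit[i] - orbit[i + p]) < 1e-6
               for i in range(len(orbit) - p)):
            return p
    return 0                            # no short period found: irregular

# Fixed point, 2-cycle, 4-cycle, and a chaotic setting, respectively
periods = {r: attractor_period(r) for r in (2.8, 3.2, 3.5, 3.9)}
```

    Sweeping two parameters instead of one and color-coding the detected period is precisely how the comb- and shrimp-shaped regions cited in these abstracts are rendered.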

  3. A gravitational puzzle.

    PubMed

    Caldwell, Robert R

    2011-12-28

    The challenge to understand the physical origin of the cosmic acceleration is framed as a problem of gravitation. Specifically, does the relationship between stress-energy and space-time curvature differ on large scales from the predictions of general relativity? In this article, we describe efforts to model and test a generalized relationship between the matter and the metric using cosmological observations. Late-time tracers of large-scale structure, including the cosmic microwave background, weak gravitational lensing, and clustering, are shown to provide good tests of the proposed solution. Current data come very close to providing a critical test, leaving only a small window in parameter space in the case that the generalized relationship is scale free above galactic scales.

  4. Expert-guided optimization for 3D printing of soft and liquid materials.

    PubMed

    Abdollahi, Sara; Davis, Alexander; Miller, John H; Feinberg, Adam W

    2018-01-01

    Additive manufacturing (AM) has rapidly emerged as a disruptive technology to build mechanical parts, enabling increased design complexity, low-cost customization and an ever-increasing range of materials. Yet these capabilities have also created an immense challenge in optimizing the large number of process parameters in order to achieve a high-performance part. This is especially true for AM of soft, deformable materials and for liquid-like resins that require experimental printing methods. Here, we developed an expert-guided optimization (EGO) strategy to provide structure in exploring and improving the 3D printing of liquid polydimethylsiloxane (PDMS) elastomer resin. EGO uses three steps, starting first with expert screening to select the parameter space, factors, and factor levels. Second is a hill-climbing algorithm to search the parameter space defined by the expert for the best set of parameters. Third is expert decision making to try new factors or a new parameter space to improve on the best current solution. We applied the algorithm to two calibration objects, a hollow cylinder and a five-sided hollow cube that were evaluated based on a multi-factor scoring system. The optimum print settings were then used to print complex PDMS and epoxy 3D objects, including a twisted vase, water drop, toe, and ear, at a level of detail and fidelity previously not obtained.
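
    The hill-climbing step of such an EGO-style search can be sketched as follows. The factors, levels, and scoring function here are invented placeholders, not the paper's actual print parameters:

```python
# Hypothetical sketch of hill climbing over expert-chosen factor levels.
# "speed" and "pressure" and their levels are illustrative stand-ins.

LEVELS = {
    "speed":    [10, 20, 30],
    "pressure": [1, 2, 3],
}

def score(setting):
    """Stand-in for the multi-factor print score; peak at speed=20, pressure=2."""
    return -((setting["speed"] - 20) ** 2 + 10 * (setting["pressure"] - 2) ** 2)

def neighbors(setting):
    """All settings that change exactly one factor by one level."""
    for name, levels in LEVELS.items():
        i = levels.index(setting[name])
        for j in (i - 1, i + 1):
            if 0 <= j < len(levels):
                yield {**setting, name: levels[j]}

def hill_climb(start):
    current = start
    while True:
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):
            return current      # local optimum within this parameter space
        current = best

best = hill_climb({"speed": 10, "pressure": 1})
```

    The third EGO step, expert decision making, would correspond to inspecting the returned local optimum and manually enlarging `LEVELS` or adding new factors before re-running the climb.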

  5. Expert-guided optimization for 3D printing of soft and liquid materials

    PubMed Central

    Abdollahi, Sara; Davis, Alexander; Miller, John H.

    2018-01-01

    Additive manufacturing (AM) has rapidly emerged as a disruptive technology to build mechanical parts, enabling increased design complexity, low-cost customization and an ever-increasing range of materials. Yet these capabilities have also created an immense challenge in optimizing the large number of process parameters in order to achieve a high-performance part. This is especially true for AM of soft, deformable materials and for liquid-like resins that require experimental printing methods. Here, we developed an expert-guided optimization (EGO) strategy to provide structure in exploring and improving the 3D printing of liquid polydimethylsiloxane (PDMS) elastomer resin. EGO uses three steps, starting first with expert screening to select the parameter space, factors, and factor levels. Second is a hill-climbing algorithm to search the parameter space defined by the expert for the best set of parameters. Third is expert decision making to try new factors or a new parameter space to improve on the best current solution. We applied the algorithm to two calibration objects, a hollow cylinder and a five-sided hollow cube that were evaluated based on a multi-factor scoring system. The optimum print settings were then used to print complex PDMS and epoxy 3D objects, including a twisted vase, water drop, toe, and ear, at a level of detail and fidelity previously not obtained. PMID:29621286

  6. A note about Gaussian statistics on a sphere

    NASA Astrophysics Data System (ADS)

    Chave, Alan D.

    2015-11-01

    The statistics of directional data on a sphere can be modelled either using the Fisher distribution that is conditioned on the magnitude being unity, in which case the sample space is confined to the unit sphere, or using the latitude-longitude marginal distribution derived from a trivariate Gaussian model that places no constraint on the magnitude. These two distributions are derived from first principles and compared. The Fisher distribution more closely approximates the uniform distribution on a sphere for a given small value of the concentration parameter, while the latitude-longitude marginal distribution is always slightly larger than the Fisher distribution at small off-axis angles for large values of the concentration parameter. Asymptotic analysis shows that the two distributions only become equivalent in the limit of large concentration parameter and very small off-axis angle.
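
    The standard off-axis-angle marginal of the Fisher distribution mentioned above, f(θ) = κ e^{κ cos θ} sin θ / (2 sinh κ), can be checked numerically; the quadrature routine and the choice κ = 10 below are illustrative:

```python
# Sketch: evaluate the Fisher off-axis-angle density and verify it integrates
# to one over [0, pi] with a simple trapezoid rule.
import math

def fisher_theta_pdf(theta, kappa):
    """Marginal density of the off-axis angle for the Fisher distribution."""
    return (kappa * math.exp(kappa * math.cos(theta)) * math.sin(theta)
            / (2.0 * math.sinh(kappa)))

def integrate(f, a, b, n=20000):
    """Composite trapezoid rule."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

total = integrate(lambda t: fisher_theta_pdf(t, 10.0), 0.0, math.pi)
```

    Repeating the same quadrature for the latitude-longitude marginal of the trivariate Gaussian model would reproduce the comparison the note makes: the two densities agree only for large κ and small θ.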

  7. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design of experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations.
Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space will be discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
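
    The interpolation-based metamodel idea, sampling an expensive single-cycle response on a grid of operating parameters and then interpolating inside the sampled space, can be sketched with a bilinear surrogate. The response function, parameter names, and grid below are invented stand-ins, not the paper's CFD setup or its sparse-grid functions:

```python
# Minimal surrogate-model sketch: tabulate an "expensive" response on a coarse
# 2-D parameter grid, then answer queries inside the grid by bilinear
# interpolation, standing in (much simplified) for a sparse-grid metamodel.

def response(egr, spark):
    """Stand-in for a single-cycle simulation result (bilinear toy)."""
    return 1.0 - 0.5 * egr + 0.2 * spark + 0.1 * egr * spark

EGR   = [0.0, 0.2, 0.4]          # illustrative input-parameter grids
SPARK = [-10.0, 0.0, 10.0]
TABLE = {(e, s): response(e, s) for e in EGR for s in SPARK}

def bracket(grid, x):
    for lo, hi in zip(grid, grid[1:]):
        if lo <= x <= hi:
            return lo, hi
    raise ValueError("query outside the sampled parameter space")

def metamodel(egr, spark):
    e0, e1 = bracket(EGR, egr)
    s0, s1 = bracket(SPARK, spark)
    te = (egr - e0) / (e1 - e0)
    ts = (spark - s0) / (s1 - s0)
    return ((1 - te) * (1 - ts) * TABLE[(e0, s0)]
            + te * (1 - ts) * TABLE[(e1, s0)]
            + (1 - te) * ts * TABLE[(e0, s1)]
            + te * ts * TABLE[(e1, s1)])

pred = metamodel(0.1, 5.0)       # query strictly inside the sampled space
```

    Because the toy response is itself bilinear, the surrogate reproduces it exactly here; for a real engine response the interpolation error is what the sparse-grid sampling in the abstract is designed to control.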

  8. Load Diffusion in Composite Structures

    NASA Technical Reports Server (NTRS)

    Horgan, Cornelius O.; Simmonds, J. G.

    2000-01-01

    This research has been concerned with load diffusion in composite structures. Fundamental solid mechanics studies were carried out to provide a basis for assessing the complicated modeling necessary for large scale structures used by NASA. An understanding of the fundamental mechanisms of load diffusion in composite subcomponents is essential in developing primary composite structures. Analytical models of load diffusion behavior are extremely valuable in building an intuitive base for developing refined modeling strategies and assessing results from finite element analyses. The decay behavior of stresses and other field quantities provides a significant aid towards this process. The results are also amenable to parameter studies over a large parameter space and should be useful in structural tailoring studies.

  9. A method to describe inelastic gamma field distribution in neutron gamma density logging.

    PubMed

    Zhang, Feng; Zhang, Quanying; Liu, Juntao; Wang, Xinguang; Wu, He; Jia, Wenbao; Ti, Yongzhou; Qiu, Fei; Zhang, Xiaoyang

    2017-11-01

    Pulsed neutron gamma density logging (NGD) is of great significance for radioprotection and density measurement in LWD; however, current methods have difficulty with quantitative calculation and single-factor analysis of the inelastic gamma field distribution. In order to clarify the NGD mechanism, a new method is developed to describe the inelastic gamma field distribution. Based on fast-neutron scattering and gamma attenuation, the inelastic gamma field distribution is characterized by the inelastic scattering cross section, the fast-neutron scattering free path, the formation density and other parameters, and the contribution of each formation parameter to the field distribution is quantitatively analyzed. The results show that the contribution of density attenuation is opposite to that of the inelastic scattering cross section and the fast-neutron scattering free path, and that as the detector spacing increases, density attenuation gradually plays a dominant role in the gamma field distribution, which means that a large detector spacing is more favorable for density measurement. Besides, the relationship between density sensitivity and detector spacing was studied using this gamma field distribution, and the spacings of the near and far gamma-ray detectors were determined accordingly. The research provides theoretical guidance for tool parameter design and density determination in the pulsed neutron gamma density logging technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
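
    The competing terms the abstract describes can be captured in a hedged toy model: a neutron-driven gamma source decaying with the scattering free path, multiplied by density-dependent gamma attenuation. The coefficients below are illustrative assumptions, not the paper's values:

```python
# Toy model of the NGD count rate: source term from fast-neutron transport
# times gamma attenuation by the formation. The density sensitivity of the
# log count rate grows with detector spacing, as the abstract concludes.
import math

MU = 0.02          # gamma mass-attenuation coefficient, cm^2/g (assumed)
LAMBDA_N = 12.0    # fast-neutron scattering free path, cm (assumed)

def count_rate(spacing_cm, density):
    source = math.exp(-spacing_cm / LAMBDA_N)            # neutron-driven source
    attenuation = math.exp(-MU * density * spacing_cm)   # gamma attenuation
    return source * attenuation

def density_sensitivity(spacing_cm, density, d_rho=1e-4):
    """|d ln(rate) / d rho|, estimated by finite differences."""
    a = math.log(count_rate(spacing_cm, density))
    b = math.log(count_rate(spacing_cm, density + d_rho))
    return abs(b - a) / d_rho

near = density_sensitivity(20.0, 2.5)   # near detector (toy spacing)
far = density_sensitivity(40.0, 2.5)    # far detector (toy spacing)
```

    In this simplified model the sensitivity is exactly MU times the spacing, so doubling the detector spacing doubles the density sensitivity, which is the qualitative basis for preferring a large far-detector spacing.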

  10. Lightning charge moment changes estimated by high speed photometric observations from ISS

    NASA Astrophysics Data System (ADS)

    Hobara, Y.; Kono, S.; Suzuki, K.; Sato, M.; Takahashi, Y.; Adachi, T.; Ushio, T.; Suzuki, M.

    2017-12-01

    Optical observations with CCD cameras on orbiting satellites are generally used to derive the spatio-temporal global distributions of CGs and ICs. However, electrical properties of lightning such as peak current and lightning charge are difficult to obtain from space. In particular, CGs with considerably large lightning charge moment changes (CMC) and peak currents are the crucial parameters for generating red sprites and elves, respectively, so it would be useful to obtain these parameters from space. In this paper, we obtained lightning optical signatures from high-speed photometric observations made by the International Space Station GLIMS (Global Lightning and Sprite MeasurementS, JEM-EF) mission. These optical signatures were compared quantitatively with radio signatures, treated as ground truth, derived from ELF electromagnetic wave observations on the ground, to verify the accuracy of the optically derived values. High correlation (R > 0.9) was obtained between lightning optical irradiance and current moment, and a quantitative relational expression between these two parameters was derived. A rather high correlation (R > 0.7) was also obtained between the integrated irradiance and the lightning CMC. Our results indicate the possibility of deriving lightning electrical properties (current moment and CMC) from optical measurements from space. Moreover, we hope that these results will also contribute to the forthcoming French microsatellite mission TARANIS.
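
    The cross-calibration step, correlating an optical quantity against an ELF-derived electrical quantity, amounts to a Pearson correlation and a linear fit. The sketch below uses synthetic stand-in data, not GLIMS measurements:

```python
# Sketch of the correlation analysis: synthetic current moments (x) and noisy
# optical irradiances (y) with an assumed linear relation, then Pearson's r.
import math
import random

random.seed(1)
current_moment = [10.0 * (i + 1) for i in range(20)]   # toy values
irradiance = [0.05 * cm + random.gauss(0.0, 0.5) for cm in current_moment]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(current_moment, irradiance)
```

    With a high r, a least-squares slope then gives the "quantitative relational expression" between irradiance and current moment that the abstract refers to.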

  11. Inversion of Surface Wave Phase Velocities for Radial Anisotropy to a Depth of 1200 km

    NASA Astrophysics Data System (ADS)

    Xing, Z.; Beghein, C.; Yuan, K.

    2012-12-01

    This study aims to evaluate three-dimensional radial anisotropy to a depth of 1200 km. Radial anisotropy describes the difference in velocity between horizontally and vertically polarized shear waves, sampled by Love and Rayleigh waves, respectively. Its presence in the uppermost 200 km of the mantle has been well documented by different groups and has been regarded as an indicator of mantle convection, which aligns intrinsically anisotropic minerals, largely olivine, to form large-scale anisotropy. However, there is no global agreement on whether anisotropy exists in the region below 200 km. Recent models also associate a fast vertically polarized shear wave with vertical upwelling mantle flow. The data used in this study are the global isotropic phase velocity models of fundamental and higher mode Love and Rayleigh waves (Visser, 2008). The inclusion of higher mode surface wave phase velocities provides sensitivity to structure at depths extending to below the transition zone. While the data are the same as those used by Visser (2008), a quite different parameterization is applied. All six parameters - the five elastic parameters A, C, F, L, N and density - are now regarded as independent, which rules out the possibly biased conclusions induced by the scaling relations used in several previous studies to reduce the number of parameters, partly due to limited computing resources. The data need to be corrected for crustal structure (Crust2.0), as we want to look at the mantle structure only. We do this by eliminating the perturbation in surface wave phase velocity caused by the difference in crustal structure with respect to the reference model PREM. Sambridge's Neighborhood Algorithm is used to search the parameter space. Such a direct search technique has an advantage over traditional inversion methods, which require regularization or unnecessary a priori restrictions on the model space: it searches the full model space, providing a probability density function for each anisotropic parameter and the corresponding resolution.
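
    The flavor of such a direct parameter-space search can be sketched with a much simplified stand-in for the Neighborhood Algorithm: keep the best samples so far and concentrate new random draws near them. The objective function and all tuning numbers below are invented for illustration:

```python
# Hedged sketch of a derivative-free direct search: no regularization, no
# gradients, just progressively concentrated random sampling around the best
# points found so far (a crude cousin of Sambridge's Neighborhood Algorithm).
import random

random.seed(3)

def misfit(p):
    """Stand-in objective; minimum at (0.3, -0.2)."""
    return (p[0] - 0.3) ** 2 + (p[1] + 0.2) ** 2

def direct_search(bounds, n_init=50, n_rounds=20, n_keep=5, shrink=0.5):
    pts = [tuple(random.uniform(lo, hi) for lo, hi in bounds)
           for _ in range(n_init)]
    width = [hi - lo for lo, hi in bounds]
    for r in range(1, n_rounds + 1):
        pts.sort(key=misfit)
        keep = pts[:n_keep]                 # best samples survive
        scale = shrink ** r                 # shrinking neighborhood size
        pts = keep + [tuple(c + random.gauss(0.0, w * scale)
                            for c, w in zip(random.choice(keep), width))
                      for _ in range(n_init)]
    return min(pts, key=misfit)

best = direct_search([(-1.0, 1.0), (-1.0, 1.0)])
```

    A real Neighborhood Algorithm partitions the space with Voronoi cells and, in its appraisal stage, turns the ensemble into the probability density functions mentioned above; this sketch only conveys the "sample, keep, refine" structure.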

  12. Astrophysical Model Selection in Gravitational Wave Astronomy

    NASA Technical Reports Server (NTRS)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
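
    A drastically simplified toy version of hierarchical population inference, assuming Gaussian population scatter and Gaussian measurement noise so the population-mean posterior is available in closed form (all numbers invented, far simpler than the paper's scheme):

```python
# Toy hierarchical sketch: each "detection" yields a noisy estimate of a source
# parameter; combining many detections constrains the population-level mean,
# with an uncertainty shrinking as 1/sqrt(N).
import math
import random

random.seed(7)
TRUE_MEAN, POP_SCATTER, NOISE = 0.6, 0.1, 0.2   # invented population numbers

sources = [random.gauss(TRUE_MEAN, POP_SCATTER) for _ in range(5000)]
estimates = [random.gauss(s, NOISE) for s in sources]

# For Gaussian population and Gaussian noise, the posterior on the population
# mean is Gaussian with standard error sqrt((scatter^2 + noise^2) / N).
n = len(estimates)
post_mean = sum(estimates) / n
post_std = math.sqrt((POP_SCATTER ** 2 + NOISE ** 2) / n)
```

    The abstract's point survives even this caricature: with thousands of resolved binaries, population-level parameters can be pinned down to the percent level although each individual estimate is noisy.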

  13. Simulation study of interactions of Space Shuttle-generated electron beams with ambient plasmas

    NASA Technical Reports Server (NTRS)

    Lin, Chin S.

    1992-01-01

    This report summarizes results obtained through the support of NASA Grant NAGW-1936. The objective of this work was to conduct large-scale simulations of electron beams injected into space. The topics covered include the following: (1) simulation of the radial expansion of an injected electron beam; (2) simulations of active injections of electron beams; (3) a parameter study of electron beam injection into an ionospheric plasma; and (4) magnetosheath-ionospheric plasma interactions in the cusp.

  14. Low-Cost Radar Sensors for Personnel Detection and Tracking in Urban Areas

    DTIC Science & Technology

    2007-01-31

    progress on the research grant "Low-Cost Radar Sensors for Personnel Detection and Tracking in Urban Areas" during the period 1 May 2005 - 31 December... the DOA of target i with respect to the array boresight is given by equation (1), where d is the spacing between the elements and λ is... wall. A large database was collected for different parameter spaces including number of humans, types of movements, wall types and radar polarization

  15. Predicting the Consequences of MMOD Penetrations on the International Space Station

    NASA Technical Reports Server (NTRS)

    Hyde, James; Christiansen, E.; Lear, D.; Evans

    2018-01-01

    The threat from micrometeoroid and orbital debris (MMOD) impacts on space vehicles is often quantified in terms of the probability of no penetration (PNP). However, for large spacecraft, especially those with multiple compartments, a penetration may have a number of possible outcomes. The extent of the damage (diameter of hole, crack length or penetration depth), the location of the damage relative to critical equipment or crew, crew response, and even the time of day of the penetration are among the many factors that can affect the outcome. For the International Space Station (ISS), a Monte-Carlo style software code called Manned Spacecraft Crew Survivability (MSCSurv) is used to predict the probability of several outcomes of an MMOD penetration, broadly classified as loss of crew (LOC), crew evacuation (Evac), loss of escape vehicle (LEV), and nominal end of mission (NEOM). By generating large numbers of MMOD impacts (typically in the billions) and tracking the consequences, MSCSurv allows for the inclusion of a large number of parameters and models as well as enabling the consideration of uncertainties in the models and parameters. MSCSurv builds upon the results from NASA's Bumper software (which provides the probability of penetration and critical input data to MSCSurv) to allow analysts to estimate the probability of LOC, Evac, LEV, and NEOM. This paper briefly describes the overall methodology used by NASA to quantify LOC, Evac, LEV, and NEOM with particular emphasis on describing in broad terms how MSCSurv works and its capabilities and most significant models.
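
    The Monte Carlo outcome-classification structure can be sketched generically. The damage distribution and classification thresholds below are invented placeholders, in no way NASA's models:

```python
# Generic Monte Carlo sketch: draw many hypothetical penetrations, classify
# each into one of several outcomes, and estimate outcome probabilities.
# The exponential hole-size distribution and thresholds are toy assumptions.
import random

random.seed(42)
OUTCOMES = ("LOC", "Evac", "LEV", "NEOM")

def classify(hole_diameter_mm):
    """Invented outcome rule based only on hole size."""
    if hole_diameter_mm > 8.0:
        return "LOC"
    if hole_diameter_mm > 4.0:
        return "Evac"
    if hole_diameter_mm > 2.0:
        return "LEV"
    return "NEOM"

N = 100_000
counts = {o: 0 for o in OUTCOMES}
for _ in range(N):
    d = random.expovariate(1.0 / 2.0)   # toy hole-size draw, mean 2 mm
    counts[classify(d)] += 1

probs = {o: c / N for o, c in counts.items()}
```

    A production code like MSCSurv layers many more factors (damage location, crew response, time of day) onto the classification step, but the estimate-by-counting structure is the same.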

  16. Role of jet spacing and strut geometry on the formation of large scale structures and mixing characteristics

    NASA Astrophysics Data System (ADS)

    Soni, Rahul Kumar; De, Ashoke

    2018-05-01

    The present study primarily focuses on the effect of jet spacing and strut geometry on the evolution and structure of the large-scale vortices which play a key role in mixing characteristics in turbulent supersonic flows. Numerically simulated results corresponding to varying parameters such as strut geometry and jet spacing (Xn = nDj, with n = 2, 3, and 5) for a square jet of height Dj = 0.6 mm are presented. The work also finds local quasi-two-dimensionality for the X2 (2Dj) jet spacing; however, the same is not true for higher jet spacings. Further, the tapered strut (TS) section is modified into a straight strut (SS) for investigation, and a remarkable difference in flow physics is revealed between the two configurations for the same jet spacing (X2: 2Dj). The instantaneous density and vorticity contours reveal structures of varying scales undergoing different evolution in the different configurations. The effect of local spanwise rollers is clearly manifested in the mixing efficiency and the jet spreading rate. The SS configuration exhibits excellent near-field mixing behavior amongst all the arrangements, whereas among the TS cases only the X2 (2Dj) configuration performs better, owing to the presence of local spanwise rollers. The qualitative and quantitative analysis reveals that near-field mixing is strongly affected by the two-dimensional rollers, while the early onset of the wake mode is another crucial parameter for improved mixing. Modal decomposition performed for the SS arrangement sheds light on the spatial and temporal coherence of the structures, where the most dominant structures are found to be the von Kármán street vortices in the wake region.

  17. Hamilton's Equations with Euler Parameters for Rigid Body Dynamics Modeling. Chapter 3

    NASA Technical Reports Server (NTRS)

    Shivarama, Ravishankar; Fahrenthold, Eric P.

    2004-01-01

    A combination of Euler parameter kinematics and Hamiltonian mechanics provides a rigid body dynamics model well suited for use in strongly nonlinear problems involving arbitrarily large rotations. The model is unconstrained, free of singularities, includes a general potential energy function and a minimum set of momentum variables, and takes an explicit state space form convenient for numerical implementation. The general formulation may be specialized to address particular applications, as illustrated in several three dimensional example problems.
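
    The singularity-free attitude description underlying the chapter, Euler parameters (unit quaternions) with kinematics q̇ = ½ q ⊗ (0, ω), can be sketched directly; the integrator and spin case below are a minimal illustration, not the chapter's Hamiltonian formulation:

```python
# Sketch of Euler-parameter (unit quaternion) kinematics: integrate
# qdot = 0.5 * q ⊗ (0, omega) with simple Euler steps plus renormalization.
import math

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def propagate(q, omega, dt, steps):
    """Euler-integrate the quaternion kinematics, renormalizing each step."""
    for _ in range(steps):
        dq = qmul(q, (0.0, *omega))           # pure quaternion (0, omega)
        q = tuple(qi + 0.5 * dt * dqi for qi, dqi in zip(q, dq))
        norm = math.sqrt(sum(qi * qi for qi in q))
        q = tuple(qi / norm for qi in q)      # stay on the unit 3-sphere
    return q

# Spin at 1 rad/s about z for ~pi seconds: approximately a 180-degree
# rotation, so q should approach (0, 0, 0, 1) from (1, 0, 0, 0).
q = propagate((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1e-3, 3142)
```

    Note there is no gimbal-lock singularity at any attitude, which is the property that motivates the Euler-parameter choice in the chapter.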

  18. Revealing the jet substructure in a compressed spectrum of new physics

    NASA Astrophysics Data System (ADS)

    Han, Chengcheng; Park, Myeonghun

    2016-07-01

    The physics beyond the Standard Model with parameters in the compressed spectrum is well motivated both on the theory side and for phenomenological reasons, especially those related to dark matter phenomenology. In this letter, we propose a method to tag soft final state particles from the decay of a new particle in this parameter space. Taking a supersymmetric gluino search as an example, we demonstrate how the Large Hadron Collider experimental collaborations can improve sensitivity in these nontrivial search regions.

  19. Probing dark energy with atom interferometry

    NASA Astrophysics Data System (ADS)

    Burrage, Clare; Copeland, Edmund J.; Hinds, E. A.

    2015-03-01

    Theories of dark energy require a screening mechanism to explain why the associated scalar fields do not mediate observable long range fifth forces. The archetype of this is the chameleon field. Here we show that individual atoms are too small to screen the chameleon field inside a large high-vacuum chamber, and therefore can detect the field with high sensitivity. We derive new limits on the chameleon parameters from existing experiments, and show that most of the remaining chameleon parameter space is readily accessible using atom interferometry.

  20. 2D data-space cross-gradient joint inversion of MT, gravity and magnetic data

    NASA Astrophysics Data System (ADS)

    Pak, Yong-Chol; Li, Tonglin; Kim, Gang-Sop

    2017-08-01

    We have developed a data-space multiple cross-gradient joint inversion algorithm, validated it through synthetic tests, and applied it to magnetotelluric (MT), gravity and magnetic datasets acquired along a 95 km profile in the Benxi-Ji'an area of northeastern China. To begin, we discuss a generalized cross-gradient joint inversion for multiple datasets and model parameter sets, and formulate it in data space. The Lagrange multiplier required for the structural coupling in the data-space method is determined using an iterative solver to avoid calculation of the inverse matrix in solving the large system of equations. Next, using the model-space and data-space methods, we inverted the synthetic data and field data. Based on our results, the joint inversion in data space not only delineates geological bodies more clearly than the separate inversions, but also yields results nearly equal to those of the model-space method while consuming much less memory.
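
    The structural coupling itself is the cross-gradient function t = ∇m1 × ∇m2, which in 2-D reduces to the scalar t = (∂m1/∂x)(∂m2/∂z) − (∂m1/∂z)(∂m2/∂x) and vanishes wherever the two models' gradients are parallel. A finite-difference sketch on synthetic grids (not the paper's discretization):

```python
# Sketch of the 2-D cross-gradient: central differences on two model grids,
# then t = g1x*g2z - g1z*g2x at every node. Boundary gradients are zeroed
# for simplicity in this illustration.

def gradients(m, dx=1.0, dz=1.0):
    """Central-difference x- and z-gradients of a 2-D grid m[z][x]."""
    nz, nx = len(m), len(m[0])
    gx = [[(m[k][i + 1] - m[k][i - 1]) / (2 * dx) if 0 < i < nx - 1 else 0.0
           for i in range(nx)] for k in range(nz)]
    gz = [[(m[k + 1][i] - m[k - 1][i]) / (2 * dz) if 0 < k < nz - 1 else 0.0
           for i in range(nx)] for k in range(nz)]
    return gx, gz

def cross_gradient(m1, m2):
    g1x, g1z = gradients(m1)
    g2x, g2z = gradients(m2)
    return [[g1x[k][i] * g2z[k][i] - g1z[k][i] * g2x[k][i]
             for i in range(len(m1[0]))] for k in range(len(m1))]

# m2 = 2*m1 + 1 has the same structure as m1 (parallel gradients everywhere),
# so the cross-gradient should vanish identically.
m1 = [[float(i * k) for i in range(5)] for k in range(5)]
m2 = [[2.0 * v + 1.0 for v in row] for row in m1]
t = cross_gradient(m1, m2)
```

    Driving t toward zero during the joint inversion is what forces the MT, gravity and magnetic models to share structure without forcing their parameter values to agree.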

  1. Construction of CASCI-type wave functions for very large active spaces.

    PubMed

    Boguslawski, Katharina; Marti, Konrad H; Reiher, Markus

    2011-06-14

    We present a procedure to construct a configuration-interaction expansion containing arbitrary excitations from an underlying full-configuration-interaction-type wave function defined for a very large active space. Our procedure is based on the density-matrix renormalization group (DMRG) algorithm that provides the necessary information in terms of the eigenstates of the reduced density matrices to calculate the coefficient of any basis state in the many-particle Hilbert space. Since the dimension of the Hilbert space scales binomially with the size of the active space, a sophisticated Monte Carlo sampling routine is employed. This sampling algorithm can also construct such configuration-interaction-type wave functions from any other type of tensor network states. The configuration-interaction information obtained serves several purposes. It yields a qualitatively correct description of the molecule's electronic structure, it allows us to analyze DMRG wave functions converged for the same molecular system but with different parameter sets (e.g., different numbers of active-system (block) states), and it can be considered a balanced reference for the application of a subsequent standard multi-reference configuration-interaction method.
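
    The Monte Carlo idea, drawing basis states with probability proportional to their squared coefficients so that the dominant configurations surface quickly, can be conveyed with a toy CI vector (the occupation strings and coefficients below are invented, and a real DMRG wave function is of course never stored explicitly like this):

```python
# Toy sketch of Monte-Carlo sampling of configuration weights: draw basis
# states with probability |c|^2 and identify the dominant contributors.
import random
from collections import Counter

random.seed(11)
coeffs = {"2200": 0.9, "2020": 0.3, "2002": 0.2, "0220": 0.1}  # toy CI vector
norm = sum(c * c for c in coeffs.values())
probs = {k: c * c / norm for k, c in coeffs.items()}            # |c|^2 weights

def sample(n):
    states, weights = zip(*probs.items())
    return Counter(random.choices(states, weights=weights, k=n))

counts = sample(100_000)
dominant = counts.most_common(1)[0][0]
```

    In the actual procedure the weight of a configuration is computed on the fly from the DMRG tensors rather than looked up, which is what makes sampling feasible when the Hilbert-space dimension is binomially large.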

  2. Models Archive and ModelWeb at NSSDC

    NASA Astrophysics Data System (ADS)

    Bilitza, D.; Papitashvili, N.; King, J. H.

    2002-05-01

    In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their uses and users. In response to a growing need by the user community, NSSDC began to develop web-interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF) and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over the last few years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 accesses to the models archive.

  3. A Statistical Approach to Identify Superluminous Supernovae and Probe Their Diversity

    NASA Astrophysics Data System (ADS)

    Inserra, C.; Prajs, S.; Gutierrez, C. P.; Angus, C.; Smith, M.; Sullivan, M.

    2018-02-01

    We investigate the identification of hydrogen-poor superluminous supernovae (SLSNe I) using a photometric analysis, without including an arbitrary magnitude threshold. We assemble a homogeneous sample of previously classified SLSNe I from the literature, and fit their light curves using Gaussian processes. From the fits, we identify four photometric parameters that have a high statistical significance when correlated, and combine them in a parameter space that conveys information on their luminosity and color evolution. This parameter space presents a new definition for SLSNe I, which can be used to analyze existing and future transient data sets. We find that 90% of previously classified SLSNe I meet our new definition. We also examine the evidence for two subclasses of SLSNe I, combining their photometric evolution with spectroscopic information, namely the photospheric velocity and its gradient. A cluster analysis reveals the presence of two distinct groups. “Fast” SLSNe show fast light-curve and color evolution, large velocities, and a large velocity gradient. “Slow” SLSNe show slow light-curve and color evolution, small expansion velocities, and an almost non-existent velocity gradient. Finally, we discuss the impact of our analyses on the understanding of the powering engine of SLSNe, and their implementation as cosmological probes in current and future surveys.
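The Gaussian-process light-curve fitting step can be sketched with plain GP regression; the squared-exponential kernel, the hyperparameter values, and the toy parabolic light curve below are illustrative assumptions, not the paper's actual choices:

```python
import numpy as np

def gp_fit(t_obs, y_obs, t_grid, length=10.0, amp=5.0, noise=0.05):
    """Posterior mean of a GP with a squared-exponential kernel."""
    def kernel(a, b):
        d = a[:, None] - b[None, :]
        return amp**2 * np.exp(-0.5 * (d / length)**2)

    mu = y_obs.mean()                       # centre the magnitudes
    K = kernel(t_obs, t_obs) + noise**2 * np.eye(len(t_obs))
    return mu + kernel(t_grid, t_obs) @ np.linalg.solve(K, y_obs - mu)

# Hypothetical light curve: magnitude vs. days, brightest near day 5
rng = np.random.default_rng(0)
t = np.linspace(-20, 60, 25)
y = -21 + 0.002 * (t - 5)**2 + 0.05 * rng.normal(size=t.size)

grid = np.linspace(-20, 60, 200)
smooth = gp_fit(t, y, grid)
peak_time = grid[np.argmin(smooth)]        # epoch of maximum light
# e.g. decline over the 30 days after peak, read off the smooth fit
decline = smooth[np.argmin(np.abs(grid - (peak_time + 30)))] - smooth.min()
```

Photometric parameters such as the peak epoch and a post-peak decline rate can then be read off the smooth fit and combined into the diagnostic parameter space.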

  4. High-mass diffraction in the QCD dipole picture

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Navelet, H.; Peschanski, R.

    1998-05-01

    Using the QCD dipole picture of the BFKL pomeron, the cross-section of single diffractive dissociation of virtual photons at high energy and large diffractively excited masses is calculated. The calculation takes into account the full impact-parameter phase-space and thus allows us to obtain an exact value of the triple BFKL pomeron vertex. It appears large enough to compensate the perturbative 6-gluon coupling factor (α/π)³, thus suggesting a rather appreciable diffractive cross-section.

  5. Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.

    PubMed

    Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong

    2016-06-01

    Training feedforward neural networks (FNNs) is one of the most critical issues in FNN studies. However, most FNN training methods cannot be directly applied to very large datasets because of their high computational and space complexity. To tackle this problem, the center-constrained minimum enclosing ball (CCMEB) problem in the hidden feature space of an FNN is discussed, and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using the generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using an L2-norm-penalty-based ε-insensitive function is formulated, and the parameters of the hidden nodes are generated randomly, independent of the training sets. Moreover, learning the parameters of the output layer is proved equivalent to a special CCMEB problem in the FNN hidden feature space. Like most CCMEB-approximation-based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: its maximal training time is linear in the size of the training dataset, and its maximal space consumption is independent of that size. Experiments on regression tasks confirm these conclusions.
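The idea behind minimum-enclosing-ball approximation, cost that grows with an iteration count rather than with dataset size, can be illustrated with the classic Badoiu-Clarkson update. This is a generic sketch of approximate MEB computation, not the HFSR-GCVM algorithm itself:

```python
import numpy as np

def approx_meb(points, n_iter=200):
    """Approximate minimum enclosing ball (Badoiu-Clarkson scheme).

    Repeatedly pulls the current center toward the farthest point;
    the approximation tightens as the iteration count grows."""
    c = points[0].astype(float).copy()
    for i in range(1, n_iter + 1):
        far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
        c += (far - c) / (i + 1)
    r = np.linalg.norm(points - c, axis=1).max()
    return c, r

rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 3))       # stand-in for hidden-feature vectors
center, radius = approx_meb(pts)
```

Each iteration touches the data only to find the farthest point, which is what makes MEB-style training attractive for very large datasets.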

  6. Effective control parameters in a deep convection scheme for improved simulation of the Madden-Julian oscillation

    NASA Astrophysics Data System (ADS)

    Choi, Jin-Ho; Seo, Kyong-Hwan

    2017-06-01

    This work seeks to find the most effective parameters in a deep convection scheme (relaxed Arakawa-Schubert scheme) of the National Centers for Environmental Prediction Climate Forecast System model for improved simulation of the Madden-Julian Oscillation (MJO). A suite of sensitivity experiments is performed by changing physical components such as the relaxation parameter of mass flux for adjustment of the environment, the evaporation rate from large-scale precipitation, the moisture trigger threshold using relative humidity of the boundary layer, and the fraction of re-evaporation of convective (subgrid-scale) rainfall. Among them, the last two parameters are found to produce a significant improvement. Increasing the strength of these two parameters reduces light rainfall that inhibits complete formation of the tropical convective system, or supplies more moisture that helps increase the potential energy of the large-scale environment in the lower troposphere (especially at 700 hPa), leading to moisture preconditioning favorable for further development and eastward propagation of the MJO. In a more humid environment, a more organized MJO structure (i.e., space-time spectral signal, eastward propagation, and tilted vertical structure) is produced.

  7. An adaptive learning control system for large flexible structures

    NASA Technical Reports Server (NTRS)

    Thau, F. E.

    1985-01-01

    The objective of the research has been to study the design of adaptive/learning control systems for the control of large flexible structures. In the first activity an adaptive/learning control methodology for flexible space structures was investigated. The approach was based on using a modal model of the flexible structure dynamics and an output-error identification scheme to identify modal parameters. In the second activity, a least-squares identification scheme was proposed for estimating both modal parameters and modal-to-actuator and modal-to-sensor shape functions. The technique was applied to experimental data obtained from the NASA Langley beam experiment. In the third activity, a separable nonlinear least-squares approach was developed for estimating the number of excited modes, shape functions, modal parameters, and modal amplitude and velocity time functions for a flexible structure. In the final research activity, a dual-adaptive control strategy was developed for regulating the modal dynamics and identifying modal parameters of a flexible structure. A min-max approach was used for finding an input to provide modal parameter identification while not exceeding reasonable bounds on modal displacement.

  8. The Microgravity Vibration Isolation Mount: A Dynamic Model for Optimal Controller Design

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Tryggvason, Bjarni V.; DeCarufel, Jean; Townsend, Miles A.; Wagar, William O.

    1997-01-01

    Vibration acceleration levels on large space platforms exceed the requirements of many space experiments. The Microgravity Vibration Isolation Mount (MIM) was built by the Canadian Space Agency to attenuate these disturbances to acceptable levels, and has been operational on the Russian Space Station Mir since May 1996. It has demonstrated good isolation performance and has supported several materials science experiments. The MIM uses Lorentz (voice-coil) magnetic actuators to levitate and isolate payloads at the individual experiment/sub-experiment (versus rack) level. Payload acceleration, relative position, and relative orientation (Euler-parameter) measurements are fed to a state-space controller. The controller, in turn, determines the actuator currents needed for effective experiment isolation. This paper presents the development of an algebraic, state-space model of the MIM, in a form suitable for optimal controller design.

  9. Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Averaged Navier-Stokes Formulations

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Giriamaji, Sharath S.

    2008-01-01

    Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-ε model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.

  10. Evolving discriminators for querying video sequences

    NASA Astrophysics Data System (ADS)

    Iyengar, Giridharan; Lippman, Andrew B.

    1997-01-01

    In this paper we present a framework for content-based query and retrieval of information from large video databases. This framework enables content-based retrieval of video sequences by characterizing the sequences using motion, texture and colorimetry cues. This characterization is biologically inspired and results in a compact parameter space where every segment of video is represented by an 8-dimensional vector. Searching and retrieval are done accurately and in real time in this parameter space. Using this characterization, we then evolve a set of discriminators using Genetic Programming. Experiments indicate that these discriminators are capable of analyzing and characterizing video. The VideoBook is able to search and retrieve video sequences with 92% accuracy in real time. Experiments thus demonstrate that the characterization is capable of extracting higher-level structure from raw pixel values.
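Retrieval in such a compact space reduces to nearest-neighbour search over the 8-dimensional feature vectors. A minimal sketch using Euclidean distance (the actual VideoBook distance metric is not specified in the abstract, and the data below are random stand-ins for real features):

```python
import numpy as np

def retrieve(query, database, k=5):
    """Indices of the k nearest 8-D feature vectors (Euclidean distance)."""
    d = np.linalg.norm(database - query, axis=1)
    return np.argsort(d)[:k]

rng = np.random.default_rng(0)
db = rng.random((1000, 8))                 # one 8-D vector per video segment
q = db[42] + 0.001 * rng.normal(size=8)    # a slightly perturbed segment
hits = retrieve(q, db)                     # hits[0] should recover segment 42
```

Because every segment is only an 8-D point, even a brute-force scan over a large database is fast, which is consistent with the real-time claim.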

  11. Direct reconstruction of dark energy.

    PubMed

    Clarkson, Chris; Zunckel, Caroline

    2010-05-28

    An important issue in cosmology is reconstructing the effective dark energy equation of state directly from observations. With so few physically motivated models, future dark energy studies cannot be based solely on constraining a dark energy parameter space. We present a new nonparametric method which can accurately reconstruct a wide variety of dark energy behavior with no prior assumptions about it. It is simple, quick and relatively accurate, and involves no expensive explorations of parameter space. The technique uses principal component analysis and a combination of information criteria to identify real features in the data, and tailors the fitting functions to pick up trends and smooth over noise. We find that we can constrain a large variety of w(z) models to within 10%-20% at redshifts z≲1 using just SNAP-quality data.
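The principal-component step can be sketched as an SVD of mean-subtracted measurements of w(z). The toy equation of state, noise level, and the choice to keep exactly two modes are illustrative assumptions; the paper selects the number of components with information criteria rather than fixing it by hand:

```python
import numpy as np

# Hypothetical: many noisy realizations of w(z) sampled at 20 redshifts
rng = np.random.default_rng(2)
z = np.linspace(0, 1, 20)
true_w = -1 + 0.3 * z                     # a simple evolving equation of state
data = true_w + 0.1 * rng.normal(size=(200, 20))

# Principal component analysis via SVD of the mean-subtracted data
mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)

# Keep only the leading components; truncation smooths over the noise
m = 2
recon = mean + (U[:, :m] * s[:m]) @ Vt[:m]
```

Truncating to a few components keeps the real trend while discarding most of the noise, which is the sense in which the reconstruction "picks up trends and smooths over noise."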

  12. Automatic high-throughput screening of colloidal crystals using machine learning

    NASA Astrophysics Data System (ADS)

    Spellings, Matthew; Glotzer, Sharon C.

    Recent improvements in hardware and software have united to pose an interesting problem for computational scientists studying self-assembly of particles into crystal structures: while studies covering large swathes of parameter space can be dispatched at once using modern supercomputers and parallel architectures, identifying the different regions of a phase diagram is often a serial task completed by hand. While analytic methods exist to distinguish some simple structures, they can be difficult to apply, and automatic identification of more complex structures is still lacking. In this talk we describe one method to create numerical “fingerprints” of local order and use them to analyze a study of complex ordered structures. We can use these methods as first steps toward automatic exploration of parameter space and, more broadly, the strategic design of new materials.

  13. The Supernovae Analysis Application (SNAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  14. The Supernovae Analysis Application (SNAP)

    DOE PAGES

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas; ...

    2017-09-06

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  15. The Supernovae Analysis Application (SNAP)

    NASA Astrophysics Data System (ADS)

    Bayless, Amanda J.; Fryer, Chris L.; Wollaeger, Ryan; Wiggins, Brandon; Even, Wesley; de la Rosa, Janie; Roming, Peter W. A.; Frey, Lucy; Young, Patrick A.; Thorpe, Rob; Powell, Luke; Landers, Rachel; Persson, Heather D.; Hay, Rebecca

    2017-09-01

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.
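The statistical comparison between an observed light curve and a gridded set of theoretical models can be sketched as a reduced chi-square ranking. The linear model grid and the synthetic observation below are hypothetical stand-ins, not SNAP's actual model set or comparison statistic:

```python
import numpy as np

def chi2(obs_mag, obs_err, model_mag):
    """Reduced chi-square between an observed and a model light curve."""
    return np.sum(((obs_mag - model_mag) / obs_err)**2) / obs_mag.size

# Hypothetical gridded model set: (peak magnitude, decline slope) pairs
t = np.linspace(0, 50, 26)
grid = [(peak, slope) for peak in (-19, -18, -17) for slope in (0.02, 0.05)]
models = {p: p[0] + p[1] * t for p in grid}

# Synthetic "observed" light curve with photometric errors
rng = np.random.default_rng(3)
obs = -18 + 0.05 * t + 0.1 * rng.normal(size=t.size)
err = np.full(t.size, 0.1)

best = min(models, key=lambda p: chi2(obs, err, models[p]))
```

Ranking a pre-computed grid this way is what lets a system like SNAP constrain parameters of a new event without building new light-curve models from scratch.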

  16. Inverse design of bulk morphologies in block copolymers using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Khadilkar, Mihir; Delaney, Kris; Fredrickson, Glenn

    Multiblock polymers are a versatile platform for creating a large range of nanostructured materials with novel morphologies and properties. However, achieving desired structures or property combinations is difficult due to a vast design space comprised of parameters including monomer species, block sequence, block molecular weights and dispersity, copolymer architecture, and binary interaction parameters. Navigating through such vast design spaces to achieve an optimal formulation for a target structure or property set requires an efficient global optimization tool wrapped around a forward simulation technique such as self-consistent field theory (SCFT). We report on such an inverse design strategy utilizing particle swarm optimization (PSO) as the global optimizer and SCFT as the forward prediction engine. To avoid metastable states in forward prediction, we utilize pseudo-spectral variable cell SCFT initiated from a library of defect free seeds of known block copolymer morphologies. We demonstrate that our approach allows for robust identification of block copolymers and copolymer alloys that self-assemble into a targeted structure, optimizing parameters such as block fractions, blend fractions, and Flory chi parameters.
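A minimal particle swarm optimizer of the kind used as the outer loop can be sketched as follows. Here a cheap quadratic objective stands in for the SCFT forward simulation, and the swarm hyperparameters are conventional defaults, not those of the study:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_steps=100, seed=0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pval)]
    for _ in range(n_steps):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)        # keep particles inside the bounds
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        gbest = pbest[np.argmin(pval)]
    return gbest, pval.min()

# Stand-in objective: distance of predicted structure metrics from a target
# (in the paper's setting each evaluation would be a full SCFT simulation)
target = np.array([0.3, 0.5])
obj = lambda p: np.sum((p - target)**2)
best, best_val = pso(obj, (np.zeros(2), np.ones(2)))
```

Because each objective evaluation is expensive in the real application, the swarm size and step count are the knobs trading exploration of the design space against total SCFT cost.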

  17. Retrieving the aerosol lidar ratio profile by combining ground- and space-based elastic lidars.

    PubMed

    Feiyue, Mao; Wei, Gong; Yingying, Ma

    2012-02-15

    The aerosol lidar ratio is a key parameter for the retrieval of aerosol optical properties from elastic lidar, and it varies widely among aerosols with different chemical and physical properties. We proposed a method for retrieving the aerosol lidar ratio profile by combining simultaneous ground- and space-based elastic lidars. The method was tested with a simulated case and a real case at a 532 nm wavelength. The results demonstrated that our method is robust and can obtain accurate lidar ratio and extinction coefficient profiles. Our method can be useful for determining the local and global lidar ratio and validating space-based lidar datasets.

  18. Generic isolated horizons in loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Beetle, Christopher; Engle, Jonathan

    2010-12-01

    Isolated horizons model equilibrium states of classical black holes. A detailed quantization, starting from a classical phase space restricted to spherically symmetric horizons, exists in the literature and has since been extended to axisymmetry. This paper extends the quantum theory to horizons of arbitrary shape. Surprisingly, the Hilbert space obtained by quantizing the full phase space of all generic horizons with a fixed area is identical to that originally found in spherical symmetry. The entropy of a large horizon remains one-quarter its area, with the Barbero-Immirzi parameter retaining its value from symmetric analyses. These results suggest a reinterpretation of the intrinsic quantum geometry of the horizon surface.

  19. Kinematics and age of 15 stars-photometric solar analogs

    NASA Astrophysics Data System (ADS)

    Galeev, A. I.; Shimansky, V. V.

    2008-03-01

    The radial and space velocities are inferred for 15 stars that are photometric analogs of the Sun. The space velocity components (U, V, W) of most of these stars lie within the 10-60 km/s interval. The star HD 225239, which in our previous papers we classified as a subgiant, has a space velocity exceeding 100 km/s, and belongs to the thick disk. The inferred fundamental parameters of the atmospheres of solar analogs are combined with published evolutionary tracks to estimate the masses and ages of the stars studied. The kinematics of photometric analogs is compared to the data for a large group of solar-type stars.

  20. Uncertainty relations as Hilbert space geometry

    NASA Technical Reports Server (NTRS)

    Braunstein, Samuel L.

    1994-01-01

    Precision measurements involve the accurate determination of parameters through repeated measurements of identically prepared experimental setups. For many parameters there is a 'natural' choice for the quantum observable which is expected to give optimal information; and from this observable one can construct a Heisenberg uncertainty principle (HUP) bound on the precision attainable for the parameter. However, the classical statistics of multiple sampling directly gives us tools to construct bounds for the precision available for the parameters of interest (even when no obvious natural quantum observable exists, such as for phase, or time); it is found that these direct bounds are more restrictive than those of the HUP. The implication is that the natural quantum observables typically do not encode the optimal information (even for observables such as position, and momentum); we show how this can be understood simply in terms of the Hilbert space geometry. Another striking feature of these bounds to parameter uncertainty is that for a large enough number of repetitions of the measurements all quantum states are 'minimum uncertainty' states - not just Gaussian wave-packets. Thus, these bounds tell us what precision is achievable, not merely what is allowed.

  1. State-space models’ dirty little secrets: even simple linear Gaussian models can have estimation problems

    NASA Astrophysics Data System (ADS)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna

    2016-05-01

    State-space models (SSMs) are increasingly used in ecology to model time series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
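The setting can be reproduced in a few lines: simulate a simple linear Gaussian SSM (a random walk observed with error) and evaluate its exact likelihood with a Kalman filter. The model and parameter values are illustrative; the point is that when sigma_obs dominates sigma_proc, quite different parameter pairs can yield similar likelihoods, which is the identifiability problem the authors describe:

```python
import numpy as np

def simulate_ssm(n, sigma_proc, sigma_obs, seed=0):
    """Random-walk state observed with Gaussian measurement error."""
    rng = np.random.default_rng(seed)
    x = np.cumsum(sigma_proc * rng.normal(size=n))   # latent state
    y = x + sigma_obs * rng.normal(size=n)           # observations
    return x, y

def kalman_loglik(y, sigma_proc, sigma_obs):
    """Exact log-likelihood of the random-walk SSM via the Kalman filter."""
    m, P, ll = 0.0, 1e6, 0.0                         # diffuse initial state
    for yt in y:
        P = P + sigma_proc**2                        # predict
        S = P + sigma_obs**2                         # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (yt - m)**2 / S)
        K = P / S                                    # update
        m = m + K * (yt - m)
        P = (1 - K) * P
    return ll

# Measurement error much larger than process noise: the hard regime
x, y = simulate_ssm(500, sigma_proc=0.1, sigma_obs=1.0)
ll_true = kalman_loglik(y, 0.1, 1.0)
ll_swap = kalman_loglik(y, 0.3, 0.95)   # a rather different parameter pair
```

Scanning `kalman_loglik` over a grid of (sigma_proc, sigma_obs) pairs makes the flat likelihood surface, and hence the estimation problem, directly visible.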

  2. Nitric oxide production in the stratosphere within the Space Shuttle's solid rocket motor plumes

    NASA Technical Reports Server (NTRS)

    Gomberg, R. I.; Brannan, J. R.; Boney, L. R.

    1978-01-01

    This study focuses on establishing the sensitivity of predictions of NO production to uncertainties in altitude, reaction rate coefficients, turbulent mixing rates, and Mach disk size and location. The results show that relatively large variations in parameters related to these phenomena had surprisingly little effect on predicted NO production.

  3. Bayesian uncertainty analysis for complex systems biology models: emulation, global parameter searches and evaluation of gene functions.

    PubMed

    Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith

    2018-01-02

    Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
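The core of a history match can be sketched with the standard implausibility measure: an input is ruled out when the emulator's prediction sits too many standard deviations from the observation. The one-parameter toy model, the exact-mean "emulator", and the conventional cutoff of 3 below are illustrative assumptions, not the paper's 32-parameter application:

```python
import numpy as np

def implausibility(z, z_var, emu_mean, emu_var):
    """History-matching implausibility: standardized distance between an
    observation z and the emulator prediction at each input point."""
    return np.abs(z - emu_mean) / np.sqrt(z_var + emu_var)

# Toy 1-D input space; pretend the emulator reproduces f(x) = sin(3x)
# exactly, with a small uniform emulator variance
x = np.linspace(0, 2, 400)
emu_mean = np.sin(3 * x)
emu_var = np.full_like(x, 0.01**2)

z, z_var = 0.5, 0.05**2          # observed output and its variance
I = implausibility(z, z_var, emu_mean, emu_var)

# Inputs with I < 3 survive to the next wave of the history match
non_implausible = x[I < 3.0]
```

In the real method the emulator is refit on each surviving region, so successive waves shrink the non-implausible parameter space while each wave costs only fast emulator evaluations.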

  4. Applying transfer matrix method to the estimation of the modal characteristics of the NASA Mini-Mast Truss

    NASA Technical Reports Server (NTRS)

    Shen, Ji-Yao; Taylor, Lawrence W., Jr.

    1994-01-01

    It is beneficial to use a distributed parameter model for large space structures because the approach minimizes the number of model parameters. Holzer's transfer matrix method provides a useful means to simplify and standardize the procedure for solving the system of partial differential equations. Any large space structure can be broken down into sub-structures with simple elastic and dynamical properties. For each single element, such as a beam, tether, or rigid body, we can derive the corresponding transfer matrix. Combining these element matrices enables the solution of the global system equations. The characteristic equation can then be formed by satisfying the appropriate boundary conditions, and natural frequencies and mode shapes can be determined by searching for the roots of the characteristic equation at frequencies within the range of interest. This paper applies this methodology, together with maximum likelihood estimation, to refine the modal characteristics of the NASA Mini-Mast Truss by successively matching the theoretical response to the test data of the truss. The method is being applied to more complex configurations.
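The element-matrix chaining and frequency root search can be illustrated with the simplest case, axial vibration of a rod, whose state (displacement, internal force) gives a 2x2 transfer matrix; a beam element would use a 4x4 matrix in exactly the same way. The material values below are arbitrary aluminium-like numbers:

```python
import numpy as np

def rod_transfer(omega, L, E, A, rho):
    """Transfer matrix for axial vibration of a uniform rod segment,
    mapping the state (displacement, internal force) across the segment."""
    k = omega * np.sqrt(rho / E)          # axial wavenumber
    return np.array([[np.cos(k * L), np.sin(k * L) / (E * A * k)],
                     [-E * A * k * np.sin(k * L), np.cos(k * L)]])

def char_eq(omega, segments):
    """Chain the element matrices; for a clamped-free rod the boundary
    conditions (u = 0 at the root, force = 0 at the tip) give T[1,1] = 0."""
    T = np.eye(2)
    for seg in segments:
        T = rod_transfer(omega, *seg) @ T
    return T[1, 1]

# One uniform segment: (L, E, A, rho); exact first root is omega = pi*c/(2L)
segments = [(1.0, 70e9, 1e-4, 2700.0)]
c = np.sqrt(70e9 / 2700.0)                # wave speed

# Bisection on the characteristic equation, bracketing the first root
lo, hi = 0.25 * np.pi * c, 0.75 * np.pi * c
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if char_eq(lo, segments) * char_eq(mid, segments) <= 0.0:
        hi = mid
    else:
        lo = mid
omega_1 = 0.5 * (lo + hi)
```

Adding more tuples to `segments` chains sub-structures with different properties, which is the sense in which the method standardizes the global solution.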

  5. Model Adaptation in Parametric Space for POD-Galerkin Models

    NASA Astrophysics Data System (ADS)

    Gao, Haotian; Wei, Mingjun

    2017-11-01

    The development of low-order POD-Galerkin models is largely motivated by the expectation that a model developed with a set of parameters at their native values can predict the dynamic behavior of the same system under different parametric values; in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g. a steady state, the mean value of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of flow at other parametric states. For the simple case of flow past a fixed cylinder, an orthogonal mean mode at a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For the more complicated case of flow past an oscillating cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries; then more information (e.g. more POD modes) is required to predict the flow under different oscillation frequencies. Supported by ARL.
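The construction of the POD basis underlying such models can be sketched as an SVD of a snapshot matrix; the synthetic two-mode "flow" below is a stand-in for real simulation data, and a Galerkin model would then project the governing equations onto the truncated basis:

```python
import numpy as np

# Hypothetical snapshot matrix: each column is a flow field at one instant
rng = np.random.default_rng(4)
n_space, n_time = 200, 60
xg = np.linspace(0, 2 * np.pi, n_space)[:, None]
tg = np.linspace(0, 10, n_time)[None, :]
snapshots = (np.sin(xg) * np.cos(2 * tg)
             + 0.5 * np.sin(2 * xg) * np.sin(3 * tg)
             + 0.01 * rng.normal(size=(n_space, n_time)))

# POD modes are the left singular vectors of the mean-subtracted snapshots
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

# Fraction of fluctuation energy captured by the first two modes
energy = (s[:2]**2).sum() / (s**2).sum()
```

Because the basis is computed from snapshots at one parametric state, it carries no guarantee at other states; enriching `U` with modes or mean fields from a second state is the kind of "more information" the abstract refers to.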

  6. Structural analysis of three space crane articulated-truss joint concepts

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Sutter, Thomas R.

    1992-01-01

    Three space crane articulated truss joint concepts are studied to evaluate their static structural performance over a range of geometric design parameters. Emphasis is placed on maintaining the four-longeron reference truss performance across the joint while allowing large-angle articulation. A maximum positive articulation angle and the actuator length ratio required to reach that angle are computed for each concept as the design parameters are varied. Configurations with a maximum articulation angle less than 120 degrees or actuators requiring a length ratio over two are not considered. Tip rotation and lateral deflection of a truss beam with an articulated truss joint at the midspan are used to select a point design for each concept. Deflections for one point design are up to 40 percent higher than for the other two designs. Dynamic performance of the three point designs is computed as a function of joint articulation angle. The two lowest frequencies of each point design are relatively insensitive to large variations in joint articulation angle. One point design has a higher maximum tip velocity for the emergency stop than the other designs.

  7. PLUTO'S SEASONS: NEW PREDICTIONS FOR NEW HORIZONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, L. A.

    Since the last Pluto volatile transport models were published in 1996, we have (1) new stellar occultation data from 2002 and 2006-2012 that show roughly twice the pressure as the first definitive occultation from 1988, (2) new information about the surface properties of Pluto, (3) a spacecraft due to arrive at Pluto in 2015, and (4) a new volatile transport model that is rapid enough to allow a large parameter-space search. Such a parameter-space search coarsely constrained by occultation results reveals three broad solutions: a high-thermal inertia, large volatile inventory solution with permanent northern volatiles (PNVs; using the rotational north pole convention); a lower thermal-inertia, smaller volatile inventory solution with exchanges of volatiles between hemispheres and a pressure plateau beyond 2015 (exchange with pressure plateau, EPP); and solutions with still smaller volatile inventories, with exchanges of volatiles between hemispheres and an early collapse of the atmosphere prior to 2015 (exchange with early collapse, EEC). PNV and EPP are favored by stellar occultation data, but EEC cannot yet be definitively ruled out without more atmospheric modeling or additional occultation observations and analysis.

  8. Cornering pseudoscalar-mediated dark matter with the LHC and cosmology

    NASA Astrophysics Data System (ADS)

    Banerjee, Shankha; Barducci, Daniele; Bélanger, Geneviève; Fuks, Benjamin; Goudelis, Andreas; Zaldivar, Bryan

    2017-07-01

    Models in which dark matter particles communicate with the visible sector through a pseudoscalar mediator are well-motivated both from a theoretical and from a phenomenological standpoint. With direct detection bounds being typically subleading in such scenarios, the main constraints stem either from collider searches for dark matter, or from indirect detection experiments. However, LHC searches for the mediator particles themselves can not only compete with, or even supersede, the reach of direct collider dark matter probes, but they can also test scenarios in which traditional monojet searches become irrelevant, especially when the mediator cannot decay on-shell into dark matter particles or its decay is suppressed. In this work we perform a detailed analysis of a pseudoscalar-mediated dark matter simplified model, taking into account a large set of collider constraints and concentrating on the parameter space regions favoured by cosmological and astrophysical data. We find that mediator masses above 100-200 GeV are essentially excluded by LHC searches in the case of large couplings to the top quark, while forthcoming collider and astrophysical measurements will further constrain the available parameter space.

  9. Constraining Dark Matter Interactions with Pseudoscalar and Scalar Mediators Using Collider Searches for Multijets plus Missing Transverse Energy.

    PubMed

    Buchmueller, Oliver; Malik, Sarah A; McCabe, Christopher; Penning, Bjoern

    2015-10-30

    The monojet search, looking for events involving missing transverse energy (E_{T}) plus one or two jets, is the most prominent collider dark matter search. We show that multijet searches, which look for E_{T} plus two or more jets, are significantly more sensitive than the monojet search for pseudoscalar- and scalar-mediated interactions. We demonstrate this in the context of a simplified model with a pseudoscalar interaction that explains the excess in GeV energy gamma rays observed by the Fermi Large Area Telescope. We show that multijet searches already constrain a pseudoscalar interpretation of the excess in much of the parameter space where the mass of the mediator M_{A} is more than twice the dark matter mass m_{DM}. With the forthcoming run of the Large Hadron Collider at higher energies, the remaining regions of the parameter space where M_{A}>2m_{DM} will be fully explored. Furthermore, we highlight the importance of complementing the monojet final state with multijet final states to maximize the sensitivity of the search for the production of dark matter at colliders.

  10. Locating and defining underground goaf caused by coal mining from space-borne SAR interferometry

    NASA Astrophysics Data System (ADS)

    Yang, Zefa; Li, Zhiwei; Zhu, Jianjun; Yi, Huiwei; Feng, Guangcai; Hu, Jun; Wu, Lixin; Preusse, Alex; Wang, Yunjia; Papst, Markus

    2018-01-01

    It is crucial to locate underground goafs (i.e., mined-out areas) resulting from coal mining and to define their spatial dimensions in order to effectively control the induced damage and geohazards. Traditional geophysical techniques for locating and defining underground goafs, however, are ground-based, labour-intensive and costly. This paper presents a novel space-based method for locating and defining underground goafs caused by coal extraction using Interferometric Synthetic Aperture Radar (InSAR) techniques. Since a coal mining-induced goaf is often a cuboid-shaped void, and eight critical geometric parameters (i.e., length, width, height, inclined angle, azimuth angle, mining depth, and two central geodetic coordinates) suffice to locate and define this underground space, the proposed method reduces to determining these eight geometric parameters from InSAR observations. It first applies the Probability Integral Method (PIM), a widely used model for predicting mining-induced deformation, to construct a functional relationship between the eight geometric parameters and the InSAR-derived surface deformation. It then estimates the geometric parameters from the InSAR-derived deformation observations using a hybrid simulated annealing and genetic algorithm. Finally, the proposed method was tested with both simulated and two real data sets. The results demonstrate that the estimated geometric parameters of the goafs are accurate overall, with average relative errors of approximately 2.1% and 8.1% observed for the simulated and the real data experiments, respectively. Owing to the advantages of InSAR observations, the proposed method provides a non-contact, convenient and practical way to economically locate and define underground goafs over large areas from space.
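The inversion step described above (fitting goaf geometry to InSAR-derived deformation with a stochastic optimizer) can be sketched in miniature. The forward model below is a hypothetical stand-in, not the Probability Integral Method; the two-parameter geometry and all values are invented for illustration, and only a plain simulated-annealing loop is shown, without the genetic-algorithm hybridization:

```python
import math
import random

# Toy inversion sketch (NOT the paper's PIM forward model): recover two
# hypothetical goaf parameters (depth, width) from synthetic "deformation"
# observations by plain simulated annealing. All values are invented.

def forward(depth, width, x):
    # Stand-in subsidence profile: a trough whose amplitude scales with
    # width/depth and whose spread scales with depth.
    return -(width / depth) * math.exp(-(x / depth) ** 2)

def misfit(params, xs, observed):
    depth, width = params
    return sum((forward(depth, width, x) - y) ** 2 for x, y in zip(xs, observed))

def simulated_annealing(xs, observed, start, steps=5000, t0=1.0):
    rng = random.Random(0)
    current, e_cur = list(start), misfit(start, xs, observed)
    best, e_best = list(current), e_cur
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9         # linear cooling schedule
        cand = [p + rng.gauss(0, 0.1 * p) for p in current]
        if min(cand) <= 0:
            continue                              # keep geometry physical
        e_new = misfit(cand, xs, observed)
        if e_new < e_cur or rng.random() < math.exp((e_cur - e_new) / t):
            current, e_cur = cand, e_new
            if e_cur < e_best:
                best, e_best = list(current), e_cur
    return best

xs = list(range(-300, 301, 10))
observed = [forward(200.0, 30.0, x) for x in xs]  # truth: depth 200, width 30
start = [150.0, 20.0]
est = simulated_annealing(xs, observed, start)
```

The accept-worse-moves step is what lets the search escape local minima of the misfit surface, which is the reason the authors pair annealing with a genetic algorithm for the real eight-parameter problem.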

  11. Large-scale systematic analysis of 2D fingerprint methods and parameters to improve virtual screening enrichments.

    PubMed

    Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody

    2010-05-24

    A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
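The bit-space effect reported above is easy to reproduce in a toy setting. The sketch below is not Canvas itself; the feature strings and counts are hypothetical. It folds a set of distinct substructure features into fixed-size bit spaces and measures how many features lose their own bit to a collision:

```python
import hashlib

# Toy illustration (not Canvas): fold distinct substructure features into a
# fixed bit space; in small spaces many features collide onto the same bit.

def fold_features(features, n_bits):
    bits = set()
    for f in features:
        h = int(hashlib.md5(f.encode()).hexdigest(), 16)
        bits.add(h % n_bits)          # each feature switches on one bit
    return bits

def collision_fraction(features, n_bits):
    # Fraction of distinct features that fail to get a bit of their own.
    return 1.0 - len(fold_features(features, n_bits)) / len(features)

features = [f"atom_env_{i}" for i in range(5000)]   # hypothetical environments
small = collision_fraction(features, 1024)          # heavy collisions
large = collision_fraction(features, 2 ** 20)       # nearly collision-free
```

With 5000 distinct features, a 1024-bit space necessarily merges most of them, blurring the similarity signal; a 2^20-bit space keeps almost all features distinguishable, which mirrors the enrichment degradation the study observes for small addressable bit spaces.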

  12. Parametric-Studies and Data-Plotting Modules for the SOAP

    NASA Technical Reports Server (NTRS)

    2008-01-01

    "Parametric Studies" and "Data Table Plot View" are the names of software modules in the Satellite Orbit Analysis Program (SOAP). Parametric Studies enables parameterization of as many as three satellite or ground-station attributes across a range of values and computes the average, minimum, and maximum of a specified metric, the revisit time, or 21 other functions at each point in the parameter space. This computation produces a one-, two-, or three-dimensional table of data representing statistical results across the parameter space. Inasmuch as the output of a parametric study in three dimensions can be a very large data set, visualization is a paramount means of discovering trends in the data (see figure). Data Table Plot View enables visualization of the data table created by Parametric Studies or by another data source: this module quickly generates a display of the data in the form of a rotatable three-dimensional-appearing plot, making it unnecessary to load the SOAP output data into a separate plotting program. The rotatable three-dimensional-appearing plot makes it easy to determine which points in the parameter space are most desirable. Both modules provide intuitive user interfaces for ease of use.
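A parametric study of this kind reduces to a grid sweep with per-point statistics. The sketch below is a generic illustration, not SOAP code; the attribute names and the revisit-time stand-in are hypothetical:

```python
import itertools
import statistics

# Generic parametric-study sketch (not SOAP): sweep attributes over ranges
# and tabulate (min, mean, max) of a metric at each grid point.

def parametric_study(ranges, metric_samples):
    """ranges: dict name -> list of values; metric_samples: point -> list."""
    names = sorted(ranges)
    table = {}
    for values in itertools.product(*(ranges[n] for n in names)):
        point = dict(zip(names, values))
        samples = metric_samples(point)
        table[values] = (min(samples), statistics.mean(samples), max(samples))
    return names, table

# Hypothetical example: revisit time vs. altitude and inclination.
ranges = {"altitude_km": [400, 600, 800], "inclination_deg": [45, 60, 90]}

def fake_revisit_samples(p):
    base = p["altitude_km"] / 100 + p["inclination_deg"] / 30
    return [base - 1, base, base + 1]

names, table = parametric_study(ranges, fake_revisit_samples)
```

The resulting table is exactly the kind of two-dimensional statistical grid that the Data Table Plot View module would then render for visual inspection.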

  13. Cooperativity to increase Turing pattern space for synthetic biology.

    PubMed

    Diambra, Luis; Senthivel, Vivek Raj; Menendez, Diego Barcena; Isalan, Mark

    2015-02-20

    It is hard to bridge the gap between mathematical formulations and biological implementations of Turing patterns, yet this is necessary for both understanding and engineering these networks with synthetic biology approaches. Here, we model a reaction-diffusion system with two morphogens in a monostable regime, inspired by components that we recently described in a synthetic biology study in mammalian cells. The model employs a single promoter to express both the activator and inhibitor genes and produces Turing patterns over large regions of parameter space, using biologically interpretable Hill function reactions. We applied a stability analysis and identified rules for choosing biologically tunable parameter relationships to increase the likelihood of successful patterning. We show how to control Turing pattern sizes and time evolution by manipulating the values for production and degradation relationships. More importantly, our analysis predicts that steep dose-response functions arising from cooperativity are mandatory for Turing patterns. Greater steepness increases parameter space and even reduces the requirement for differential diffusion between activator and inhibitor. These results demonstrate some of the limitations of linear scenarios for reaction-diffusion systems and will help to guide projects to engineer synthetic Turing patterns.
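The cooperativity point can be illustrated numerically: the slope of a Hill dose-response f(x) = x^n / (K^n + x^n) at its half-maximal point x = K is n/(4K), so steepness grows with the Hill coefficient n. This is the steepness that, per the analysis above, widens the Turing-compatible region of parameter space:

```python
# Hill dose-response f(x) = x**n / (K**n + x**n); its slope at the
# half-maximal point x = K is n/(4K), so steepness grows with n.

def hill(x, K=1.0, n=1):
    return x ** n / (K ** n + x ** n)

def midpoint_slope(n, K=1.0, h=1e-6):
    # Central-difference derivative of the Hill function at x = K.
    return (hill(K + h, K, n) - hill(K - h, K, n)) / (2 * h)

slopes = {n: midpoint_slope(n) for n in (1, 2, 4)}
```

A non-cooperative response (n = 1) has slope 1/4 at the midpoint; tetrameric cooperativity (n = 4) quadruples it, which is the kind of sharpening the authors find necessary for patterning.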

  14. Resident Space Object Characterization and Behavior Understanding via Machine Learning and Ontology-based Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R.

    2016-09-01

    In this paper, we present an end-to-end approach that employs machine learning techniques and Ontology-based Bayesian Networks (BN) to characterize the behavior of resident space objects. State-of-the-art machine learning architectures (e.g., Extreme Learning Machines, Convolutional Deep Networks) are trained on physical models to learn the Resident Space Object (RSO) features in the vectorized energy and momentum states and parameters. The mapping from measurements to vectorized energy and momentum states and parameters enables behavior characterization via clustering in the feature space and subsequent RSO classification. Additionally, Space Object Behavioral Ontologies (SOBO) are employed to define and capture the domain knowledge base (KB), and BNs are constructed from the SOBO in a semi-automatic fashion to execute probabilistic reasoning over conclusions drawn from trained classifiers and/or directly from processed data. Such an approach enables integrating machine learning classifiers and probabilistic reasoning to support higher-level decision making for space domain awareness applications. The innovation here is to use these methods (which have enjoyed great success in other domains) in synergy, enabling a "from data to discovery" paradigm by facilitating the linkage and fusion of large and disparate sources of information via a Big Data Science and Analytics framework.

  15. On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Rincon, Rafael; Liao, Liang

    2003-01-01

    Although most parameterizations of the drop size distribution (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large-scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; and the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power-law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° x 5° x 1 month space-time boxes.
To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been used to acquire data over a 2 km path. These data should help quantify the temporal form of the covariance matrix at this site.
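The moment property claimed above can be made explicit. Writing the log-normal DSD in the standard three-parameter form (N_T, μ, σ):

```latex
% Log-normal drop size distribution (standard three-parameter form):
N(D) = \frac{N_T}{\sqrt{2\pi}\,\sigma D}
       \exp\!\left[-\frac{(\ln D - \mu)^2}{2\sigma^2}\right]

% Its n-th moment follows from the Gaussian integral in \ln D:
M_n = \int_0^\infty D^n N(D)\,dD
    = N_T \exp\!\left(n\mu + \tfrac{1}{2} n^2 \sigma^2\right)

% so the logarithm of any moment is linear in (\ln N_T,\ \mu,\ \sigma^2):
\ln M_n = \ln N_T + n\,\mu + \tfrac{n^2}{2}\,\sigma^2
```

Since reflectivity factor, rain rate, specific attenuation, and water content are each approximately a moment M_n for some n, any power-law relation among them becomes a linear relation among (ln N_T, μ, σ²), which is why their joint statistics reduce to the covariance matrix of the DSD parameters.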

  16. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    NASA Astrophysics Data System (ADS)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using ``snapshots'' from the parameter reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
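The snapshot-based reduction described above can be sketched in its simplest form. This is a generic method-of-snapshots POD in pure Python, extracting only the dominant mode by power iteration on the snapshot Gram matrix; the snapshots are synthetic (a real ice-sheet solver would supply them) and DEIM is not shown:

```python
import math
import random

# Method-of-snapshots POD sketch (generic, not the authors' solver): form the
# Gram matrix of state snapshots and extract the dominant mode.

def gram(snapshots):
    m = len(snapshots)
    return [[sum(a * b for a, b in zip(snapshots[i], snapshots[j]))
             for j in range(m)] for i in range(m)]

def dominant_pod_mode(snapshots, iters=200):
    G = gram(snapshots)
    m = len(G)
    v = [1.0 / math.sqrt(m)] * m            # power iteration on the Gram matrix
    for _ in range(iters):
        w = [sum(G[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Lift the Gram-space eigenvector back to state space.
    mode = [sum(v[k] * snapshots[k][i] for k in range(m))
            for i in range(len(snapshots[0]))]
    norm = math.sqrt(sum(x * x for x in mode))
    return [x / norm for x in mode]

# Synthetic snapshots: slightly rescaled copies of one state plus small noise,
# so the snapshot set is effectively one-dimensional.
random.seed(1)
base = [math.sin(0.1 * i) for i in range(50)]
snapshots = [[(1 + 0.05 * k) * b + random.gauss(0, 0.01) for b in base]
             for k in range(8)]
mode = dominant_pod_mode(snapshots)
```

In the full method one keeps as many Gram eigenvectors as needed to capture the snapshot energy; the point of the sketch is that the reduced basis comes from an m x m eigenproblem (m = number of snapshots) rather than from the full state dimension.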

  17. Cosmological Constraints from the Redshift Dependence of the Volume Effect Using the Galaxy 2-point Correlation Function across the Line of Sight

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Dong; Park, Changbom; Sabiu, Cristiano G.; Park, Hyunbae; Cheng, Cheng; Kim, Juhan; Hong, Sungwook E.

    2017-08-01

    We develop a methodology to use the redshift dependence of the galaxy 2-point correlation function (2pCF) across the line of sight, ξ(r⊥), as a probe of cosmological parameters. The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a redshift-dependent scaling in the galaxy distribution. This geometrical distortion can be observed as a redshift-dependent rescaling in the measured ξ(r⊥). We test this methodology using a sample of 1.75 billion mock galaxies at redshifts 0, 0.5, 1, 1.5, and 2, drawn from the Horizon Run 4 N-body simulation. The shape of ξ(r⊥) can exhibit a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. Other contributions, including the gravitational growth of structure, galaxy bias, and the redshift space distortions, do not produce large redshift evolution in the shape. We show that one can make use of this geometrical distortion to constrain the values of cosmological parameters governing the expansion history of the universe. This method could be applicable to future large-scale structure surveys, especially photometric surveys such as DES and LSST, to derive tight cosmological constraints. This work is a continuation of our previous works as a strategy to constrain cosmological parameters using redshift-invariant physical quantities.

  18. Fluctuations, ghosts, and the cosmological constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, T.; Holdom, B.

    2004-12-15

    For a large region of parameter space involving the cosmological constant and mass parameters, we discuss fluctuating spacetime solutions that are effectively Minkowskian on large time and distance scales. Rapid, small amplitude oscillations in the scale factor have a frequency determined by the size of a negative cosmological constant. A field with modes of negative energy is required. If it is gravity that induces a coupling between the ghostlike and normal fields, we find that this results in stochastic rather than unstable behavior. The negative energy modes may also permit the existence of Lorentz invariant fluctuating solutions of finite energy density. Finally we consider higher derivative gravity theories and find oscillating metric solutions in these theories without the addition of other fields.

  19. Neutrino-two-Higgs-doublet model with the inverse seesaw mechanisms

    NASA Astrophysics Data System (ADS)

    Tang, Yi-Lei; Zhu, Shou-hua

    2017-09-01

    In this paper, we combine the ν-two-Higgs-doublet model with the inverse seesaw mechanisms. In this model, the Yukawa couplings involving the sterile neutrinos and the exotic Higgs bosons can be of order 1 in the case of a large tan β. We calculate the corrections to the Z-resonance parameters R_{l_i}, A_{l_i}, and N_{ν}, together with the l_1 → l_2 γ branching ratios and the muon anomalous g-2. Compared with the current bounds and plans for the future colliders, we find that the corrections to the electroweak parameters can be constrained or discovered in much of the parameter space.

  20. Least-squares sequential parameter and state estimation for large space structures

    NASA Technical Reports Server (NTRS)

    Thau, F. E.; Eliazov, T.; Montgomery, R. C.

    1982-01-01

    This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. It combines previous results on separable nonlinear least-squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations, which allow for sequential accumulation of the matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.
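Sequential accumulation of this kind can be sketched as follows. For brevity this illustration folds each new observation row into a triangular factor with Givens rotations, which play the same row-by-row role as the Householder transformations used in the paper; the regression problem is an invented two-parameter example:

```python
import math

# Sequential least squares: fold observation rows one at a time into a
# triangular system R x = d, so the full regression matrix is never stored.
# (The paper uses Householder transformations; Givens rotations are used
# here because they update naturally one row at a time.)

def accumulate_row(R, d, row, y):
    """Fold one observation (row, y) into the triangular system R x = d."""
    row = list(row)
    n = len(row)
    for i in range(n):
        if row[i] == 0.0:
            continue
        r = math.hypot(R[i][i], row[i])
        c, s = R[i][i] / r, row[i] / r
        for j in range(i, n):
            R[i][j], row[j] = c * R[i][j] + s * row[j], -s * R[i][j] + c * row[j]
        d[i], y = c * d[i] + s * y, -s * d[i] + c * y

def back_substitute(R, d):
    n = len(d)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))) / R[i][i]
    return x

# Hypothetical streaming fit of y = 2 + 3 t.
n = 2
R = [[0.0] * n for _ in range(n)]
d = [0.0] * n
for t in range(10):
    accumulate_row(R, d, [1.0, float(t)], 2.0 + 3.0 * t)
x = back_substitute(R, d)
```

Because each row is absorbed as it arrives, the memory footprint is fixed by the number of parameters, which is the property that makes this style of accumulation attractive for on-line identification of flexible structures.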

  1. Non-Abelian vortex lattices

    NASA Astrophysics Data System (ADS)

    Tallarita, Gianni; Peterson, Adam

    2018-04-01

    We perform a numerical study of the phase diagram of the model proposed in [M. Shifman, Phys. Rev. D 87, 025025 (2013), 10.1103/PhysRevD.87.025025], a simple model containing non-Abelian vortices. As in the case of Abrikosov vortices, we map out a region of parameter space in which the system prefers the formation of vortices in ordered lattice structures. These are generalizations of Abrikosov vortex lattices with extra orientational moduli in the vortex cores. At sufficiently large lattice spacing the low-energy theory is described by a sum of CP(1) theories, each located on a vortex site. As the lattice spacing becomes smaller, when the self-interaction of the orientational field becomes relevant, only an overall rotation in internal space survives.

  2. LPV Modeling of a Flexible Wing Aircraft Using Modal Alignment and Adaptive Gridding Methods

    NASA Technical Reports Server (NTRS)

    Al-Jiboory, Ali Khudhair; Zhu, Guoming; Swei, Sean Shan-Min; Su, Weihua; Nguyen, Nhan T.

    2017-01-01

    One of the earliest approaches in gain-scheduling control is the gridding based approach, in which a set of local linear time-invariant models are obtained at various gridded points corresponding to the varying parameters within the flight envelope. In order to ensure smooth and effective Linear Parameter-Varying control, aligning all the flexible modes within each local model and maintaining a small number of representative local models over the gridded parameter space are crucial. In addition, since the flexible structural models tend to have large dimensions, a tractable model reduction process is necessary. In this paper, the notions of s-shifted H2- and H-infinity norms are introduced and used as a metric to measure the model mismatch. A new modal alignment algorithm is developed which utilizes the defined metric for aligning all the local models over the entire gridded parameter space. Furthermore, an Adaptive Grid Step Size Determination algorithm is developed to minimize the number of local models required to represent the gridded parameter space. For model reduction, we propose to utilize the concept of Composite Modal Cost Analysis, through which the collective contribution of each flexible mode is computed and ranked. A reduced-order model is then constructed by retaining only those modes with significant contribution. The NASA Generic Transport Model operating at various flight speeds is studied for verification purposes, and the analysis and simulation results demonstrate the effectiveness of the proposed modeling approach.

  3. Time Scale for Adiabaticity Breakdown in Driven Many-Body Systems and Orthogonality Catastrophe

    NASA Astrophysics Data System (ADS)

    Lychkovskiy, Oleg; Gamayun, Oleksandr; Cheianov, Vadim

    2017-11-01

    The adiabatic theorem is a fundamental result in quantum mechanics, which states that a system can be kept arbitrarily close to the instantaneous ground state of its Hamiltonian if the latter varies in time slowly enough. The theorem has an impressive record of applications ranging from foundations of quantum field theory to computational molecular dynamics. In light of this success it is remarkable that a practicable quantitative understanding of what "slowly enough" means is limited to a modest set of systems mostly having a small Hilbert space. Here we show how this gap can be bridged for a broad natural class of physical systems, namely, many-body systems where a small move in the parameter space induces an orthogonality catastrophe. In this class, the conditions for adiabaticity are derived from the scaling properties of the parameter-dependent ground state without a reference to the excitation spectrum. This finding constitutes a major simplification of a complex problem, which otherwise requires solving nonautonomous time evolution in a large Hilbert space.

  4. Active stability augmentation of large space structures: A stochastic control problem

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1987-01-01

    A problem in SCOLE is that of slewing an offset antenna on a long flexible beam-like truss attached to the Space Shuttle, with rather stringent pointing accuracy requirements. The relevant methodological aspects of robust feedback-control design for stability augmentation of the beam using on-board sensors are examined. The problem is framed as one of stochastic control: boundary control of a distributed parameter system described by partial differential equations. While the framework is mathematical, the emphasis is still on an engineering solution. An abstract mathematical formulation is developed as a nonlinear wave equation in a Hilbert space. The system is shown to be controllable, and a feedback control law is developed that is robust in the sense that it does not require quantitative knowledge of system parameters. The stochastic control problem that arises in instrumenting this law using appropriate sensors is treated. Using an engineering first approximation valid for small damping, formulas for the optimal choice of the control gain are developed.

  5. Library of Giant Planet Reflection Spectra for WFIRST and Future Space Telescopes

    NASA Astrophysics Data System (ADS)

    Smith, Adam J. R. W.; Fortney, Jonathan; Morley, Caroline; Batalha, Natasha E.; Lewis, Nikole K.

    2018-01-01

    Future large space telescopes will be able to directly image exoplanets in optical light. The optical light of a resolved planet is stellar flux reflected by Rayleigh scattering or cloud scattering, with absorption features imprinted by molecular bands in the planetary atmosphere. To aid in the design of such missions, and to better understand a wide range of giant planet atmospheres, we have built a library of model giant planet reflection spectra, for the purpose of determining effective methods of spectral analysis as well as for comparison with actual imaged objects. This library covers a wide range of parameters: objects are modeled at ten orbital distances between 0.5 AU and 5.0 AU, ranging from planets too warm for water clouds out to true Jupiter analogs. These calculations include six metallicities between solar and 100x solar, with a variety of cloud thickness parameters, and across all possible phase angles.

  6. Recent experience in simultaneous control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Ramaker, R.; Milman, M.

    1989-01-01

    To show the feasibility of simultaneous optimization as a design procedure, low order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible, but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structure optimization alone or control optimization alone: the design parameter space is larger, the optimization may combine continuous and combinatoric variables, and the combined objective function may be nonconvex. Future extensions to large order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Areas requiring more efficient tools than currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters of both the model space and the design space.

  7. Threshold corrections to the bottom quark mass revisited

    DOE PAGES

    Anandakrishnan, Archana; Bryant, B. Charles; Raby, Stuart

    2015-05-19

    Threshold corrections to the bottom quark mass are often estimated under the approximation that tan β enhanced contributions are the most dominant. In this work we revisit this common approximation made in estimating the supersymmetric threshold corrections to the bottom quark mass. We calculate the full one-loop supersymmetric corrections to the bottom quark mass and survey a large part of the phenomenological MSSM parameter space to study the validity of considering only the tan β enhanced corrections. Our analysis demonstrates that this approximation underestimates the size of the threshold corrections by ~12.5% for most of the considered parameter space. We discuss the consequences for fitting the bottom quark mass and for the effective couplings to Higgses. Here, we find that it is important to consider the additional contributions when fitting the bottom quark mass, but the modifications to the effective Higgs couplings are typically O(few)% for the majority of the parameter space considered.

  8. Searching for a Link Between Suprathermal Ions and Solar Wind Parameters During Quiet Times.

    NASA Astrophysics Data System (ADS)

    Nickell, J.; Desai, M. I.; Dayeh, M. A.

    2017-12-01

    The acceleration processes that suprathermal particles undergo are largely ambiguous. The two prevailing classes of acceleration processes are: 1) continuous acceleration in interplanetary (IP) space due to i) bulk velocity fluctuations (e.g., Fahr et al. 2012), ii) magnetic compressions (e.g., Fisk and Gloeckler 2012), iii) magnetic field waves and turbulence (e.g., Zhang and Lee 2013), and iv) reconnection between magnetic islands (e.g., Drake et al. 2014); and 2) discrete acceleration that occurs in discrete solar events such as CIRs, CME-driven shocks, and flares (e.g., Reames 1999, Desai et al. 2008). Using data from ACE/ULEIS during solar cycles 23 and 24 (1997-present), we examine the solar wind and magnetic field parameters during quiet times (e.g., Dayeh et al. 2017) in an attempt to gain insight into the acceleration processes of the suprathermal particle population. In particular, we look for compression regions by performing comparative studies between solar wind and magnetic field parameters during quiet times in interplanetary space.

  9. Astrobiological complexity with probabilistic cellular automata.

    PubMed

    Vukotić, Branislav; Ćirković, Milan M

    2012-08-01

    The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, but has so far been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are to be modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories under a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for continuation of practical SETI searches.
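A minimal probabilistic cellular automaton of this flavor is easy to write down. The states, transition probabilities, and neighbor effect below are all hypothetical placeholders, not the paper's astrobiological inputs:

```python
import random

# Toy probabilistic cellular automaton (all probabilities invented).
# States: 0 = lifeless, 1 = simple life, 2 = technological civilization.
P = {  # baseline per-step transition probabilities (the "input matrix")
    0: {0: 0.97, 1: 0.03, 2: 0.00},
    1: {0: 0.05, 1: 0.93, 2: 0.02},
    2: {0: 0.01, 1: 0.00, 2: 0.99},
}

def step(cells, rng, neighbor_bonus=0.05):
    n = len(cells)
    new = []
    for i, s in enumerate(cells):
        probs = dict(P[s])
        # A "settled" neighbor makes the emergence of life more likely,
        # standing in for colonization/panspermia-style coupling.
        if s == 0 and 2 in (cells[i - 1], cells[(i + 1) % n]):
            probs[1] += neighbor_bonus
            probs[0] -= neighbor_bonus
        r, acc, chosen = rng.random(), 0.0, s
        for state in sorted(probs):
            acc += probs[state]
            if r < acc:
                chosen = state
                break
        new.append(chosen)
    return new

rng = random.Random(42)
cells = [0] * 100          # 100 sites on a ring, initially lifeless
for _ in range(200):
    cells = step(cells, rng)
```

Running ensembles of such histories under different input matrices, and then clustering the outcomes, is the kind of quick parameter-space survey the abstract describes.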

  10. Extension of wavelength-modulation spectroscopy to large modulation depth for diode laser absorption measurements in high-pressure gases

    NASA Astrophysics Data System (ADS)

    Li, Hejie; Rieker, Gregory B.; Liu, Xiang; Jeffries, Jay B.; Hanson, Ronald K.

    2006-02-01

    Tunable diode laser absorption measurements at high pressures by use of wavelength-modulation spectroscopy (WMS) require large modulation depths for optimum detection of molecular absorption spectra blended by collisional broadening or dense spacing of the rovibrational transitions. Diode lasers have a large and nonlinear intensity modulation when the wavelength is modulated over a large range by injection-current tuning. In addition to this intensity modulation, other laser performance parameters are measured, including the phase shift between the frequency modulation and the intensity modulation. Following published theory, these parameters are incorporated into an improved model of the WMS signal. The influence of these nonideal laser effects is investigated by means of wavelength-scanned WMS measurements as a function of bath gas pressure on rovibrational transitions of water vapor near 1388 nm. Lock-in detection of the magnitude of the 2f signal is performed to remove the dependence on detection phase. We find good agreement between measurements and the improved model developed for the 2f component of the WMS signal. The effects of the nonideal performance parameters of commercial diode lasers are especially important away from the line center of discrete spectra, and these contributions become more pronounced for 2f signals with the large modulation depths needed for WMS at elevated pressures.
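    The phase-independent 2f magnitude detection mentioned above can be illustrated with a generic digital lock-in sketch (this is a plain quadrature demodulation, not the authors' improved WMS model; the signal below is synthetic):

    ```python
    import numpy as np

    def nf_magnitude(signal, t, f_mod, harmonic=2):
        """Magnitude of the component at harmonic*f_mod.

        Projecting onto both quadratures and taking the magnitude removes
        the dependence on the detection phase.
        """
        w = 2.0 * np.pi * harmonic * f_mod
        x = 2.0 * np.mean(signal * np.cos(w * t))
        y = 2.0 * np.mean(signal * np.sin(w * t))
        return np.hypot(x, y)
    ```

    For a detector signal containing a 2f component of amplitude A at arbitrary phase, the quadrature magnitude returns A regardless of that phase, which is the property exploited in the measurements.
    
    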

  11. Highly light-weighted ZERODUR mirrors

    NASA Astrophysics Data System (ADS)

    Behar-Lafenetre, Stéphanie; Lasic, Thierry; Viale, Roger; Mathieu, Jean-Claude; Ruch, Eric; Tarreau, Michel; Etcheto, Pierre

    2017-11-01

    Due to increasingly stringent requirements for observation missions, the diameter of primary mirrors for space telescopes keeps growing. The difficulty is then to obtain a design stiff enough to withstand launch loads at a reasonable mass while providing high opto-mechanical performance. Among the possible solutions, Thales Alenia Space France has investigated the optimization of ZERODUR mirrors. Indeed this material, although brittle, is very well mastered and its characteristics are well known. Moreover, its thermo-elastic behavior (nearly zero CTE) is as yet unequalled, in particular at ambient temperature. Finally, this material can be polished down to very low roughness without any coating. Light-weighting can be achieved by two different means: optimizing manufacturing parameters, optimizing the design, or both. Manufacturing parameters such as wall and optical-face thickness have been improved and tested on representative breadboards defined on the basis of SAGEM-REOSC and Thales Alenia Space France expertise and realized by SAGEM-REOSC. In the frame of CNES Research and Technology activities, the specific mass has been decreased to 36 kg/m2. Moreover, the SNAP study dealt with a 2 m diameter primary mirror. The design was optimized by Thales Alenia Space France while using classical manufacturing parameters, thus ensuring feasibility and controlling costs. The mass was decreased to 60 kg/m2 for a gravity effect of 52 nm. It is thus demonstrated that high opto-mechanical performance can be guaranteed with large, highly light-weighted ZERODUR mirrors.

  12. Highly light-weighted ZERODUR mirror and fixation for cryogenic applications

    NASA Astrophysics Data System (ADS)

    Behar-Lafenetre, Stephanie; Lasic, Thierry; Viale, Roger; Ruch, Eric

    2017-11-01

    Space telescopes require large primary mirrors in a demanding thermal environment: observatories at the L2 orbit enjoy a stable environment, with the drawback of very low temperature. Besides, it is necessary to limit the mirror mass as far as possible while withstanding launch loads and maintaining image quality in a cryogenic environment. ZERODUR is a well-known material extensively used for large telescopes. Alcatel Alenia Space and Sagem/REOSC have combined their respective skills to push further the light-weighting ratio of large mirrors (36 kg/m2 on 1.5 m2) through detailed design, performance assessment and technology demonstration with breadboards. Beyond the detailed design of a large mirror supported by analysis, a ZERODUR mock-up has been manufactured by Sagem/REOSC to demonstrate the achievability of the demanding parameters offering this high light-weighting ratio. Building on the ISO experience with mirror attachments, a detailed design of the mirror fixation has been carried out as well. A full-size mock-up has been manufactured and successfully tested under thermal cycling and static loading. Eventually, the stability of ZERODUR over this large temperature range has been verified through thermal cycling and an image quality cryotest on a flat mirror breadboard. These developments demonstrate that ZERODUR is a good candidate for large space cryogenic mirrors, offering outstanding optical performance together with mature, proven technology and manufacturing processes.

  13. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    NASA Astrophysics Data System (ADS)

    Batyaev, V. F.; Skliarov, S. V.

    2018-01-01

    The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK-type containers with RAW and to determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration), and measurement time. As a result, the dependence of the minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the spatial heterogeneity of the minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers by using the active neutron interrogation technique.

  14. Photon orbits and thermodynamic phase transition of d -dimensional charged AdS black holes

    NASA Astrophysics Data System (ADS)

    Wei, Shao-Wen; Liu, Yu-Xiao

    2018-05-01

    We study the relationship between null geodesics and the thermodynamic phase transition for the charged AdS black hole. In the reduced parameter space, we find nonmonotonic behaviors of the photon sphere radius and the minimum impact parameter for pressures below the critical value. The study also shows that the changes of the photon sphere radius and of the minimum impact parameter can serve as order parameters for the small-large black hole phase transition. In particular, these changes have a universal exponent of 1/2 near the critical point for any spacetime dimension d. These results imply that there may exist universal critical behavior of gravity near the thermodynamic critical point of the black hole system.

  15. Exploring the parameter space of the coarse-grained UNRES force field by random search: selecting a transferable medium-resolution force field.

    PubMed

    He, Yi; Xiao, Yi; Liwo, Adam; Scheraga, Harold A

    2009-10-01

    We explored the energy-parameter space of our coarse-grained UNRES force field for large-scale ab initio simulations of protein folding, to obtain good initial approximations for hierarchical optimization of the force field with new virtual-bond-angle bending and side-chain-rotamer potentials which we recently introduced to replace the statistical potentials. 100 sets of energy-term weights were generated randomly, and good sets were selected by carrying out replica-exchange molecular dynamics simulations of two peptides with a minimal alpha-helical and a minimal beta-hairpin fold, respectively: the tryptophan cage (PDB code: 1L2Y) and tryptophan zipper (PDB code: 1LE1). Eight sets of parameters produced native-like structures of these two peptides. These eight sets were tested on two larger proteins: the engrailed homeodomain (PDB code: 1ENH) and FBP WW domain (PDB code: 1E0L); two sets were found to produce native-like conformations of these proteins. These two sets were tested further on a larger set of nine proteins with alpha or alpha + beta structure and found to locate native-like structures of most of them. These results demonstrate that, in addition to finding reasonable initial starting points for optimization, an extensive search of parameter space is a powerful method to produce a transferable force field. Copyright 2009 Wiley Periodicals, Inc.
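    The random-search strategy above can be sketched generically: draw random weight sets, score each, keep the best few as starting points for optimization. Here a toy quadratic score stands in for the replica-exchange folding simulations used in the paper, and all names and ranges are illustrative:

    ```python
    import random

    def random_search(score_fn, n_weights, n_sets, keep, rng, lo=0.0, hi=2.0):
        """Draw n_sets candidate energy-term weight vectors uniformly at random,
        rank them by score_fn (lower is better), and keep the best `keep` sets."""
        candidates = [[rng.uniform(lo, hi) for _ in range(n_weights)]
                      for _ in range(n_sets)]
        return sorted(candidates, key=score_fn)[:keep]
    ```

    In the paper the score would come from whether replica-exchange MD with a given weight set folds the two benchmark peptides to native-like structures; the surviving sets are then tested on progressively larger proteins.
    
    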

  16. Long Term RST Analyses of TIR Satellite Radiances in Different Geotectonic Contexts: Results and Implications for a Time-Dependent Assessment of Seismic Hazard (t-DASH)

    NASA Astrophysics Data System (ADS)

    Tramutoli, V.; Armandi, B.; Coviello, I.; Eleftheriou, A.; Filizzola, C.; Genzano, N.; Lacava, T.; Lisi, M.; Paciello, R.; Pergola, N.; Satriano, V.; Vallianatos, F.

    2014-12-01

    A large body of scientific documentation is available to date on the appearance of anomalous space-time patterns of geophysical parameters measured from days to weeks before earthquake occurrence. Nevertheless, up to now no measurable parameter and no observational methodology has been demonstrated to be sufficiently reliable and effective for the implementation of an operational earthquake prediction system. In this context the PRE-EARTHQUAKES EU-FP7 project (www.pre-earthquakes.org) investigated to what extent the combined use of different observations/parameters, together with the refinement of data analysis methods, can reduce false alarm rates and improve the reliability and precision (in the space-time domain) of predictions. Among the different parameters/methodologies proposed to provide useful information for an earthquake prediction system, a statistical approach named RST (Robust Satellite Technique) has been used since 2001 to identify space-time fluctuations of the Earth's emitted thermal infrared (TIR) radiation observed from satellite over seismically active regions. In this paper, RST-based long-term analyses of TIR satellite records collected by MSG/SEVIRI over European regions (Italy and Greece) and by GOES/IMAGER over American regions (California) will be presented. The enhanced potential of the approach, when applied in the framework of a time-Dependent Assessment of Seismic Hazard (t-DASH) system continuously integrating independent observations, will moreover be discussed.

  17. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations.

    PubMed

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S

    2014-12-09

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever-increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big-data community. We address this challenge by introducing a method called clustering-based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
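    The core idea, ranking input features by how well they separate the clustered conformational states, can be sketched with a simple Fisher-score stand-in. The paper combines supervised ML with Markov state models; everything below (the score, the synthetic data) is an illustrative simplification:

    ```python
    import numpy as np

    def fisher_scores(X, labels):
        """Rank features by between-state vs. within-state variance.

        X: (n_samples, n_features) array of candidate order parameters.
        labels: per-sample conformational-state assignments (e.g., from clustering).
        """
        classes = np.unique(labels)
        overall = X.mean(axis=0)
        between = np.zeros(X.shape[1])
        within = np.zeros(X.shape[1])
        for c in classes:
            Xc = X[labels == c]
            between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
            within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        return between / np.maximum(within, 1e-12)
    ```

    Features with high scores are candidate order parameters: they vary strongly between states but little within each state.
    
    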

  18. The first products made in space: Monodisperse latex particles

    NASA Technical Reports Server (NTRS)

    Vanderhoff, J. W.; El-Aasser, M. S.; Micale, F. J.; Sudol, E. D.; Tseng, C.-M.; Sheu, H.-R.; Kornfeld, D. M.

    1988-01-01

    The preparation of large-particle-size (3 to 30 micrometer) monodisperse latexes in space confirmed the original rationale unequivocally. The flight polymerizations formed negligible amounts of coagulum, as compared to increasing amounts for the ground-based polymerizations. The number of offsize large particles in the flight latexes was smaller than in the ground-based latexes. The particle size distribution broadened, and more large offsize particles were formed, when the polymerizations of the partially converted STS-4 latexes were completed on Earth. Polymerization in space also showed other unanticipated advantages. The flight latexes had narrower particle size distributions than the ground-based latexes. The particles of the flight latexes were more perfect spheres than those of the ground-based latexes. The superior uniformity of the flight latexes was confirmed by the National Bureau of Standards' acceptance of the 10 micrometer STS-6 latex and the 30 micrometer STS-11 latexes as Standard Reference Materials, the first products made in space for sale on Earth. The polymerization rates in space were the same as those on Earth within experimental error. Further development of the ground-based polymerization recipes gave monodisperse particles as large as 100 micrometers with tolerable levels of coagulum, but their uniformity was significantly poorer than that of the flight latexes. Careful control of the polymerization parameters gave uniform nonspherical particles: symmetrical and asymmetrical doublets, ellipsoids, and egg-shaped, ice-cream-cone-shaped, and popcorn-shaped particles.

  19. Model-based high-throughput design of ion exchange protein chromatography.

    PubMed

    Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo

    2016-08-12

    This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine operating parameter space that allows for satisfactory purification of the protein of interest on the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding any unnecessary experimental work. 
We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, in order to demonstrate it has the ability to strongly accelerate the early phases of process development. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. From atoms to layers: in situ gold cluster growth kinetics during sputter deposition

    NASA Astrophysics Data System (ADS)

    Schwartzkopf, Matthias; Buffet, Adeline; Körstgens, Volker; Metwalli, Ezzeldin; Schlage, Kai; Benecke, Gunthard; Perlich, Jan; Rawolle, Monika; Rothkirch, André; Heidmann, Berit; Herzog, Gerd; Müller-Buschbaum, Peter; Röhlsberger, Ralf; Gehrke, Rainer; Stribeck, Norbert; Roth, Stephan V.

    2013-05-01

    The adjustment of size-dependent catalytic, electrical and optical properties of gold cluster assemblies is a very significant issue in modern applied nanotechnology. We present a real-time investigation of the growth kinetics of gold nanostructures from small nuclei to a complete gold layer during magnetron sputter deposition with high time resolution by means of in situ microbeam grazing incidence small-angle X-ray scattering (μGISAXS). We specify the four-stage growth including their thresholds with sub-monolayer resolution and identify phase transitions monitored in Yoneda intensity as a material-specific characteristic. An innovative and flexible geometrical model enables the extraction of morphological real space parameters, such as cluster size and shape, correlation distance, layer porosity and surface coverage, directly from reciprocal space scattering data. This approach enables a large variety of future investigations of the influence of different process parameters on the thin metal film morphology. Furthermore, our study allows for deducing the wetting behavior of gold cluster films on solid substrates and provides a better understanding of the growth kinetics in general, which is essential for optimization of manufacturing parameters, saving energy and resources.

  1. Color-Space Outliers in DPOSS: Quasars and Peculiar Objects

    NASA Astrophysics Data System (ADS)

    Djorgovski, S. G.; Gal, R. R.; Mahabal, A.; Brunner, R.; Castro, S. M.; Odewahn, S. C.; de Carvalho, R. R.; DPOSS Team

    2000-12-01

    The processing of DPOSS, a digital version of the POSS-II sky atlas, is now nearly complete. The resulting Palomar-Norris Sky Catalog (PNSC) is expected to contain > 5 × 10^7 galaxies and > 10^9 stars, including large numbers of quasars and other unresolved sources. For objects morphologically classified as stellar (i.e., PSF-like), colors and magnitudes provide the only additional source of discriminating information. We investigate the distribution of objects in the parameter space of (g-r) and (r-i) colors as a function of magnitude. Normal stars form a well-defined (temperature) sequence in this parameter space, and we explore the nature of the objects which deviate significantly from this stellar locus. The causes of the deviations include: non-thermal or peculiar spectra, intergalactic absorption (for high-z quasars), presence of strong emission lines in one or more of the bandpasses, or strong variability (because the plates are taken at widely separated epochs). In addition to minor contamination by misclassified compact galaxies, we find the following: (1) Quasars at z > 4; to date, ~ 100 of these objects have been found and used for a variety of follow-up studies. They are made publicly available immediately after discovery, through http://astro.caltech.edu/~george/z4.qsos. (2) Type-2 quasars in the redshift interval z ~ 0.31 - 0.38. (3) Other quasars, starburst and emission-line galaxies, and emission-line stars. (4) Objects with highly peculiar spectra, some or all of which may be rare subtypes of BAL QSOs. (5) Highly variable stars and optical transients, some of which may be GRB "orphan afterglows". To date, systematic searches have been made only for (1) and (2); other types of objects were found serendipitously. However, we plan to explore systematically all of the statistically significant outliers in this parameter space.
This illustrates the potential of large digital sky surveys for discovery of rare types of objects, both known (e.g., high-z quasars) and as yet unknown.
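    Selecting color-space outliers from a stellar locus can be sketched with a simple median/MAD cut: bin objects along one color, then flag objects whose other color deviates strongly from the bin median. The binning, threshold, and synthetic colors below are illustrative, not the DPOSS pipeline:

    ```python
    import numpy as np

    def color_outliers(gr, ri, k=4.0, nbins=20):
        """Flag objects far from the stellar locus in (g-r, r-i) color space.

        Bins the locus in g-r and flags objects more than k robust standard
        deviations (MAD-based) from the median r-i of their bin.
        """
        edges = np.linspace(gr.min(), gr.max(), nbins + 1)
        idx = np.clip(np.digitize(gr, edges) - 1, 0, nbins - 1)
        flags = np.zeros(gr.size, dtype=bool)
        for b in range(nbins):
            sel = idx == b
            if sel.sum() < 10:   # skip sparsely populated bins
                continue
            med = np.median(ri[sel])
            mad = np.median(np.abs(ri[sel] - med)) * 1.4826  # robust sigma
            flags[sel] = np.abs(ri[sel] - med) > k * np.maximum(mad, 1e-6)
        return flags
    ```

    The median and MAD are used instead of mean and standard deviation so that the outliers being searched for do not contaminate the locus estimate itself.
    
    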

  2. An Experimental Study of Dependence of Optimum TBM Cutter Spacing on Pre-set Penetration Depth in Sandstone Fragmentation

    NASA Astrophysics Data System (ADS)

    Han, D. Y.; Cao, P.; Liu, J.; Zhu, J. B.

    2017-12-01

    Cutter spacing is an essential parameter in the TBM design. However, few efforts have been made to study the optimum cutter spacing incorporating penetration depth. To investigate the influence of pre-set penetration depth and cutter spacing on sandstone breakage and TBM performance, a series of sequential laboratory indentation tests were performed in a biaxial compression state. Effects of parameters including penetration force, penetration depth, chip mass, chip size distribution, groove volume, specific energy and maximum angle of lateral crack were investigated. Results show that the total mass of chips, the groove volume and the observed optimum cutter spacing increase with increasing pre-set penetration depth. It is also found that the total mass of chips could be an alternative means to determine optimum cutter spacing. In addition, analysis of chip size distribution suggests that the mass of large chips is dominated by both cutter spacing and pre-set penetration depth. After fractal dimension analysis, we found that cutter spacing and pre-set penetration depth have negligible influence on the formation of small chips and that small chips are formed due to squeezing of cutters and surface abrasion caused by shear failure. Analysis on specific energy indicates that the observed optimum spacing/penetration ratio is 10 for the sandstone, at which, the specific energy and the maximum angle of lateral cracks are smallest. The findings in this paper contribute to better understanding of the coupled effect of cutter spacing and pre-set penetration depth on TBM performance and rock breakage, and provide some guidelines for cutter arrangement.

  3. Observational Δν-ρ¯ Relation for δ Sct Stars using Eclipsing Binaries and Space Photometry

    NASA Astrophysics Data System (ADS)

    García Hernández, A.; Martín-Ruiz, S.; Monteiro, Mário J. P. F. G.; Suárez, J. C.; Reese, D. R.; Pascual-Granado, J.; Garrido, R.

    2015-10-01

    Delta Scuti (δ Sct) stars are intermediate-mass pulsators, whose intrinsic oscillations have been studied for decades. However, modeling their pulsations remains a real theoretical challenge, thereby even hampering the precise determination of global stellar parameters. In this work, we used space photometry observations of eclipsing binaries with a δ Sct component to obtain reliable physical parameters and oscillation frequencies. Using that information, we derived an observational scaling relation between the stellar mean density and a frequency pattern in the oscillation spectrum. This pattern is analogous to the solar-like large separation but in the low order regime. We also show that this relation is independent of the rotation rate. These findings open the possibility of accurately characterizing this type of pulsator and validate the frequency pattern as a new observable for δ Sct stars.
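    The frequency pattern mentioned above scales with the mean stellar density in the same square-root form as the solar-like large separation; a hedged sketch of the relation in solar units (the specific fitted coefficients of the paper are not reproduced here):

    ```latex
    % low-order large frequency separation vs. mean stellar density (solar-scaled form)
    \frac{\Delta\nu}{\Delta\nu_{\odot}} \simeq
    \left( \frac{\bar{\rho}}{\bar{\rho}_{\odot}} \right)^{1/2}
    ```

    Because the relation is found to be independent of the rotation rate, a measured Δν constrains the mean density even for fast rotators, where classical modeling is hardest.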

  4. High-efficiency 3 W/40 K single-stage pulse tube cryocooler for space application

    NASA Astrophysics Data System (ADS)

    Zhang, Ankuo; Wu, Yinong; Liu, Shaoshuai; Liu, Biqiang; Yang, Baoyu

    2018-03-01

    Temperature is an extremely important parameter for space-borne infrared detectors. To develop a quantum-well infrared photodetector (QWIP), a high-efficiency Stirling-type pulse tube cryocooler (PTC) has been designed, manufactured and experimentally investigated for providing a large cooling power at 40 K cold temperature. Simulated and experimental studies were carried out to analyse the effects of low temperature on different energy flows and losses, and the performance of the PTC was improved by optimizing components and parameters such as regenerator and operating frequency. A no-load lowest temperature of 26.2 K could be reached at a frequency of 51 Hz, and the PTC could efficiently offer cooling power of 3 W at 40 K cold temperature when the input power was 225 W. The efficiency relative to the Carnot efficiency was approximately 8.4%.

  5. An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves

    PubMed Central

    Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing

    2014-01-01

    Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes Julia sets’ parameters to generate a random sequence as the initial keys and gets the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modulo arithmetic and diffuse operation. In this method, it needs only a few parameters for the key generation, which greatly reduces the storage space. Moreover, because of the Julia sets’ properties, such as infiniteness and chaotic characteristics, the keys have high sensitivity even to a tiny perturbation. The experimental results indicate that the algorithm has large key space, good statistical property, high sensitivity for the keys, and effective resistance to the chosen-plaintext attack. PMID:24404181
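    The key-generation and scrambling pipeline described in the abstract can be sketched as follows. The Julia-set constant, grid size, and the simple additive diffusion are illustrative choices under stated assumptions, not the authors' exact scheme:

    ```python
    import numpy as np

    def julia_keys(c, n, iters=64):
        """Iterate z -> z^2 + c on an n x n grid; escape counts give chaotic key bytes."""
        xs = np.linspace(-1.5, 1.5, n)
        re, im = np.meshgrid(xs, xs)
        z = re + 1j * im
        count = np.zeros((n, n), dtype=np.int64)
        for _ in range(iters):
            mask = np.abs(z) < 2.0
            z[mask] = z[mask] ** 2 + c
            count += mask
        return (count % 256).astype(np.uint8)

    def d2xy(n, d):
        """Map distance d along the Hilbert curve to (x, y) on an n x n grid (n a power of 2)."""
        x = y = 0
        s, t = 1, d
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x  # rotate quadrant
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    def hilbert_scramble(keys):
        """Read the key matrix out in Hilbert-curve order to scramble it."""
        n = keys.shape[0]
        return np.array([keys[y, x] for x, y in (d2xy(n, d) for d in range(n * n))],
                        dtype=np.uint8)

    def encrypt(img, key_stream):
        c = (img.flatten().astype(np.int64) + key_stream.astype(np.int64)) % 256
        for i in range(1, c.size):          # simple forward diffusion
            c[i] = (c[i] + c[i - 1]) % 256
        return c.astype(np.uint8).reshape(img.shape)

    def decrypt(cipher, key_stream):
        c = cipher.flatten().astype(np.int64)
        for i in range(c.size - 1, 0, -1):  # undo diffusion in reverse order
            c[i] = (c[i] - c[i - 1]) % 256
        plain = (c - key_stream.astype(np.int64)) % 256
        return plain.astype(np.uint8).reshape(cipher.shape)
    ```

    Only the Julia-set constant c (and the iteration settings) need to be stored as the key, which is what gives the small key-storage footprint, while the chaotic sensitivity of the iteration to c provides the key sensitivity.
    
    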

  6. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces some of which may actually be punctured. To avoid loss of the entire mission the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
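    The redundancy argument can be made concrete with a small binomial-reliability sketch. The segment counts and per-segment puncture probabilities below are illustrative, not values from the paper:

    ```python
    from math import comb

    def prob_mission_success(n_segments, n_required, p_puncture):
        """Probability that at least n_required of n_segments survive,
        assuming independent per-segment puncture probability p_puncture."""
        q = 1.0 - p_puncture
        return sum(comb(n_segments, k) * q ** k * p_puncture ** (n_segments - k)
                   for k in range(n_required, n_segments + 1))
    ```

    For example, a monolithic radiator that must survive intact has reliability 1 - p, whereas a radiator of 110 thin-walled segments that only needs 100 survivors can tolerate ten punctures, so the walls (and hence the mass) can be made much thinner for the same mission reliability.
    
    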

  7. Laboratory simulation of space plasma phenomena*

    NASA Astrophysics Data System (ADS)

    Amatucci, B.; Tejero, E. M.; Ganguli, G.; Blackwell, D.; Enloe, C. L.; Gillman, E.; Walker, D.; Gatling, G.

    2017-12-01

    Laboratory devices, such as the Naval Research Laboratory's Space Physics Simulation Chamber, are large-scale experiments dedicated to the creation of large-volume plasmas with parameters realistically scaled to those found in various regions of the near-Earth space plasma environment. Such devices make valuable contributions to the understanding of space plasmas by investigating phenomena under carefully controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. By working in collaboration with in situ experimentalists to create realistic conditions scaled to those found during the observations of interest, the microphysics responsible for the observed events can be investigated in detail not possible in space. To date, numerous investigations of phenomena such as plasma waves, wave-particle interactions, and particle energization have been successfully performed in the laboratory. In addition to investigations such as plasma wave and instability studies, the laboratory devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this presentation, we will describe several examples of the laboratory investigation of space plasma waves and instabilities and diagnostic development. *This work supported by the NRL Base Program.

  8. Determination of Acreage Thermal Protection Foam Loss From Ice and Foam Impacts

    NASA Technical Reports Server (NTRS)

    Carney, Kelly S.; Lawrence, Charles

    2015-01-01

    A parametric study was conducted to establish Thermal Protection System (TPS) loss from foam and ice impact conditions similar to what might occur on the Space Launch System. This study was based upon the large amount of testing and analysis that was conducted with both ice and foam debris impacts on TPS acreage foam for the Space Shuttle Project External Tank. Test verified material models and modeling techniques that resulted from Space Shuttle related testing were utilized for this parametric study. Parameters varied include projectile mass, impact velocity and impact angle (5 degree and 10 degree impacts). The amount of TPS acreage foam loss as a result of the various impact conditions is presented.

  9. Rare behavior of growth processes via umbrella sampling of trajectories

    NASA Astrophysics Data System (ADS)

    Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen

    2018-03-01

    We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s -ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
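    The trajectory-space umbrella idea can be illustrated on a toy discrete-time growth process: a chain of 0/1 events sampled by Metropolis moves that flip single events, with an exp(sK) bias on the number of events K mimicking the s-ensemble. All parameters are illustrative:

    ```python
    import math
    import random

    def s_ensemble_mean_events(n_steps, p, s, n_sweeps, burn_in, rng):
        """Mean event count K in the biased ("s-ensemble") trajectory ensemble.

        Trajectory = n_steps independent 0/1 events, each occurring with
        probability p; the biased weight is proportional to
        p^K (1-p)^(n-K) * exp(s*K), sampled by single-event Metropolis flips.
        """
        traj = [1 if rng.random() < p else 0 for _ in range(n_steps)]
        K = sum(traj)
        log_odds = math.log(p / (1.0 - p))
        total = count = 0
        for sweep in range(n_sweeps):
            for _ in range(n_steps):  # one Metropolis sweep
                i = rng.randrange(n_steps)
                dK = 1 - 2 * traj[i]
                log_ratio = dK * (log_odds + s)
                if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
                    traj[i] ^= 1
                    K += dK
            if sweep >= burn_in:
                total += K
                count += 1
        return total / count
    ```

    Because the toy weight factorizes over events, the biased per-step probability is p*e^s / (p*e^s + 1 - p), giving an exact mean to check the sampler against; in a real growth process the events are correlated and such closed forms are unavailable, which is what makes trajectory umbrella sampling useful.
    
    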

  10. Crystallization and preliminary X-ray crystallographic analysis of the heterodimeric crotoxin complex and the isolated subunits crotapotin and phospholipase A{sub 2}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, K. F.; Murakami, M. T.; Cintra, A. C. O.

    2007-04-01

Crotoxin, a potent neurotoxin from the venom of the South American rattlesnake Crotalus durissus terrificus, exists as a heterodimer formed between a phospholipase A{sub 2} and a catalytically inactive acidic phospholipase A{sub 2} analogue (crotapotin). Large single crystals of the crotoxin complex and of the isolated subunits have been obtained. The crotoxin complex crystal belongs to the orthorhombic space group P2{sub 1}2{sub 1}2, with unit-cell parameters a = 38.2, b = 68.7, c = 84.2 Å, and diffracted to 1.75 Å resolution. The crystal of the phospholipase A{sub 2} domain belongs to the hexagonal space group P6{sub 1}22 (or its enantiomorph P6{sub 5}22), with unit-cell parameters a = b = 38.7, c = 286.7 Å, and diffracted to 2.6 Å resolution. The crotapotin crystal diffracted to 2.3 Å resolution; however, the highly diffuse diffraction pattern did not permit unambiguous assignment of the unit-cell parameters.

  11. Large Eddy Simulation of Heat Entrainment Under Arctic Sea Ice

    NASA Astrophysics Data System (ADS)

    Ramudu, Eshwan; Gelderloos, Renske; Yang, Di; Meneveau, Charles; Gnanadesikan, Anand

    2018-01-01

Arctic sea ice has declined rapidly in recent decades. The faster-than-projected retreat suggests that free-running large-scale climate models may not be accurately representing some key processes. The small-scale turbulent entrainment of heat from the mixed layer could be one such process. To better understand this mechanism, we model the Arctic Ocean's Canada Basin, which is characterized by a perennial anomalously warm Pacific Summer Water (PSW) layer residing at the base of the mixed layer and a summertime Near-Surface Temperature Maximum (NSTM) within the mixed layer trapping heat from solar radiation. We use large eddy simulation (LES) to investigate heat entrainment for different ice-drift velocities and different initial temperature profiles. The value of LES is that the resolved turbulent fluxes are greater than the subgrid-scale fluxes for most of our parameter space. The results show that the presence of the NSTM enhances heat entrainment from the mixed layer. Additionally, no PSW heat is entrained within the parameter space considered. We propose a scaling law for the ocean-to-ice heat flux which depends on the initial temperature anomaly in the NSTM layer and the ice-drift velocity. A case study of "The Great Arctic Cyclone of 2012" gives a turbulent heat flux from the mixed layer that is approximately 70% of the total ocean-to-ice heat flux estimated from the PIOMAS model often used for short-term predictions. The present results highlight the need for large-scale climate models to account for the NSTM layer.

  12. Modeling the Structure and Dynamics of Dwarf Spheroidal Galaxies with Dark Matter and Tides

    NASA Astrophysics Data System (ADS)

    Muñoz, Ricardo R.; Majewski, Steven R.; Johnston, Kathryn V.

    2008-05-01

    We report the results of N-body simulations of disrupting satellites aimed at exploring whether the observed features of dSphs can be accounted for with simple, mass-follows-light (MFL) models including tidal disruption. As a test case, we focus on the Carina dwarf spheroidal (dSph), which presently is the dSph system with the most extensive data at large radius. We find that previous N-body, MFL simulations of dSphs did not sufficiently explore the parameter space of satellite mass, density, and orbital shape to find adequate matches to Galactic dSph systems, whereas with a systematic survey of parameter space we are able to find tidally disrupting, MFL satellite models that rather faithfully reproduce Carina's velocity profile, velocity dispersion profile, and projected density distribution over its entire sampled radius. The successful MFL model satellites have very eccentric orbits, currently favored by CDM models, and central velocity dispersions that still yield an accurate representation of the bound mass and observed central M/L ~ 40 of Carina, despite inflation of the velocity dispersion outside the dSph core by unbound debris. Our survey of parameter space also allows us to address a number of commonly held misperceptions of tidal disruption and its observable effects on dSph structure and dynamics. The simulations suggest that even modest tidal disruption can have a profound effect on the observed dynamics of dSph stars at large radii. Satellites that are well described by tidally disrupting MFL models could still be fully compatible with ΛCDM if, for example, they represent a later stage in the evolution of luminous subhalos.

  13. Emulation: A fast stochastic Bayesian method to eliminate model space

    NASA Astrophysics Data System (ADS)

    Roberts, Alan; Hobbs, Richard; Goldstein, Michael

    2010-05-01

Joint inversion of large 3D datasets has been the goal of geophysicists ever since such datasets first became available. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, both kinds of scheme have proved prohibitively expensive, in both computing power and time, because of the normally very large model space that must be searched using forward-model simulators which take considerable time to run. At the heart of strategies for accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique that has been used with considerable success in a number of technical fields, such as astronomy, where the evolution of the universe has been modelled using this technique, and the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute, uncertainty-calibrated approximation to a forward-model simulator. We do this by modelling the output data from a number of forward simulator runs with a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use the emulator to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) data describing the same region), we can include in our modelling the uncertainties in the data measurements, the relationships between the various physical parameters involved, and the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties, and so the emulator is also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that uncertainties from all sources (both data and model) can be fully evaluated.
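    The screening loop described above can be sketched in a few lines: fit a cheap surrogate to a handful of simulator runs, calibrate its error against the full simulator, then discard candidate models whose emulated output is implausibly far from the observed data. Everything below (the toy simulator, the polynomial surrogate, the 3-sigma implausibility cutoff) is an illustrative assumption, not the authors' actual pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for an expensive forward simulator (illustrative only).
    def simulator(m):
        return np.sin(3.0 * m) + 0.5 * m**2

    # 1. Run the full simulator at a small number of design points.
    design = np.linspace(-2.0, 2.0, 25)
    runs = simulator(design)

    # 2. Fit a computationally cheap function (here a polynomial) to the runs.
    coeffs = np.polyfit(design, runs, deg=8)

    def emulate(m):
        return np.polyval(coeffs, m)

    # 3. Calibrate the emulator error against the full simulator output.
    sigma_em = np.std(runs - emulate(design))

    # 4. Screen a large prior model space: keep only plausible models.
    observed = simulator(0.7)              # pretend field observation
    sigma_obs = 0.05                       # assumed measurement uncertainty
    candidates = rng.uniform(-2.0, 2.0, 10000)
    implausibility = np.abs(emulate(candidates) - observed) / np.hypot(sigma_em, sigma_obs)
    plausible = candidates[implausibility < 3.0]
    ```

    In a real joint inversion a surrogate of this kind would be fitted per dataset (seismic, gravity, MT) and the implausibility measures combined, so that only the small surviving fraction of model space is passed on to the deterministic or MCMC inversion.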

  14. Multi-objective trajectory optimization for the space exploration vehicle

    NASA Astrophysics Data System (ADS)

    Qin, Xiaoli; Xiao, Zhen

    2016-07-01

The research determines a temperature-constrained optimal trajectory for the space exploration vehicle by developing an optimal control formulation and solving it using a variable-order quadrature collocation method with a Non-linear Programming (NLP) solver. The vehicle is assumed to be a space reconnaissance aircraft that has specified takeoff/landing locations, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom aircraft model is adapted from previous work and includes flight dynamics and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and exploration of space targets. In addition, the vehicle models include the environmental models (gravity and atmosphere). How these models are appropriately employed is key to gaining confidence in the results and conclusions of the research. Optimal trajectories are developed using several performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum distance. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for large-scale space exploration.

  15. Technology needs of advanced Earth observation spacecraft

    NASA Technical Reports Server (NTRS)

    Herbert, J. J.; Postuchow, J. R.; Schartel, W. A.

    1984-01-01

Remote sensing missions were synthesized which could contribute significantly to the understanding of global environmental parameters. Instruments capable of sensing important land and sea parameters are combined with a large antenna designed to passively quantify surface-emitted radiation at several wavelengths. A conceptual design for this large deployable antenna was developed. All subsystems required to make the antenna an autonomous spacecraft were conceptually designed. The entire package, including the necessary orbit-transfer propulsion, is folded to fit within the Space Transportation System (STS) cargo bay. After separation, the antenna, its integral feed mast, radiometer receivers, power system, and other instruments are automatically deployed and transferred to the operational orbit. The design resulted in an antenna with a major dimension of 120 meters, weighing 7650 kilograms, and operating at an altitude of 700 kilometers.

  16. PHYSICAL PROPERTIES OF LARGE AND SMALL GRANULES IN SOLAR QUIET REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Daren; Xie Zongxia; Hu Qinghua

The normal mode observations of seven quiet regions obtained by the Hinode spacecraft are analyzed to study the physical properties of granules. An artificial intelligence technique is introduced to automatically find the spatial distribution of granules in feature spaces. In this work, we investigate the dependence of granular continuum intensity, mean Doppler velocity, and magnetic fields on granular diameter. We recognized 71,538 granules by an automatic segmentation technique and then extracted five properties to describe the granules: diameter, continuum intensity, Doppler velocity, and longitudinal and transverse magnetic flux density. To automatically explore the intrinsic structures of the granules in the five-dimensional parameter space, the X-means clustering algorithm and one-rule classifier are introduced to define the rules for classifying the granules. It is found that diameter is a dominating parameter in classifying the granules and two families of granules are derived: small granules with diameters smaller than 1.''44, and large granules with diameters larger than 1.''44. Based on statistical analysis of the detected granules, the following results are derived: (1) the averages of diameter, continuum intensity, and Doppler velocity in the upward direction of large granules are larger than those of small granules; (2) the averages of absolute longitudinal, transverse, and unsigned flux density of large granules are smaller than those of small granules; (3) for small granules, the average of continuum intensity increases with their diameters, while the averages of Doppler velocity, transverse, absolute longitudinal, and unsigned magnetic flux density decrease with their diameters. However, the mean properties of large granules are stable; (4) the intensity distributions of all granules and small granules do not satisfy a Gaussian distribution, while that of large granules almost agrees with a normal distribution with a peak at 1.04 I{sub 0}.

  17. High-Latitude Topside Ionospheric Vertical Electron-Density-Profile Changes in Response to Large Magnetic Storms

    NASA Technical Reports Server (NTRS)

    Benson, Robert F.; Fainberg, Joseph; Osherovich, Vladimir A.; Truhlik, Vladimir; Wang, Yongli; Bilitza, Dieter; Fung, Shing F.

    2015-01-01

Large magnetic-storm-induced changes have been detected in high-latitude topside vertical electron-density profiles Ne(h). The investigation was based on the large database of topside Ne(h) profiles and digital topside ionograms from the International Satellites for Ionospheric Studies (ISIS) program available from the NASA Space Physics Data Facility (SPDF) at http://spdf.gsfc.nasa.gov/isis/isis-status.html. This large database enabled Ne(h) profiles to be obtained when an ISIS satellite passed through nearly the same region of space before, during, and after a major magnetic storm. A major goal was to relate the magnetic-storm-induced high-latitude Ne(h) profile changes to solar-wind parameters. Thus an additional data constraint was to consider only storms where solar-wind data were available from the NASA/SPDF OMNIWeb database. Ten large magnetic storms (with Dst less than -100 nT) were identified that satisfied both the Ne(h) profile and the solar-wind data constraints. During five of these storms topside ionospheric Ne(h) profiles were available in the high-latitude northern hemisphere and during the other five storms similar ionospheric data were available in the southern hemisphere. Large Ne(h) changes were observed during each one of these storms. In this paper we concentrate on the northern hemisphere. The data coverage was best for the northern-hemisphere winter. Here Ne(h) profile enhancements were always observed when the magnetic local time (MLT) was between 00 and 03 and Ne(h) profile depletions were always observed between 08 and 10 MLT. The observed Ne(h) deviations were compared with solar-wind parameters, with appropriate time shifts, for four storms.

  18. An information geometric approach to least squares minimization

    NASA Astrophysics Data System (ADS)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
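    As a concrete reference point, the standard Levenberg-Marquardt iteration the abstract discusses can be written in a few lines: a damped normal-equations step that interpolates between Gauss-Newton and gradient descent. The exponential test model and all numerical settings below are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def levenberg_marquardt(residual, p0, n_iter=200, lam=1e-3):
        """Minimize 0.5*||residual(p)||^2 with Levenberg-Marquardt steps."""
        p = np.asarray(p0, dtype=float)
        for _ in range(n_iter):
            r = residual(p)
            # Finite-difference Jacobian J[i, j] = d r_i / d p_j.
            eps = 1e-7
            J = np.column_stack([
                (residual(p + eps * np.eye(p.size)[j]) - r) / eps
                for j in range(p.size)
            ])
            g = J.T @ r
            H = J.T @ J
            # Damped normal equations: larger lam -> shorter, gradient-like step.
            step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
            if np.sum(residual(p + step)**2) < np.sum(r**2):
                p, lam = p + step, lam / 3.0   # accept; trust the model more
            else:
                lam *= 3.0                     # reject; damp harder
        return p

    # Toy problem: fit y = a*exp(-b*t) to noiseless synthetic data.
    t = np.linspace(0.0, 5.0, 30)
    y = 2.0 * np.exp(-0.5 * t)
    fit = levenberg_marquardt(lambda p: p[0] * np.exp(-p[1] * t) - y, [1.0, 1.0])
    ```

    On ill-conditioned "sloshy" fits with many parameters, this plain iteration is exactly where the geometric (geodesic) corrections the authors propose are intended to help.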

  19. Fundamental properties of resonances

    PubMed Central

    Ceci, S.; Hadžimehmedović, M.; Osmanović, H.; Percan, A.; Zauner, B.

    2017-01-01

All resonances, from hydrogen nuclei excited by high-energy gamma rays in deep space to newly discovered particles produced at the Large Hadron Collider, should be described by the same fundamental physical quantities. However, two distinct sets of properties are used to describe resonances: the pole parameters (complex pole position and residue) and the Breit-Wigner parameters (mass, width, and branching fractions). There is an ongoing decades-old debate about which of the two should be abandoned. In this study of nucleon resonances appearing in elastic pion-nucleon scattering, we discover an intricate interplay of the parameters from both sets, and realize that neither set is completely independent or fundamental on its own. PMID:28345595

  20. Cosmological Constraint on Brans-Dicke Theory

    NASA Astrophysics Data System (ADS)

    Chen, Xuelei; Wu, Fengquan

We develop the covariant formalism of cosmological perturbation theory for Brans-Dicke gravity, and use it to calculate the cosmic microwave background (CMB) anisotropy and large-scale structure (LSS) power spectrum. We introduce a new parameter ζ, related to the Brans-Dicke parameter ω by ζ = ln(1/ω + 1), and use the Markov-Chain Monte Carlo (MCMC) method to explore the parameter space. Using the latest CMB data published by the WMAP, ACBAR, CBI, and Boomerang teams, and the LSS data from the SDSS survey DR4, we find that the 2σ (95.5%) bound on ζ is about |ζ| < 10^-2, or |ω| > 10^2; the precise limit depends somewhat on the prior used.
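    The reparameterization is easy to check numerically: inverting ζ = ln(1/ω + 1) gives ω = 1/(e^ζ - 1), so the quoted bound |ζ| < 10^-2 corresponds to ω of order 10^2. A quick sketch:

    ```python
    import math

    # Invert the reparameterization zeta = ln(1/omega + 1).
    def omega_from_zeta(zeta):
        return 1.0 / (math.exp(zeta) - 1.0)

    # The quoted 2-sigma limit |zeta| < 1e-2 corresponds to omega > ~1e2.
    omega_at_bound = omega_from_zeta(1e-2)  # about 99.5
    ```

    For small ζ the expansion e^ζ - 1 ≈ ζ makes the correspondence ω ≈ 1/ζ transparent, which is why the ζ parameterization behaves well in the MCMC as ω → ∞ (general relativity limit).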

  1. Static shape control for flexible structures

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

An integrated methodology is described for defining static shape control laws for large flexible structures. The techniques include modeling, identifying, and estimating the control laws of distributed systems characterized in terms of infinite-dimensional state and parameter spaces. The models are expressed as interconnected elliptic partial differential equations governing a range of static loads, with the capability of analyzing electromagnetic fields around antenna systems. A second-order analysis is carried out for statistical errors, and model parameters are determined by maximizing an appropriately defined likelihood functional which adjusts the model to observational data. The parameter estimates are derived from the conditional mean of the observational data, resulting in a least-squares superposition of shape functions obtained from the structural model.

  2. Transport Regimes Spanning Magnetization-Coupling Phase Space

    NASA Astrophysics Data System (ADS)

    Baalrud, Scott D.; Tiwari, Sanat; Daligault, Jerome

    2017-10-01

The manner in which transport properties vary over the entire parameter space of coupling and magnetization strength is explored. Four regimes are identified based on the relative size of the gyroradius compared to other fundamental length scales: the collision mean free path, Debye length, distance of closest approach, and interparticle spacing. Molecular dynamics simulations of self-diffusion and temperature anisotropy relaxation spanning the parameter space are found to agree well with the predicted boundaries. Comparison with existing theories reveals regimes where they succeed, where they fail, and where no theory has yet been developed. The results suggest that magnetic fields may be used to assist ultracold neutral plasma experiments to reach regimes of stronger electron coupling by reducing heating of electrons in the direction perpendicular to the magnetic field. By constraining electron motion along the direction of the magnetic field, the overall electron temperature is reduced by nearly a factor of three. A large temperature anisotropy develops as a result, which can be maintained for a long time in the regime of high electron magnetization. Work supported by LDRD project 20150520ER at LANL, AFOSR FA9550-16-1-0221 and US DOE Award DE-SC00161.

  3. Computational study of the shock driven instability of a multiphase particle-gas system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

This paper considers the interaction of a shock wave with a multiphase particle-gas system which creates an instability somewhat similar to the Richtmyer-Meshkov instability but with a larger parameter space. Because this parameter space is large, we only present an introductory survey of the effects of many of these parameters. We highlight the effects of particle-gas coupling, incident shock strength, particle size, effective system density differences, and multiple particle relaxation time effects. We focus on dilute flows with mass loading up to 40% and do not attempt to cover all parametric combinations. Instead, we vary one parameter at a time, leaving additional parametric combinations for future work. The simulations are run with the Ares code, developed at Lawrence Livermore National Laboratory, which uses a multiphase particulate transport method to model two-way momentum and energy coupling. A brief validation of these models is presented and coupling effects are explored. It is shown that even for small particles, on the order of 1 μm, multi-phase coupling effects are important and diminish the circulation deposition on the interface by up to 25%. These coupling effects are shown to create large temperature deviations from the dusty gas approximation, up to 20% greater, especially at higher shock strengths. It is also found that for a multiphase instability, the vortex sheet deposited at the interface separates into two sheets. In conclusion, depending on the particle and particle-gas Atwood numbers, the instability may be suppressed or enhanced by the interactions of these two vortex sheets.

  4. Computational study of the shock driven instability of a multiphase particle-gas system

    DOE PAGES

    None, None

    2016-02-01

This paper considers the interaction of a shock wave with a multiphase particle-gas system which creates an instability somewhat similar to the Richtmyer-Meshkov instability but with a larger parameter space. Because this parameter space is large, we only present an introductory survey of the effects of many of these parameters. We highlight the effects of particle-gas coupling, incident shock strength, particle size, effective system density differences, and multiple particle relaxation time effects. We focus on dilute flows with mass loading up to 40% and do not attempt to cover all parametric combinations. Instead, we vary one parameter at a time, leaving additional parametric combinations for future work. The simulations are run with the Ares code, developed at Lawrence Livermore National Laboratory, which uses a multiphase particulate transport method to model two-way momentum and energy coupling. A brief validation of these models is presented and coupling effects are explored. It is shown that even for small particles, on the order of 1 μm, multi-phase coupling effects are important and diminish the circulation deposition on the interface by up to 25%. These coupling effects are shown to create large temperature deviations from the dusty gas approximation, up to 20% greater, especially at higher shock strengths. It is also found that for a multiphase instability, the vortex sheet deposited at the interface separates into two sheets. In conclusion, depending on the particle and particle-gas Atwood numbers, the instability may be suppressed or enhanced by the interactions of these two vortex sheets.

  5. Computational study of the shock driven instability of a multiphase particle-gas system

    NASA Astrophysics Data System (ADS)

    McFarland, Jacob A.; Black, Wolfgang J.; Dahal, Jeevan; Morgan, Brandon E.

    2016-02-01

    This paper considers the interaction of a shock wave with a multiphase particle-gas system which creates an instability similar in some ways to the Richtmyer-Meshkov instability but with a larger parameter space. As this parameter space is large, we only present an introductory survey of the effects of many of these parameters. We highlight the effects of particle-gas coupling, incident shock strength, particle size, effective system density differences, and multiple particle relaxation time effects. We focus on dilute flows with mass loading up to 40% and do not attempt to cover all parametric combinations. Instead, we vary one parameter at a time leaving additional parametric combinations for future work. The simulations are run with the Ares code, developed at Lawrence Livermore National Laboratory, which uses a multiphase particulate transport method to model two-way momentum and energy coupling. A brief validation of these models is presented and coupling effects are explored. It is shown that even for small particles, on the order of 1 μm, multi-phase coupling effects are important and diminish the circulation deposition on the interface by up to 25%. These coupling effects are shown to create large temperature deviations from the dusty gas approximation, up to 20% greater, especially at higher shock strengths. It is also found that for a multiphase instability, the vortex sheet deposited at the interface separates into two sheets. Depending on the particle and particle-gas Atwood numbers, the instability may be suppressed or enhanced by the interactions of these two vortex sheets.

  6. Real-time imaging of perivascular transport of nanoparticles during convection-enhanced delivery in the rat cortex.

    PubMed

    Foley, Conor P; Nishimura, Nozomi; Neeves, Keith B; Schaffer, Chris B; Olbricht, William L

    2012-02-01

Convection-enhanced delivery (CED) is a promising technique for administering large therapeutics that do not readily cross the blood-brain barrier to neural tissue. It is of vital importance to understand how large drug constructs move through neural tissue during CED to optimize construct and delivery parameters so that drugs are concentrated in the targeted tissue, with minimal leakage outside the targeted zone. Experiments have shown that liposomes, viral vectors, high molecular weight tracers, and nanoparticles infused into neural tissue localize in the perivascular spaces of blood vessels within the brain parenchyma. In this work, we used two-photon excited fluorescence microscopy to monitor the real-time distribution of nanoparticles infused in the cortex of live, anesthetized rats via CED. Fluorescent nanoparticles of 24 and 100 nm nominal diameters were infused into rat cortex through microfluidic probes. We found that perivascular spaces provide a high permeability path for rapid convective transport of large nanoparticles through tissue, and that the effects of perivascular spaces on transport are more significant for larger particles that undergo hindered transport through the extracellular matrix. This suggests that the vascular topology of the target tissue volume must be considered when delivering large therapeutic constructs via CED.

  7. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability of adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably the basic principles that a starting configuration is randomly selected from within the parameter space, that the algorithm tests other configurations with the goal of finding the globally optimal solution, and that the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space, to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
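    The conventional SA loop that RBSA builds on can be sketched as follows; the shrinking proposal radius and linear cooling schedule are illustrative choices, and the multimodal test objective is made up for demonstration (the recursive branching itself is not reproduced here):

    ```python
    import math
    import random

    def simulated_annealing(objective, lo, hi, n_steps=5000, seed=0):
        """Conventional SA: random start, shrinking proposal radius, and a
        temperature-controlled chance of accepting worse configurations."""
        rng = random.Random(seed)
        x = rng.uniform(lo, hi)
        fx = objective(x)
        best, fbest = x, fx
        for k in range(n_steps):
            frac = 1.0 - k / n_steps          # decays from 1 toward 0
            radius = 0.5 * (hi - lo) * frac   # shrinking search region
            temp = max(1e-6, frac)            # linear cooling schedule
            cand = min(hi, max(lo, x + rng.uniform(-radius, radius)))
            fc = objective(cand)
            if fc < fbest:                    # track the best point ever seen
                best, fbest = cand, fc
            # Metropolis rule: always accept downhill, sometimes accept uphill.
            if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
                x, fx = cand, fc
        return best, fbest

    # Made-up multimodal objective: global minimum f(0) = -1, local minima nearby.
    best, fbest = simulated_annealing(lambda x: -math.cos(3 * x) + 0.1 * x * x,
                                      -10.0, 10.0)
    ```

    RBSA's contribution is to run many such trajectories in a branching tree, each with its own trust region, rather than the single trajectory shown here.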

  8. A panning DLT procedure for three-dimensional videography.

    PubMed

    Yu, B; Koh, T J; Hay, J G

    1993-06-01

The direct linear transformation (DLT) method [Abdel-Aziz and Karara, APS Symposium on Photogrammetry. American Society of Photogrammetry, Falls Church, VA (1971)] is widely used in biomechanics to obtain three-dimensional space coordinates from film and video records. This method has some major shortcomings when used to analyze events which take place over large areas. To overcome these shortcomings, a three-dimensional data collection method based on the DLT method, and making use of panning cameras, was developed. Several small single control volumes were combined to construct a large total control volume. For each single control volume, a regression equation (calibration equation) is developed to express each of the 11 DLT parameters as a function of camera orientation, so that the DLT parameters can then be estimated for arbitrary camera orientations. Once the DLT parameters are known for at least two cameras, and the associated two-dimensional film or video coordinates of the event are obtained, the desired three-dimensional space coordinates can be computed. In a laboratory test, five single control volumes (in a total control volume of 24.40 × 2.44 × 2.44 m³) were used to test the effect of the position of the single control volume on the accuracy of the computed three-dimensional space coordinates. Linear and quadratic calibration equations were used to test the effect of the order of the equation on the accuracy of the computed three-dimensional space coordinates. For four of the five single control volumes tested, the mean resultant errors associated with the use of the linear calibration equation were significantly larger than those associated with the use of the quadratic calibration equation. The position of the single control volume had no significant effect on the mean resultant errors in the computed three-dimensional coordinates when the quadratic calibration equation was used.
Under the same data collection conditions, the mean resultant errors in the computed three dimensional coordinates associated with the panning and stationary DLT methods were 17 and 22 mm, respectively. The major advantages of the panning DLT method lie in the large image sizes obtained and in the ease with which the data can be collected. The method also has potential for use in a wide variety of contexts. The major shortcoming of the method is the large amount of digitizing necessary to calibrate the total control volume. Adaptations of the method to reduce the amount of digitizing required are being explored.
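The reconstruction step described above — combining each camera's 11 DLT parameters with its observed 2D image coordinates into a linear least-squares system for the 3D point — can be sketched as below. This is a generic DLT illustration with made-up camera parameter values, not the paper's panning calibration procedure:

```python
import numpy as np

def dlt_reconstruct(dlt_params, image_coords):
    """Recover a 3D point from >=2 cameras' 11 DLT parameters and (u, v) observations."""
    A, b = [], []
    for L, (u, v) in zip(dlt_params, image_coords):
        # each camera contributes two linear equations in (x, y, z)
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.extend([u - L[3], v - L[7]])
    xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return xyz

def project(L, p):
    """Forward DLT projection: 3D point -> (u, v) image coordinates."""
    x, y, z = p
    d = L[8] * x + L[9] * y + L[10] * z + 1.0
    return ((L[0] * x + L[1] * y + L[2] * z + L[3]) / d,
            (L[4] * x + L[5] * y + L[6] * z + L[7]) / d)

# two synthetic cameras (illustrative parameter values only)
cam1 = np.array([1.0, 0.1, 0.0, 5.0, 0.0, 1.0, 0.2, 3.0, 0.001, 0.0, 0.002])
cam2 = np.array([0.9, 0.0, 0.3, 2.0, 0.1, 1.1, 0.0, 4.0, 0.0, 0.001, 0.001])
p_true = np.array([1.2, 0.8, 2.5])
obs = [project(cam1, p_true), project(cam2, p_true)]
p_est = dlt_reconstruct([cam1, cam2], obs)
```

With noise-free observations the four equations are consistent and the least-squares solution recovers the point exactly; in practice the panning method would first evaluate the calibration equations to obtain each camera's 11 parameters at the current orientation.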

  9. Forecasting Propagation and Evolution of CMEs in an Operational Setting: What Has Been Learned

    NASA Technical Reports Server (NTRS)

    Zheng, Yihua; Macneice, Peter; Odstrcil, Dusan; Mays, M. L.; Rastaetter, Lutz; Pulkkinen, Antti; Taktakishvili, Aleksandre; Hesse, Michael; Kuznetsova, M. Masha; Lee, Hyesook

    2013-01-01

    Coronal mass ejections (CMEs), one of the major types of solar eruption, not only impact space weather but can also have significant societal consequences. CMEs cause intense geomagnetic storms and drive fast-mode shocks that accelerate charged particles, potentially resulting in enhanced radiation levels in both ions and electrons. Human and technological assets in space can be endangered as a result. CMEs are also the major contributor to generating large-amplitude Geomagnetically Induced Currents (GICs), which are a source of concern for power grid safety. Due to their space weather significance, forecasting the evolution and impacts of CMEs has become a much desired capability for space weather operations worldwide. Based on our operational experience at the Space Weather Research Center at NASA Goddard Space Flight Center (http://swrc.gsfc.nasa.gov), we present here some of the insights gained about accurately predicting CME impacts, particularly in relation to space weather operations. These include: 1. The need to maximize information to get an accurate handle on three-dimensional (3-D) CME kinetic parameters and therefore improve CME forecasts; 2. The potential use of CME simulation results for qualitative prediction of regions of space where solar energetic particles (SEPs) may be found; 3. The need to include all CMEs occurring within a 24 h period for a better representation of CME interactions; 4. Various other important parameters in forecasting CME evolution in interplanetary space, with special emphasis on the CME propagation direction. A future direction for our CME forecasting is to employ the ensemble modeling approach.

  10. Forecasting propagation and evolution of CMEs in an operational setting: What has been learned

    NASA Astrophysics Data System (ADS)

    Zheng, Yihua; Macneice, Peter; Odstrcil, Dusan; Mays, M. L.; Rastaetter, Lutz; Pulkkinen, Antti; Taktakishvili, Aleksandre; Hesse, Michael; Masha Kuznetsova, M.; Lee, Hyesook; Chulaki, Anna

    2013-10-01

    Coronal mass ejections (CMEs), one of the major types of solar eruption, not only impact space weather but can also have significant societal consequences. CMEs cause intense geomagnetic storms and drive fast-mode shocks that accelerate charged particles, potentially resulting in enhanced radiation levels in both ions and electrons. Human and technological assets in space can be endangered as a result. CMEs are also the major contributor to generating large-amplitude Geomagnetically Induced Currents (GICs), which are a source of concern for power grid safety. Due to their space weather significance, forecasting the evolution and impacts of CMEs has become a much desired capability for space weather operations worldwide. Based on our operational experience at the Space Weather Research Center at NASA Goddard Space Flight Center (http://swrc.gsfc.nasa.gov), we present here some of the insights gained about accurately predicting CME impacts, particularly in relation to space weather operations. These include: 1. The need to maximize information to get an accurate handle on three-dimensional (3-D) CME kinetic parameters and therefore improve CME forecasts; 2. The potential use of CME simulation results for qualitative prediction of regions of space where solar energetic particles (SEPs) may be found; 3. The need to include all CMEs occurring within a 24 h period for a better representation of CME interactions; 4. Various other important parameters in forecasting CME evolution in interplanetary space, with special emphasis on the CME propagation direction. A future direction for our CME forecasting is to employ the ensemble modeling approach.

  11. AMTD - Advanced Mirror Technology Development in Mechanical Stability

    NASA Technical Reports Server (NTRS)

    Knight, J. Brent

    2015-01-01

    Analytical tools and processes are being developed at NASA Marshall Space Flight Center in support of the Advanced Mirror Technology Development (AMTD) project. One facet of optical performance is mechanical stability with respect to structural dynamics. Pertinent parameters are: (1) the spacecraft structural design, (2) the mechanical disturbances on board the spacecraft (sources of vibratory/transient motion such as reaction wheels), (3) the vibration isolation systems (invariably required to meet future science needs), and (4) the dynamic characteristics of the optical system itself. With stability requirements of future large aperture space telescopes being in the low picometer regime, it is paramount that all sources of mechanical excitation be considered in both feasibility studies and detailed analyses. The primary objective of this paper is to lay out a path to perform feasibility studies of future large aperture space telescope projects which require extreme stability. To that end, a high-level overview of a structural dynamic analysis process to assess an integrated spacecraft and optical system is included.

  12. Development of reaction-sintered SiC mirror for space-borne optics

    NASA Astrophysics Data System (ADS)

    Yui, Yukari Y.; Kimura, Toshiyoshi; Tange, Yoshio

    2017-11-01

    We are developing a high-strength reaction-sintered silicon carbide (RS-SiC) mirror as one of the new promising candidates for large-diameter space-borne optics. In order to observe the Earth's surface or atmosphere with high spatial resolution from geostationary orbit, larger-diameter primary mirrors of 1-2 m are required. One of the difficult problems to be solved to realize such an optical system is to obtain as flat a mirror surface as possible, ensuring imaging performance in the infrared-visible-ultraviolet wavelength region. This means that homogeneous nanometer-order surface flatness/roughness is required for the mirror. The high-strength RS-SiC developed and manufactured by TOSHIBA is one of the most promising and feasible candidates for this purpose. Small RS-SiC plane sample mirrors have been manufactured, and their basic physical parameters and optical performance have been measured. We show the current state of the art of the RS-SiC mirror and the feasibility of a large-diameter RS-SiC mirror for space-borne optics.

  13. Cosmological constraints on extended Galileon models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Antonio De; Tsujikawa, Shinji, E-mail: antoniod@nu.ac.th, E-mail: shinji@rs.kagu.tus.ac.jp

    2012-03-01

    The extended Galileon models possess tracker solutions with de Sitter attractors along which the dark energy equation of state is constant during the matter-dominated epoch, i.e. w_DE = −1 − s, where s is a positive constant. Even with this phantom equation of state there are viable parameter spaces in which ghosts and Laplacian instabilities are absent. Using the observational data of type Ia supernovae, the cosmic microwave background (CMB), and baryon acoustic oscillations, we place constraints on the tracker solutions at the background level and find that the parameter s is constrained to be s = 0.034 (+0.327, −0.034) (95% CL) in the flat Universe. In order to break the degeneracy between the models we also study the evolution of cosmological density perturbations relevant to the large-scale structure (LSS) and the Integrated Sachs-Wolfe (ISW) effect in the CMB. We show that, depending on the model parameters, the LSS and the ISW effect are either positively or negatively correlated. It is then possible to constrain viable parameter spaces further from the observational data of the ISW-LSS cross-correlation as well as from the matter power spectrum.

  14. Virtual walks in spin space: A study in a family of two-parameter models

    NASA Astrophysics Data System (ADS)

    Mullick, Pratik; Sen, Parongama

    2018-05-01

    We investigate the dynamics of classical spins mapped as walkers in a virtual "spin" space using a generalized two-parameter family of spin models characterized by parameters y and z [de Oliveira et al., J. Phys. A 26, 2317 (1993), 10.1088/0305-4470/26/10/006]. The behavior of S(x,t), the probability that the walker is at position x at time t, is studied in detail. In general S(x,t) ~ t^(−α) f(x/t^α), with α ≈ 1 or 0.5 at large times depending on the parameters. In particular, S(x,t) for the point y = 1, z = 0.5, corresponding to the Voter model, shows a crossover in time; associated with this crossover, two timescales can be defined which vary with the system size L as L^2 log L. We also show that as the Voter model point is approached from the disordered regions along different directions, the width of the Gaussian distribution S(x,t) diverges in a power-law manner with different exponents. For the majority Voter case, the results indicate that the virtual walk can detect the phase transition perhaps more efficiently than other nonequilibrium methods.
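The scaling form S(x,t) ~ t^(−α) f(x/t^α) can be checked numerically by measuring how the width of the walker distribution grows with time. The sketch below uses a plain symmetric ±1 random walk as a stand-in for the spin-space walk (an assumption; the paper's walkers are driven by spin-flip dynamics), for which the diffusive exponent α = 0.5 should be recovered:

```python
import numpy as np

rng = np.random.default_rng(0)
n_walkers, t_max = 10000, 400

# symmetric +/-1 steps: a stand-in for the spin-flip-driven virtual walk
steps = rng.choice([-1, 1], size=(n_walkers, t_max))
x = np.cumsum(steps, axis=1)

# width of S(x, t) should grow as t**alpha; estimate alpha from two times
w1, w2 = x[:, 99].std(), x[:, 399].std()
alpha = np.log(w2 / w1) / np.log(400 / 100)
```

A collapse plot of t^α S(x,t) against x/t^α at several times would show the scaling function f directly; estimating α from the width ratio is the minimal version of that check.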

  15. A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy

    PubMed Central

    Wen, Hui; Xie, Weixin; Pei, Jihong

    2016-01-01

    This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a BP network in cascade, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network is used for nonlinear classification. The optimized learning strategy is as follows. First, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and the node parameters, and a form of heterogeneous-sample repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network then performs the adaptive nonlinear mapping of the sample space. Next, the number of adaptively generated RBF hidden nodes determines the number of subsequent BP input nodes, and the overall SAHRBF-BP classifier is built up. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP. In particular, on most low-dimensional data sets with large numbers of samples, the classification performance of SAHRBF-BP outperforms other SLFN training algorithms. PMID:27792737
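The two-stage idea — a Gaussian RBF hidden layer for nonlinear kernel mapping, followed by a gradient-trained output stage — can be sketched minimally as below. The centers here are simply fixed at the known cluster means (an assumption for illustration; the paper adapts their number and parameters with a potential function and a repulsive-force refinement), and a logistic output layer stands in for the BP stage:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy XOR-style data: not linearly separable in input space,
# but separable after the RBF mapping
means = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], float)
labels = np.array([0, 0, 1, 1])
X = np.vstack([m + 0.1 * rng.standard_normal((50, 2)) for m in means])
y = np.repeat(labels, 50)

# stage 1: Gaussian RBF hidden layer (fixed centers -- illustrative only)
centers, sigma = means, 0.5
Phi = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * sigma**2))

# stage 2: output layer trained by gradient descent on logistic loss
w, b = np.zeros(len(centers)), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Phi @ w + b)))
    grad = p - y
    w -= 0.5 * Phi.T @ grad / len(y)
    b -= 0.5 * grad.mean()

acc = ((Phi @ w + b > 0).astype(int) == y).mean()
```

The RBF mapping turns the XOR-style problem into one a linear output stage can solve, which is the structural point of the cascade.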

  16. Electroweak baryogenesis in two Higgs doublet models and B meson anomalies

    NASA Astrophysics Data System (ADS)

    Cline, James M.; Kainulainen, Kimmo; Trott, Michael

    2011-11-01

    Motivated by 3.9σ evidence of a CP-violating phase beyond the standard model in the like-sign dimuon asymmetry reported by DØ, we examine the potential for two Higgs doublet models (2HDMs) to achieve successful electroweak baryogenesis (EWBG) while explaining the dimuon anomaly. Our emphasis is on the minimal flavour violating 2HDM, but our numerical scans of the model parameter space include type I and type II models as special cases. We incorporate relevant particle physics constraints, including electroweak precision data, b → sγ, the neutron electric dipole moment, R_b, and perturbative coupling bounds to constrain the model. Surprisingly, we find that a large enough baryon asymmetry is only consistently achieved in a small subset of the 2HDM parameter space, regardless of trying to simultaneously account for any B physics anomaly. There is some tension between simultaneous explanation of the dimuon anomaly and baryogenesis, but using a Markov chain Monte Carlo we find several models within 1σ of the central values. We point out shortcomings of previous studies that reached different conclusions. The restricted parameter space that allows for EWBG makes this scenario highly predictive for collider searches. We discuss the most promising signatures to pursue at the LHC for EWBG-compatible models.
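The Markov chain Monte Carlo scan mentioned above is, at its core, a Metropolis random walk through parameter space weighted by a likelihood built from the experimental constraints. A minimal sketch of that machinery on a toy two-parameter Gaussian likelihood (an assumption; the paper's likelihood combines b → sγ, the neutron EDM, R_b, and other constraints) looks like:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_like(theta):
    # toy stand-in likelihood: Gaussian around illustrative "central values";
    # a real scan would evaluate the physics constraints at each point
    return -0.5 * np.sum(((theta - np.array([0.5, -1.0])) / 0.2) ** 2)

theta = np.zeros(2)
chain = []
for _ in range(5000):
    prop = theta + 0.3 * rng.standard_normal(2)     # random-walk proposal
    # Metropolis accept/reject on the likelihood ratio
    if np.log(rng.random()) < log_like(prop) - log_like(theta):
        theta = prop
    chain.append(theta)

chain = np.array(chain)
post_mean = chain[2000:].mean(axis=0)   # discard burn-in
```

Points within 1σ of the central values are then simply the chain samples with log-likelihood above the corresponding threshold.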

  17. Life Support Filtration System Trade Study for Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Agui, Juan H.; Perry, Jay L.

    2017-01-01

    The National Aeronautics and Space Administration's (NASA) technical developments for highly reliable life support systems aim to maximize the viability of long duration deep space missions. Among the life support system functions, airborne particulate matter filtration is a significant driver of launch mass because of the large geometry required to provide adequate filtration performance and because of the number of replacement filters needed to sustain a mission. A trade analysis incorporating various launch, operational and maintenance parameters was conducted to investigate the trade-offs between the various particulate matter filtration configurations. In addition to typical launch parameters such as mass, volume and power, the amount of crew time dedicated to system maintenance becomes an increasingly crucial factor for long duration missions. The trade analysis evaluated these parameters for conventional particulate matter filtration technologies and a new multi-stage particulate matter filtration system under development by NASA's Glenn Research Center. The multi-stage filtration system features modular components that allow for physical configuration flexibility. Specifically, the filtration system components can be configured in distributed, centralized, and hybrid physical layouts, which can result in considerable mass savings compared to conventional particulate matter filtration technologies. The trade analysis results are presented and implications for future transit and surface missions are discussed.

  18. M³: A New Muon Missing Momentum Experiment to Probe (g−2)_μ and Dark Matter at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahn, Yonatan; Krnjaic, Gordan; Tran, Nhan

    New light, weakly-coupled particles are commonly invoked to address the persistent ~4σ anomaly in (g−2)_μ and serve as mediators between dark and visible matter. If such particles couple predominantly to heavier generations and decay invisibly, much of their best-motivated parameter space is inaccessible with existing experimental techniques. In this paper, we present a new fixed-target, missing-momentum search strategy to probe invisibly decaying particles that couple preferentially to muons. In our setup, a relativistic muon beam impinges on a thick active target. The signal consists of events in which a muon loses a large fraction of its incident momentum inside the target without initiating any detectable electromagnetic or hadronic activity in downstream veto systems. We propose a two-phase experiment, M³ (Muon Missing Momentum), based at Fermilab. Phase 1 with ~10^10 muons on target can test the remaining parameter space for which light invisibly-decaying particles can resolve the (g−2)_μ anomaly, while Phase 2 with ~10^13 muons on target can test much of the predictive parameter space over which sub-GeV dark matter achieves freeze-out via muon-philic forces, including gauged U(1)_{Lμ−Lτ}.

  19. The measurement and prediction of proton upset

    NASA Astrophysics Data System (ADS)

    Shimano, Y.; Goka, T.; Kuboyama, S.; Kawachi, K.; Kanai, T.

    1989-12-01

    The authors evaluate tolerance to proton upset for three kinds of memories and one microprocessor unit for space use by irradiating them with high-energy protons up to nearly 70 MeV. They predict the error rates of these memories using a modified semi-empirical equation of Bendel and Petersen (1983). A two-parameter method was used instead of Bendel's one-parameter method; there is a large difference between these two methods with regard to the fitted parameters. Calculations of upset rates in orbit were carried out using these parameters and the NASA AP8MAC and AP8MIC models. For the 93419 RAM the result of this calculation was compared with the in-orbit data taken on the MOS-1 spacecraft. Good agreement was found between the two sets of upset-rate data.
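The two-parameter Bendel fit referred to above replaces the single limiting constant of the one-parameter form with a second fitted parameter B. A hedged sketch of the commonly quoted functional form is below; the parameter values are illustrative, not the fitted values from this paper, and the exact form should be checked against Bendel and Petersen (1983):

```python
import numpy as np

def bendel_2param(E, A, B):
    """Two-parameter Bendel proton-upset cross section (10^-12 cm^2 per bit).

    Commonly quoted form (assumption -- verify against the original paper):
        Y = sqrt(18 / A) * (E - A)
        sigma = (B / A)**14 * (1 - exp(-0.18 * sqrt(Y)))**4   for E > A, else 0
    A is the threshold energy in MeV; B sets the limiting cross section.
    """
    E = np.asarray(E, float)
    Y = np.sqrt(18.0 / A) * (E - A)
    sat = (1.0 - np.exp(-0.18 * np.sqrt(np.clip(Y, 0.0, None)))) ** 4
    return np.where(Y > 0, (B / A) ** 14 * sat, 0.0)

# illustrative parameters only (not the paper's fitted values)
energies = np.array([10.0, 30.0, 60.0])
sigma = bendel_2param(energies, A=5.0, B=12.0)
```

The cross section is zero below the threshold A and rises monotonically toward the limiting value (B/A)^14 at high proton energy, which is the shape fitted to the irradiation data.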

  20. Behavior of sensitivities in the one-dimensional advection-dispersion equation: Implications for parameter estimation and sampling design

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
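Principle (4) — that sensitivity to the dispersion coefficient is usually much smaller than sensitivity to velocity — can be illustrated numerically with the leading term of the Ogata-Banks solution for continuous injection (an assumed simplification of the full 1D advection-dispersion solution) and finite-difference sensitivities, compared on a dimensionless (parameter-scaled) basis:

```python
import math

def conc(x, t, v, D, C0=1.0):
    # leading term of the Ogata-Banks solution for continuous injection
    return 0.5 * C0 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))

def sens(x, t, v, D, h=1e-6):
    # central finite-difference sensitivities dC/dv and dC/dD
    dCdv = (conc(x, t, v + h, D) - conc(x, t, v - h, D)) / (2 * h)
    dCdD = (conc(x, t, v, D + h) - conc(x, t, v, D - h)) / (2 * h)
    return dCdv, dCdD

# observation point just after front passage (illustrative values)
x, v, D = 10.0, 1.0, 0.1
t = 11.0                     # front arrives at t = x / v = 10
dCdv, dCdD = sens(x, t, v, D)
```

Comparing the scaled sensitivities v·∂C/∂v and D·∂C/∂D at this point shows the velocity sensitivity dominating by more than an order of magnitude, consistent with principle (4); sweeping x and t would trace out the S-shaped dispersion sensitivity curve mentioned in principle (3).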

  1. The Microgravity Isolation Mount: A Linearized State-Space Model a la Newton and Kane

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Tryggvason, Bjarni V.; DeCarufel, Jean; Townsend, Miles A.; Wagar, William O.

    1999-01-01

    Vibration acceleration levels on large space platforms exceed the requirements of many space experiments. The Microgravity Vibration Isolation Mount (MIM) was built by the Canadian Space Agency to attenuate these disturbances to acceptable levels, and has been operational on the Russian Space Station Mir since May 1996. It has demonstrated good isolation performance and has supported several materials science experiments. The MIM uses Lorentz (voice-coil) magnetic actuators to levitate and isolate payloads at the individual experiment/sub-experiment (versus rack) level. Payload acceleration, relative position, and relative orientation (Euler-parameter) measurements are fed to a state-space controller. The controller, in turn, determines the actuator currents needed for effective experiment isolation. This paper presents the development of an algebraic, state-space model of the MIM, in a form suitable for optimal controller design. The equations are first derived using Newton's Second Law directly; then a second derivation (i.e., validation) of the same equations is provided, using Kane's approach.
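The structural form of such a state-space model — states driven by actuator forces, fed back through controller gains — can be sketched in one degree of freedom. This is only the skeleton under stated assumptions (a unit payload mass levitated by a voice-coil force, illustrative feedback gains); the actual MIM model couples six degrees of freedom with Euler-parameter orientation states:

```python
import numpy as np

# 1-DOF sketch: payload of mass m levitated by a voice-coil force u,
# state x = [position, velocity], dynamics x' = A x + B u
m, dt = 1.0, 0.001
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0 / m]])
K = np.array([[100.0, 20.0]])     # illustrative state-feedback gains

x = np.array([[0.01], [0.0]])     # 1 cm initial offset from the setpoint
for _ in range(5000):             # 5 s of forward-Euler integration
    u = -K @ x                    # controller output -> actuator force
    x = x + dt * (A @ x + B @ u)
pos_final = abs(float(x[0, 0]))
```

With these gains the closed loop is critically damped and the offset decays to essentially zero, which is the isolation behavior the algebraic model is meant to let an optimal-controller design reproduce for the real multi-axis system.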

  2. Development and application of a probability distribution retrieval scheme to the remote sensing of clouds and precipitation

    NASA Astrophysics Data System (ADS)

    McKague, Darren Shawn

    2001-12-01

    The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel by pixel, and the results can then be summarized over large data sets to obtain the desired statistics. These methods can be quite computationally expensive and typically do not provide errors on the statistics. A new method is developed to directly retrieve probability distributions of parameters from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions, and it can retrieve joint distributions of parameters, which allows for the study of the connection between parameters. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two-dimensional retrieval property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance-to-retrieval-parameter PDF transformation, given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation parameter-radiance space transformation. Vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields and vary considerably depending upon the horizontal structure of the rain. 
The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)
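The core Monte Carlo transform — pushing probability density from the observed radiance histogram back through the forward-model mapping into parameter space — can be sketched with a one-parameter toy problem. The linear forward model and the beta-distributed "true" parameter PDF below are assumptions for illustration; the thesis uses a full radiative transfer model and real SSM/I radiances:

```python
import numpy as np

rng = np.random.default_rng(3)

# assumed toy forward model: radiance increases monotonically with the
# cloud parameter p (a real retrieval uses radiative transfer here)
def forward(p):
    return 200.0 + 50.0 * p

# prior state vectors define the parameter-space -> radiance-space mapping
p_grid = np.linspace(0.0, 1.0, 200)
r_grid = forward(p_grid)

# "observed" radiances, synthesized here from a known parameter PDF
p_true = rng.beta(2.0, 5.0, size=10000)
r_obs = forward(p_true) + 0.5 * rng.standard_normal(10000)

# Monte Carlo transform: map each observed radiance to the nearest prior
# state, then histogram the results to get the retrieved parameter PDF
idx = np.abs(r_obs[:, None] - r_grid[None, :]).argmin(axis=1)
p_ret = p_grid[idx]
pdf, edges = np.histogram(p_ret, bins=20, range=(0, 1), density=True)
```

Repeating the transform with radiances perturbed by their assumed errors would give the ensemble of PDF realizations from which the method's uncertainty estimate is built.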

  3. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  4. Fractional Transport in Strongly Turbulent Plasmas.

    PubMed

    Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana

    2017-07-28

    We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Lévy flights, and the energy distributions show extended power-law tails. We then motivate the use of a fractional transport equation (FTE) and derive its specific form, determine its parameters and the order of the fractional derivatives from the simulation data, and show that the FTE is able to reproduce the high-energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or a FTE is appropriate.

  5. Fractional Transport in Strongly Turbulent Plasmas

    NASA Astrophysics Data System (ADS)

    Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana

    2017-07-01

    We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Lévy flights, and the energy distributions show extended power-law tails. We then motivate the use of a fractional transport equation (FTE) and derive its specific form, determine its parameters and the order of the fractional derivatives from the simulation data, and show that the FTE is able to reproduce the high-energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or a FTE is appropriate.
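The qualitative difference between classical (Fokker-Planck) and Lévy-flight transport can be seen in a toy energization experiment: Gaussian increments produce thin-tailed distributions, while heavy-tailed increments leave power-law tails that a diffusive description cannot capture. The Pareto increments below are a stand-in assumption, not the paper's measured step distribution:

```python
import numpy as np

rng = np.random.default_rng(4)
n, t = 20000, 100

# Brownian (classical FP) energization: Gaussian increments
gauss = rng.standard_normal((n, t)).sum(axis=1)

# Levy-flight energization: heavy-tailed increments with random signs
# (Pareto tail index 1.5 -- an illustrative stand-in for the simulations)
steps = rng.pareto(1.5, size=(n, t)) * rng.choice([-1, 1], size=(n, t))
levy = steps.sum(axis=1)

# extended power-law tails: far more probability mass beyond
# 10 Gaussian standard deviations in the Levy case
tail_gauss = np.mean(np.abs(gauss) > 10 * gauss.std())
tail_levy = np.mean(np.abs(levy) > 10 * gauss.std())
```

Fitting the tail of the Lévy-case histogram would recover a power-law index, which in the paper's procedure is what fixes the order of the fractional derivative in the FTE.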

  6. Thermal dark matter through the Dirac neutrino portal

    NASA Astrophysics Data System (ADS)

    Batell, Brian; Han, Tao; McKeen, David; Haghi, Barmak Shams Es

    2018-04-01

    We study a simple model of thermal dark matter annihilating to standard model neutrinos via the neutrino portal. A (pseudo-)Dirac sterile neutrino serves as a mediator between the visible and the dark sectors, while an approximate lepton number symmetry allows for a large neutrino Yukawa coupling and, in turn, efficient dark matter annihilation. The dark sector consists of two particles, a Dirac fermion and complex scalar, charged under a symmetry that ensures the stability of the dark matter. A generic prediction of the model is a sterile neutrino with a large active-sterile mixing angle that decays primarily invisibly. We derive existing constraints and future projections from direct detection experiments, colliders, rare meson and tau decays, electroweak precision tests, and small scale structure observations. Along with these phenomenological tests, we investigate the consequences of perturbativity and scalar mass fine tuning on the model parameter space. A simple, conservative scheme to confront the various tests with the thermal relic target is outlined, and we demonstrate that much of the cosmologically-motivated parameter space is already constrained. We also identify new probes of this scenario such as multibody kaon decays and Drell-Yan production of W bosons at the LHC.

  7. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.

  8. Cosmological parameter constraints with the Deep Lens Survey using galaxy-shear correlations and galaxy clustering properties

    NASA Astrophysics Data System (ADS)

    Yoon, Mijin; Jee, Myungkook James; Tyson, Tony

    2018-01-01

    The Deep Lens Survey (DLS), a precursor to the Large Synoptic Survey Telescope (LSST), is a 20 sq. deg survey carried out with NOAO's Blanco and Mayall telescopes. The strength of the survey lies in its depth, reaching down to ~27th mag in the BVRz bands. This enables a broad redshift-baseline study and allows us to investigate the cosmological evolution of the large-scale structure. In this poster, we present the first cosmological analysis from the DLS using galaxy-shear correlations and galaxy clustering signals. Our DLS shear calibration accuracy has been validated through the most recent public weak-lensing data challenge. Photometric redshift systematic errors are tested by performing lens-source flip tests. Instead of real-space correlations, we reconstruct band-limited power spectra for cosmological parameter constraints. Our analysis puts a tight constraint on the matter density and the power spectrum normalization parameters. Our results are highly consistent with our previous cosmic shear analysis and also with the Planck CMB results.

  9. Profiling optimization for big data transfer over dedicated channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, D.; Wu, Qishi; Rao, Nageswara S

    The transfer of big data is increasingly supported by dedicated channels in high-performance networks, where transport protocols play an important role in maximizing application-level throughput and link utilization. The performance of transport protocols largely depends on their control parameter settings, but it is prohibitively time consuming to conduct an exhaustive search in a large parameter space to find the best set of parameter values. We propose FastProf, a stochastic approximation-based transport profiler, to quickly determine the optimal operational zone of a given data transfer protocol/method over dedicated channels. We implement and test the proposed method using both emulations based on real-life performance measurements and experiments over physical connections with short (2 ms) and long (380 ms) delays. Both the emulation and experimental results show that FastProf significantly reduces the profiling overhead while achieving a comparable level of end-to-end throughput performance with the exhaustive search-based approach.
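The contrast with exhaustive search can be illustrated with a minimal stochastic approximation loop: instead of measuring throughput at every grid point, a Kiefer-Wolfowitz-style iteration climbs noisy finite-difference gradient estimates toward the optimal setting. The concave toy throughput model and gain schedules below are assumptions for illustration, not FastProf's actual algorithm or measurements:

```python
import numpy as np

rng = np.random.default_rng(6)

# assumed toy throughput response: concave in the control parameter
# (e.g., number of parallel streams), observed with measurement noise
def measured_throughput(p):
    return 10.0 - 0.05 * (p - 8.0) ** 2 + 0.1 * rng.standard_normal()

# Kiefer-Wolfowitz stochastic approximation: two noisy probes per step
# give a finite-difference gradient estimate; gains decay over time
p = 2.0
for k in range(1, 301):
    a, c = 10.0 / k, 1.0 / k**0.25
    grad = (measured_throughput(p + c) - measured_throughput(p - c)) / (2 * c)
    p = p + a * grad
```

Here roughly 600 probes localize the optimum, whereas an exhaustive sweep at comparable resolution and noise averaging would need far more measurements over a wide parameter range — the overhead reduction the abstract describes.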

  10. The structure and dynamics of tornado-like vortices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nolan, D.S.; Farrell, B.F.

    The structure and dynamics of axisymmetric tornado-like vortices are explored with a numerical model of axisymmetric incompressible flow based on recently developed numerical methods. The model is first shown to compare favorably with previous results and is then used to study the effects of varying the major parameters controlling the vortex: the strength of the convective forcing, the strength of the rotational forcing, and the magnitude of the model eddy viscosity. Dimensional analysis of the model problem indicates that the results must depend on only two dimensionless parameters. The natural choices for these two parameters are a convective Reynolds number (based on the velocity scale associated with the convective forcing) and a parameter analogous to the swirl ratio in laboratory models. However, by examining sets of simulations with different model parameters it is found that a dimensionless parameter known as the vortex Reynolds number, which is the ratio of the far-field circulation to the eddy viscosity, is more effective than the conventional swirl ratio for predicting the structure of the vortex. The parameter space defined by the choices for model parameters is further explored with large sets of numerical simulations. For much of this parameter space it is confirmed that the vortex structure and time-dependent behavior depend strongly on the vortex Reynolds number and only weakly on the convective Reynolds number. The authors also find that for higher convective Reynolds numbers, the maximum possible wind speed increases, and the rotational forcing necessary to achieve that wind speed decreases. Physical reasoning is used to explain this behavior, and implications for tornado dynamics are discussed.
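
    The two dimensionless parameters named above are simple ratios; a minimal sketch (function names, symbols, and units are illustrative, not from the paper):

```python
def vortex_reynolds_number(circulation_far_field, eddy_viscosity):
    """Re_v = Gamma_inf / nu: far-field circulation (m^2/s) over eddy
    viscosity (m^2/s); found to predict vortex structure better than
    the conventional swirl ratio."""
    return circulation_far_field / eddy_viscosity

def convective_reynolds_number(velocity_scale, length_scale, eddy_viscosity):
    """Re_c = W * L / nu, with W the velocity scale associated with the
    convective forcing and L a domain length scale (both illustrative)."""
    return velocity_scale * length_scale / eddy_viscosity
```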

  11. Field-theoretic simulations of block copolymer nanocomposites in a constant interfacial tension ensemble.

    PubMed

    Koski, Jason P; Riggleman, Robert A

    2017-04-28

    Block copolymers, due to their ability to self-assemble into periodic structures with long range order, are appealing candidates to control the ordering of functionalized nanoparticles where it is well-accepted that the spatial distribution of nanoparticles in a polymer matrix dictates the resulting material properties. The large parameter space associated with block copolymer nanocomposites makes theory and simulation tools appealing to guide experiments and effectively isolate parameters of interest. We demonstrate a method for performing field-theoretic simulations in a constant volume-constant interfacial tension ensemble (nVγT) that enables the determination of the equilibrium properties of block copolymer nanocomposites, including when the composites are placed under tensile or compressive loads. Our approach is compatible with the complex Langevin simulation framework, which allows us to go beyond the mean-field approximation. We validate our approach by comparing our nVγT approach with free energy calculations to determine the ideal domain spacing and modulus of a symmetric block copolymer melt. We analyze the effect of numerical and thermodynamic parameters on the efficiency of the nVγT ensemble and subsequently use our method to investigate the ideal domain spacing, modulus, and nanoparticle distribution of a lamellar forming block copolymer nanocomposite. We find that the nanoparticle distribution is directly linked to the resultant domain spacing and is dependent on polymer chain density, nanoparticle size, and nanoparticle chemistry. Furthermore, placing the system under tension or compression can qualitatively alter the nanoparticle distribution within the block copolymer.

  12. A systematic construction of microstate geometries with low angular momentum

    NASA Astrophysics Data System (ADS)

    Bena, Iosif; Heidmann, Pierre; Ramírez, Pedro F.

    2017-10-01

    We outline a systematic procedure to obtain horizonless microstate geometries that have the same charges as three-charge five-dimensional black holes with a macroscopically-large horizon area and an arbitrarily-small angular momentum. There are two routes through which such solutions can be constructed: using multi-center Gibbons-Hawking (GH) spaces or using superstratum technology. So far the only solutions corresponding to microstate geometries for black holes with no angular momentum have been obtained via superstrata [1], and multi-center Gibbons-Hawking spaces have been believed to give rise only to microstate geometries of BMPV black holes with a large angular momentum [2]. We perform a thorough search throughout the parameter space of smooth horizonless solutions with four GH centers and find that these have an angular momentum that is generally larger than 80% of the cosmic censorship bound. However, we find that solutions with three GH centers and one supertube (which are smooth in six-dimensional supergravity) can have an arbitrarily-low angular momentum. Our construction thus gives a recipe to build large classes of microstate geometries for zero-angular-momentum black holes without resorting to superstratum technology.

  13. Using internal discharge data in a distributed conceptual model to reduce uncertainty in streamflow simulations

    NASA Astrophysics Data System (ADS)

    Guerrero, J.; Halldin, S.; Xu, C.; Lundin, L.

    2011-12-01

    Distributed hydrological models are important tools in water management as they account for the spatial variability of the hydrological data, as well as being able to produce spatially distributed outputs. They can directly incorporate and assess potential changes in the characteristics of our basins. A recognized problem for models in general is equifinality, which is only exacerbated for distributed models, which tend to have a large number of parameters. We need to deal with the fundamentally ill-posed nature of the problem that such models force us to face, i.e. a large number of parameters and very few variables that can be used to constrain them, often only the catchment discharge. There is a growing but still limited literature showing how the internal states of a distributed model can be used to calibrate/validate its predictions. In this paper, a distributed version of WASMOD, a conceptual rainfall-runoff model with only three parameters, combined with a routing algorithm based on the high-resolution HydroSHEDS data, was used to simulate the discharge in the Paso La Ceiba basin in Honduras. The parameter space was explored using Monte-Carlo simulations, and the region of space containing the parameter sets that were considered behavioral according to two different criteria was delimited using the geometric concept of alpha-shapes. The discharge data from five internal sub-basins were used to aid in the calibration of the model and to answer the following questions: Can this information improve the simulations at the outlet of the catchment, or decrease their uncertainty? Also, after reducing the number of model parameters needing calibration through sensitivity analysis: Is it possible to relate them to basin characteristics? The analysis revealed that in most cases the internal discharge data can be used to reduce the uncertainty in the discharge at the outlet, albeit with little improvement in the overall simulation results.
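
    The Monte-Carlo screening step described above follows the familiar GLUE pattern; below is a sketch under the assumption of a generic low-dimensional model (WASMOD itself is not reproduced; `simulate`, the bounds, and the Nash-Sutcliffe threshold are placeholders):

```python
import random

def behavioral_sets(simulate, observed, bounds, n=10000, nse_limit=0.6, seed=1):
    """GLUE-style Monte-Carlo screening: sample parameter sets uniformly
    within bounds, score each simulated discharge series with the
    Nash-Sutcliffe efficiency (NSE), and retain the 'behavioral' sets
    whose NSE exceeds the chosen limit."""
    rng = random.Random(seed)
    mean_obs = sum(observed) / len(observed)
    denom = sum((q - mean_obs) ** 2 for q in observed)
    keep = []
    for _ in range(n):
        theta = [rng.uniform(lo, hi) for lo, hi in bounds]
        sim = simulate(theta)
        nse = 1.0 - sum((s - q) ** 2 for s, q in zip(sim, observed)) / denom
        if nse > nse_limit:
            keep.append((theta, nse))
    return keep
```

    The retained cloud of behavioral parameter sets is what the alpha-shapes step in the paper then bounds geometrically.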

  14. Searching for Sterile Neutrinos with MINOS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timmons, Ashley

    2016-01-01

    This document presents the latest results for a 3+1 sterile neutrino search using the $10.56 \times 10^{20}$ protons-on-target data set taken from 2005-2012. By searching for oscillations driven by a large mass splitting, MINOS is sensitive to the existence of sterile neutrinos through any energy-dependent deviations in a charged-current sample, as well as through any relative deficit in neutral-current events between the far and near detectors. This document will discuss the novel analysis that enabled a search for sterile neutrinos, setting a limit in the previously unexplored regions of the parameter space $\{\Delta m^{2}_{41}, \sin^2\theta_{24}\}$. The results presented can be compared to the parameter space suggested by LSND and MiniBooNE and complement other previous experimental searches for sterile neutrinos in the electron neutrino appearance channel.

  15. X-Ray diffraction on large single crystals using a powder diffractometer

    DOE PAGES

    Jesche, A.; Fix, M.; Kreyssig, A.; ...

    2016-06-16

    Information on the lattice parameter of single crystals with known crystallographic structure allows for estimations of sample quality and composition. In many cases it is sufficient to determine one lattice parameter or the lattice spacing along a certain, high-symmetry direction, e.g. in order to determine the composition in a substitution series by taking advantage of Vegard's rule. Here we present a guide to accurate measurements of single crystals with dimensions ranging from 200 μm up to several millimeters using a standard powder diffractometer in Bragg-Brentano geometry. The correction of the error introduced by the sample height and the optimization of the alignment are discussed in detail. Finally, in particular for single crystals with a plate-like habit, the described procedure allows for measurement of the lattice spacings normal to the plates with high accuracy on a timescale of minutes.
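
    The dominant error for a crystal sitting above or below the goniometer axis in Bragg-Brentano geometry is specimen displacement; a sketch of the standard height correction together with the Bragg-law spacing (the sign convention, goniometer radius, and Cu Kα1 default are assumptions, not values from the paper):

```python
import math

def corrected_two_theta(two_theta_deg, height_mm, radius_mm):
    """Sample-height (displacement) correction in Bragg-Brentano
    geometry: a specimen displaced by s from the goniometer axis shifts
    the observed peak by delta(2theta) ~ -2 s cos(theta) / R (radians),
    with R the goniometer radius.  Sign conventions vary by instrument."""
    theta = math.radians(two_theta_deg / 2.0)
    shift = -2.0 * height_mm * math.cos(theta) / radius_mm  # radians
    return two_theta_deg - math.degrees(shift)

def d_spacing(two_theta_deg, wavelength=1.5406):
    """Bragg's law with n = 1: d = lambda / (2 sin theta); Cu Kalpha1
    wavelength in angstroms by default."""
    return wavelength / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))
```

    Applying the correction before the Bragg-law inversion is what turns a visibly shifted peak from a thick crystal into an accurate lattice spacing.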

  16. Energy landscapes for a machine-learning prediction of patient discharge

    NASA Astrophysics Data System (ADS)

    Das, Ritankar; Wales, David J.

    2016-06-01

    The energy landscapes framework is applied to a configuration space generated by training the parameters of a neural network. In this study the input data consists of time series for a collection of vital signs monitored for hospital patients, and the outcomes are patient discharge or continued hospitalisation. Using machine learning as a predictive diagnostic tool to identify patterns in large quantities of electronic health record data in real time is a very attractive approach for supporting clinical decisions, which have the potential to improve patient outcomes and reduce waiting times for discharge. Here we report some preliminary analysis to show how machine learning might be applied. In particular, we visualize the fitting landscape in terms of locally optimal neural networks and the connections between them in parameter space. We anticipate that these results, and analogues of thermodynamic properties for molecular systems, may help in the future design of improved predictive tools.

  17. A Low Nuclear Recoil Energy Threshold for Dark Matter Search with CRESST-III Detectors

    NASA Astrophysics Data System (ADS)

    Mancuso, M.; Angloher, G.; Bauer, P.; Bento, A.; Bucci, C.; Canonica, L.; D'Addabbo, A.; Defay, X.; Erb, A.; von Feilitzsch, Franz; Ferreiro Iachellini, N.; Gorla, P.; Gütlein, A.; Hauff, D.; Jochum, J.; Kiefer, M.; Kluck, H.; Kraus, H.; Lanfranchi, J. C.; Langenkämper, A.; Loebell, J.; Mondragon, E.; Münster, A.; Pagliarone, C.; Petricca, F.; Potzel, W.; Pröbst, F.; Puig, R.; Reindl, F.; Rothe, J.; Schäffner, K.; Schieck, J.; Schipperges, V.; Schönert, S.; Seidel, W.; Stahlberg, M.; Stodolsky, L.; Strandhagen, C.; Strauss, R.; Tanzke, A.; Thi, H. H. Trinh; Türkoglu, C.; Uffinger, M.; Ulrich, A.; Usherov, I.; Wawoczny, S.; Willers, M.; Wüstrich, M.

    2018-05-01

    The CRESST-III experiment (Cryogenic Rare Events Search with Superconducting Thermometers), located at the underground facility Laboratori Nazionali del Gran Sasso in Italy, uses scintillating CaWO₄ crystals as cryogenic calorimeters to search for direct dark matter interactions in its detectors. A large part of the parameter space for spin-independent scattering off nuclei remains untested for dark matter particles with masses below a few GeV/c², despite many naturally motivated theoretical models for light dark matter particles. The CRESST-III detectors are designed to achieve the performance required to probe the low-mass region of the parameter space with a sensitivity never reached before. In this paper, new results on the performance and an overview of the CRESST-III detectors will be presented, emphasizing the results on the low-energy nuclear-recoil threshold of CRESST-III Phase 1, which started collecting data in August 2016.

  18. Transition from Poissonian to Gaussian-orthogonal-ensemble level statistics in a modified Artin's billiard

    NASA Astrophysics Data System (ADS)

    Csordás, A.; Graham, R.; Szépfalusy, P.; Vattay, G.

    1994-01-01

    One wall of an Artin's billiard on the Poincaré half-plane is replaced by a one-parameter (cp) family of nongeodetic walls. A brief description of the classical phase space of this system is given. In the quantum domain, the continuous and gradual transition from the Poisson-like to Gaussian-orthogonal-ensemble (GOE) level statistics due to the small perturbations breaking the symmetry responsible for the "arithmetic chaos" at cp=1 is studied. Another GOE → Poisson transition due to the mixed phase space for large perturbations is also investigated. A satisfactory description of the intermediate level statistics by the Brody distribution was found in both cases. The study supports the existence of a scaling region around cp=1. A finite-size scaling relation for the Brody parameter as a function of 1-cp and the number of levels considered can be established.
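
    The Brody distribution that interpolates between the two limits can be written down directly (the parameter naming is ours): P_q(s) = (q+1) a s^q exp(-a s^(q+1)) with a = Γ((q+2)/(q+1))^(q+1), so q=0 reproduces Poisson statistics and q=1 the GOE Wigner surmise:

```python
import math

def brody_pdf(s, q):
    """Brody nearest-neighbour spacing distribution for unit mean
    spacing: q=0 gives the Poisson result exp(-s), q=1 the GOE
    Wigner surmise (pi/2) s exp(-pi s^2 / 4)."""
    a = math.gamma((q + 2.0) / (q + 1.0)) ** (q + 1.0)
    return (q + 1.0) * a * s ** q * math.exp(-a * s ** (q + 1.0))
```

    Fitting q to an unfolded spectrum is how the intermediate statistics in the transition region are quantified.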

  19. Atmospheric turbulence review of space shuttle launches

    NASA Technical Reports Server (NTRS)

    Susko, Michael

    1991-01-01

    Research and analysis on the identification of turbulent regions from the surface to 16 km during Space Shuttle launches are discussed. It was demonstrated that the results from the FPS-16 radar/jimsphere balloon system in measuring winds can indeed indicate the presence of, or conditions ripe for, turbulence in the troposphere and lower stratosphere. It was further demonstrated that atmospheric data obtained during the shuttle launches by the rawinsonde in conjunction with the jimsphere provide the necessary meteorological data to compute aerodynamic parameters that identify turbulence, such as Reynolds number, drag coefficient, turbulent stresses, total energy, stability parameter, vertical gradient of kinetic energy, Richardson number, and the turbulence probability index. Enhanced temperature lapse rates and inversion rates, strong vector wind shears, and large changes in wind direction identify the occurrence of turbulence in the troposphere. When any two of the above conditions occur simultaneously, a significant probability of turbulence exists.

  20. Latest astronomical constraints on some non-linear parametric dark energy models

    NASA Astrophysics Data System (ADS)

    Yang, Weiqiang; Pan, Supriya; Paliathanasis, Andronikos

    2018-04-01

    We consider non-linear redshift-dependent equation of state parameters as dark energy models in a spatially flat Friedmann-Lemaître-Robertson-Walker universe. To depict the expansion history of the universe in such cosmological scenarios, we take into account the large-scale behaviour of such parametric models and fit them using a set of the latest observational data of distinct origin, which includes cosmic microwave background radiation, Type Ia Supernovae, baryon acoustic oscillations, redshift space distortion, weak gravitational lensing, Hubble parameter measurements from cosmic chronometers, and finally the local Hubble constant from the Hubble space telescope. The fitting uses the publicly available Cosmological Monte Carlo (COSMOMC) code to extract the cosmological information from these parametric dark energy models. From our analysis, it follows that those models could describe the late time accelerating phase of the universe, while they are distinguished from the Λ-cosmology.
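
    As one concrete example of a non-linear parametrization (a JBP-type form; the specific w0, wa values and function names are illustrative, not the paper's), the dark-energy density evolution follows from a single quadrature and can be sanity-checked against the w = -1 limit:

```python
import math

def w_jbp(z, w0=-1.0, wa=0.5):
    """A non-linear equation-of-state parametrization (JBP-type):
    w(z) = w0 + wa * z / (1+z)^2, with illustrative parameter values."""
    return w0 + wa * z / (1.0 + z) ** 2

def rho_de_ratio(z, w, n_steps=1000):
    """rho_DE(z)/rho_DE(0) = exp(3 * integral_0^z (1+w(z'))/(1+z') dz'),
    evaluated with a simple trapezoidal rule."""
    h = z / n_steps
    integrand = lambda zp: (1.0 + w(zp)) / (1.0 + zp)
    s = 0.5 * (integrand(0.0) + integrand(z))
    s += sum(integrand(i * h) for i in range(1, n_steps))
    return math.exp(3.0 * h * s)
```

    For w ≡ -1 the ratio is identically 1 (a cosmological constant), while for w ≡ 0 the density dilutes like matter, (1+z)³; deviations of the fitted w(z) from -1 are exactly what distinguishes these models from Λ-cosmology.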

  1. Analytic and simulation studies on the use of torque-wheel actuators for the control of flexible robotic arms

    NASA Technical Reports Server (NTRS)

    Montgomery, Raymond C.; Ghosh, Dave; Kenny, Sean

    1991-01-01

    This paper presents results of analytic and simulation studies to determine the effectiveness of torque-wheel actuators in suppressing the vibrations of two-link telerobotic arms with attached payloads. The simulations use a planar generic model of a two-link arm with a torque wheel at the free end. Parameters of the arm model are selected to be representative of a large space-based robotic arm of the same class as the Space Shuttle Remote Manipulator, whereas parameters of the torque wheel are selected to be similar to those of the Mini-Mast facility at the Langley Research Center. Results show that this class of torque-wheel can produce an oscillation of 2.5 cm peak-to-peak in the end point of the arm and that the wheel produces significantly less overshoot when the arm is issued an abrupt stop command from the telerobotic input station.

  2. On using large scale correlation of the Ly-α forest and redshifted 21-cm signal to probe HI distribution during the post reionization era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Tapomoy Guha; Datta, Kanan K., E-mail: tapomoy@pilani.bits-pilani.ac.in, E-mail: kanan.physics@presiuniv.ac.in

    We investigate the possibility of detecting the 3D cross-correlation power spectrum of the Ly-α forest and the HI 21-cm signal from the post-reionization epoch. The cross-correlation signal is directly dependent on the dark matter power spectrum and is sensitive to the 21-cm brightness temperature and Ly-α forest biases; these bias parameters dictate the strength of anisotropy in redshift space. We find that the cross-correlation power spectrum can be detected on large scales using the linear bias model, at a peak SNR of 15 for a single-field experiment at redshift z = 2.5, using 400 hrs of observation with SKA-mid (phase 1) and a futuristic BOSS-like experiment with a quasar (QSO) density of 30 deg⁻². We also study the possibility of constraining various bias parameters using the cross power spectrum. We find that with the same experiment the 1σ (conditional) errors on the 21-cm linear redshift-space distortion parameter β_T and on β_F corresponding to the Ly-α forest are ∼2.7% and ∼1.4% respectively for 01 independent pointings of SKA-mid (phase 1). This prediction indicates a significant improvement over existing measurements. We claim that the detection of the 3D cross-correlation power spectrum will not only ascertain the cosmological origin of the signal in the presence of astrophysical foregrounds but will also provide stringent constraints on large-scale HI biases. This provides an independent probe towards understanding cosmological structure formation.

  3. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    NASA Astrophysics Data System (ADS)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.

  4. Antenna concepts for interstellar search systems

    NASA Technical Reports Server (NTRS)

    Basler, R. P.; Johnson, G. L.; Vondrak, R. R.

    1977-01-01

    An evaluation is made of microwave receiving systems designed to search for signals from extraterrestrial intelligence. Specific design concepts are analyzed parametrically to determine whether the optimum antenna system location is on earth, in space, or on the moon. Parameters considered include the hypothesized number of transmitting civilizations, the number of stars that must be searched to give any desired probability of receiving a signal, the antenna collecting area, the search time, the search range, and the cost. This analysis suggests that (1) search systems based on the moon are not cost-competitive, (2) if the search is extended only a few hundred light years from the earth, a Cyclops-type array on earth may be the most cost-effective system, (3) for a search extending to 500 light years or more, a substantial cost and search-time advantage can be achieved with a large spherical reflector in space with multiple feeds, (4) radio frequency interference shields can be provided for space systems, and (5) cost can range from a few hundred million to tens of billions of dollars, depending on the parameter values assumed.

  5. Vortex conception of rotor and mutual effect of screw/propellers

    NASA Technical Reports Server (NTRS)

    Lepilkin, A. M.

    1986-01-01

    A vortex theory of screw/propellers with circulation varying along the blade and with its azimuth is proposed; the problem is formulated and the circulation is expanded in a Fourier series. Equations are given for the inductive velocities in space for screws, including those with an infinitely large number of blades, and for the expansion of the inductive velocity by blade azimuth of a second screw. Multiparameter improper integrals are given as a combination of elliptical integrals and elementary functions, and it is shown how to reduce elliptical integrals of the third kind with a complex parameter to integrals with a real parameter.

  6. Inferential Framework for Autonomous Cryogenic Loading Operations

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry G.; Khasin, Michael; Timucin, Dogan; Sass, Jared; Perotti, Jose; Brown, Barbara

    2017-01-01

    We address the problem of autonomous management of cryogenic loading operations on the ground and in space. As a step towards a solution of this problem, we develop a probabilistic framework for inferring correlation parameters of two-fluid cryogenic flow. The simulation of two-phase cryogenic flow is performed using a nearly-implicit scheme. A concise set of cryogenic correlations is introduced. The proposed approach is applied to an analysis of the cryogenic flow in the experimental Propellant Loading System built at NASA KSC. An efficient simultaneous optimization of a large number of model parameters is demonstrated and good agreement with the experimental data is obtained.

  7. Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks

    PubMed Central

    Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek

    2015-01-01

    Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org. PMID:26063822
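
    The "all parameter values at once" idea can be illustrated on the simplest network with a known answer, a production-degradation model; a naive full-grid sweep is shown below (the grids and rate names are illustrative), whereas TPA compresses exactly this kind of object into a low-rank tensor format instead of storing the full grid:

```python
import math
import itertools

def stationary_poisson(rate_prod, rate_deg, n_max=50):
    """Production-degradation network (0 -> X -> 0): the stationary
    copy-number distribution is Poisson with mean k_prod / k_deg."""
    lam = rate_prod / rate_deg
    return [math.exp(-lam) * lam ** n / math.factorial(n) for n in range(n_max)]

# Naive sweep over a 2-D parameter grid: one model property (here the
# stationary mean) is computed for every parameter combination at once.
k_prod_grid = [5.0, 10.0, 20.0]
k_deg_grid = [0.5, 1.0, 2.0]
mean_copy = {(kp, kd): kp / kd
             for kp, kd in itertools.product(k_prod_grid, k_deg_grid)}
```

    The cost of such a dense grid grows exponentially with the number of parameters, which is precisely the curse of dimensionality the tensor-structured representations in the paper are designed to avoid.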

  8. Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks.

    PubMed

    Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek

    2015-07-06

    Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org.

  9. Redshift-space distortions with the halo occupation distribution - II. Analytic model

    NASA Astrophysics Data System (ADS)

    Tinker, Jeremy L.

    2007-01-01

    We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm, σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.

  10. Uncertainty, Sensitivity Analysis, and Causal Identification in the Arctic using a Perturbed Parameter Ensemble of the HiLAT Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark

    Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
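
    An elementary-effects screen is easy to sketch; the version below is a simplified one-at-a-time variant of the Morris design (the model, bounds, level count, and trajectory count are placeholders, not the HiLAT setup):

```python
import random

def elementary_effects(model, bounds, n_traj=20, levels=4, seed=0):
    """Simplified Morris-style screening: from each random base point,
    perturb one parameter at a time by a step delta in the unit cube,
    record the normalized output change d_i = (f(x + delta*e_i) - f(x))
    / delta, and return mu* (mean |d_i|) per parameter as an
    importance ranking."""
    rng = random.Random(seed)
    k = len(bounds)
    delta = levels / (2.0 * (levels - 1))  # standard Morris step size
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        f0 = model([lo + xi * (hi - lo) for xi, (lo, hi) in zip(x, bounds)])
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            f1 = model([lo + xi * (hi - lo) for xi, (lo, hi) in zip(xp, bounds)])
            effects[i].append((f1 - f0) / delta)
    return [sum(abs(d) for d in ds) / len(ds) for ds in effects]  # mu* per parameter
```

    Each trajectory reuses a single base evaluation for all k one-parameter perturbations, which is the efficiency gain over naive one-at-a-time analysis that the abstract alludes to.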

  11. Uncertainty quantification of crustal scale thermo-chemical properties in Southeast Australia

    NASA Astrophysics Data System (ADS)

    Mather, B.; Moresi, L. N.; Rayner, P. J.

    2017-12-01

    The thermo-chemical properties of the crust are essential to understanding the mechanical and thermal state of the lithosphere. The uncertainties associated with these parameters are connected to the available geophysical observations and a priori information to constrain the objective function. Often, it is computationally efficient to reduce the parameter space by mapping large portions of the crust into lithologies that have assumed homogeneity. However, the boundaries of these lithologies are, in themselves, uncertain and should also be included in the inverse problem. We assimilate geological uncertainties from an a priori geological model of Southeast Australia with geophysical uncertainties from S-wave tomography and 174 heat flow observations within an adjoint inversion framework. This reduces the computational cost of inverting high dimensional probability spaces, compared to probabilistic inversion techniques that operate in the 'forward' mode, but at the sacrifice of uncertainty and covariance information. We overcome this restriction using a sensitivity analysis, that perturbs our observations and a priori information within their probability distributions, to estimate the posterior uncertainty of thermo-chemical parameters in the crust.

  12. Space Shuttle Solid Rocket Booster decelerator subsystem - Air drop test vehicle/B-52 design

    NASA Technical Reports Server (NTRS)

    Runkle, R. E.; Drobnik, R. F.

    1979-01-01

    The air drop development test program for the Space Shuttle Solid Rocket Booster Recovery System required the design of a large drop test vehicle that would meet all the stringent requirements placed on it by structural loads, safety considerations, flight recovery system interfaces, and sequencing. The drop test vehicle had to be able to test the drogue and the three main parachutes both separately and in the total flight deployment sequence, while remaining inexpensive enough to fit in a low-budget development program. The design to test large ribbon parachutes to loads of 300,000 pounds required the detailed investigation and integration of several parameters such as carrier aircraft mechanical interface, drop test vehicle ground transportability, impact point ground penetration, salvageability, drop test vehicle intelligence, flight design hardware interfaces, and packaging fidelity.

  13. Comprehensive phase diagram of two-dimensional space charge doped Bi2Sr2CaCu2O8+x.

    PubMed

    Sterpetti, Edoardo; Biscaras, Johan; Erb, Andreas; Shukla, Abhay

    2017-12-12

    The phase diagram of hole-doped high critical temperature superconductors as a function of doping and temperature has been intensively studied with chemical variation of doping. Chemical doping can provoke structural changes and disorder, masking intrinsic effects. Alternatively, a field-effect transistor geometry with an electrostatically doped, ultra-thin sample can be used. However, to probe the phase diagram, carrier density modulation beyond 10¹⁴ cm⁻² and transport measurements performed over a large temperature range are needed. Here we use the space charge doping method to measure transport characteristics from 330 K to low temperature. We extract parameters and characteristic temperatures over a large doping range and establish a comprehensive phase diagram for one-unit-cell-thick BSCCO-2212 as a function of doping, temperature and disorder.

  14. Ultralow-Background Large-Format Bolometer Arrays

    NASA Technical Reports Server (NTRS)

    Benford, Dominic; Chervenak, Jay; Irwin, Kent; Moseley, S. Harvey; Oegerle, William (Technical Monitor)

    2002-01-01

    In the coming decade, work will commence in earnest on large cryogenic far-infrared telescopes and interferometers. All such observatories - for example, SAFIR, SPIRIT, and SPECS - require large-format, two-dimensional arrays of close-packed detectors capable of reaching the fundamental limits imposed by the very low photon backgrounds present in deep space. In the near term, bolometer array architectures which permit 1000 pixels - perhaps sufficient for the next generation of space-based instruments - can be arrayed efficiently. Demonstrating the necessary performance, with Noise Equivalent Powers (NEPs) of order 10⁻²⁰ W/√Hz, will be a hurdle in the coming years. Superconducting bolometer arrays are a promising technology for providing both the performance and the array size necessary. We discuss the requirements for future detector arrays in the far-infrared and submillimeter, describe the parameters of superconducting bolometer arrays able to meet these requirements, and detail the present and near-future technology of superconducting bolometer arrays. Of particular note is the coming development of large-format planar arrays with absorber-coupled and antenna-coupled bolometers.

  15. Validating the simulation of large-scale parallel applications using statistical characteristics

    DOE PAGES

    Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...

    2016-03-01

    Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.

  16. Optimal synchronization in space

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-02-01

    In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l⁻α, with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.

  17. An iterative bidirectional heuristic placement algorithm for solving the two-dimensional knapsack packing problem

    NASA Astrophysics Data System (ADS)

    Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae

    2018-02-01

    This article presents an efficient heuristic placement algorithm, namely a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic maximizes space utilization by fitting the most appropriate rectangle from both sides of the wall of the current residual space, layer by layer. An iterative local search with a shift strategy is developed and applied to the heuristic to balance exploitation and exploration in the solution space without tuning any parameters. The experimental results on packing problems of many scales show that this approach produces high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within reasonable computational time.
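The bidirectional layer-filling principle can be sketched in a few lines. The toy function below, with an assumed largest-first rule and illustrative names, fills a single shelf from both walls; it illustrates the principle only and is not the article's actual algorithm.

```python
# Greatly simplified sketch of bidirectional placement: fill one shelf
# (layer) of width W by placing rectangles alternately from the left and
# right walls, largest-first.  Illustration of the principle only.
def fill_shelf(widths, W):
    """Return (placed_widths, used_width) for one shelf of width W."""
    left, right = 0, W
    placed = []
    from_left = True
    for w in sorted(widths, reverse=True):
        if w <= right - left:          # rectangle fits in the residual gap
            placed.append(w)
            if from_left:
                left += w              # push in from the left wall
            else:
                right -= w             # push in from the right wall
            from_left = not from_left  # alternate sides
    return placed, sum(placed)
```

With widths [5, 4, 3, 2, 1] and a shelf of width 10, the greedy bidirectional fill places 5 (left), 4 (right), then 1 (left), using the shelf completely.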

  18. Time-dependent radiation dose estimations during interplanetary space flights

    NASA Astrophysics Data System (ADS)

    Dobynde, M. I.; Shprits, Y.; Drozdov, A.

    2015-12-01

    Space radiation is the main constraint on long-term interplanetary space missions. It degrades external spacecraft components and penetrates inward, damaging the internal environment. Space radiation particles, and the secondary particle showers they induce, can cause a variety of short- and long-term harm to astronauts. The contributions of the two main sources of space radiation, the Sun and out-of-heliosphere space, vary in time in opposite phase with the state of solar activity. Currently the only inhabited mission is the International Space Station, which flies in low Earth orbit; besides the station shell, its astronauts are protected by the Earth's magnetosphere, the natural shield that spares all humanity from significant damage. Current progress in space exploration is poised to lead humanity beyond the bounds of the magnetosphere. In this study we estimate spacecraft parameters and astronaut doses for long-term interplanetary flights. Applying a time-dependent model of GCR spectra and data on SEP spectra, we show the time dependence of the radiation dose in a human phantom inside a shielding capsule. We pay particular attention to the shielding capsule design, searching for optimal geometry parameters and materials. Different particle types harm human tissues to different degrees, and incident particles produce a large number of secondary particles while propagating through the shielding capsule. We attempt to find an optimal combination of shielding capsule parameters, namely material and thickness, that effectively decreases the incident particle energy while minimizing the flux of induced secondary particles and of the most harmful particle types.

  19. New bounds on axionlike particles from the Fermi Large Area Telescope observation of PKS 2155−304

    NASA Astrophysics Data System (ADS)

    Zhang, Cun; Liang, Yun-Feng; Li, Shang; Liao, Neng-Hui; Feng, Lei; Yuan, Qiang; Fan, Yi-Zhong; Ren, Zhong-Zhou

    2018-03-01

    The axionlike particle (ALP)-photon mixing in the magnetic field around γ-ray sources or along the line of sight could induce oscillation between photons and ALPs, which then causes irregularities in the γ-ray spectra. In this work we search for such spectral irregularities in the spectrum of PKS 2155−304 using 8.6 years of data from the Fermi Large Area Telescope (Fermi-LAT). No significant evidence for the presence of ALP-photon oscillation is obtained, and the parameter space of ALPs is constrained. The exclusion region sensitively depends on the poorly known magnetic field of the host galaxy cluster of PKS 2155−304. If the magnetic field is as high as ∼10 μG, the "holelike" parameter region allowed in Ref. [1] can be ruled out.

  20. A statistical survey of heat input parameters into the cusp thermosphere

    NASA Astrophysics Data System (ADS)

    Moen, J. I.; Skjaeveland, A.; Carlson, H. C.

    2017-12-01

    Based on three winters of observational data, we present the ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements of up to a factor of two. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp: existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimation of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wiper scans of 1000 x 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to input data driving thermosphere models, enabling removal of previous twofold drag errors.

  1. Heterogeneity-induced large deviations in activity and (in some cases) entropy production

    NASA Astrophysics Data System (ADS)

    Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.

    2014-10-01

    We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.

  2. Multiclustered chimeras in large semiconductor laser arrays with nonlocal interactions

    NASA Astrophysics Data System (ADS)

    Shena, J.; Hizanidis, J.; Hövel, P.; Tsironis, G. P.

    2017-09-01

    The dynamics of a large array of coupled semiconductor lasers is studied numerically for a nonlocal coupling scheme. Our focus is on chimera states, a self-organized spatiotemporal pattern of coexisting coherence and incoherence. In laser systems, such states have been previously found for global and nearest-neighbor coupling, mainly in small networks. The technological advantage of large arrays has motivated us to study a system of 200 nonlocally coupled lasers with respect to the emerging collective dynamics. Moreover, the nonlocal nature of the coupling allows us to obtain robust chimera states with multiple (in)coherent domains. The crucial parameters are the coupling strength, the coupling phase and the range of the nonlocal interaction. We find that multiclustered chimera states exist in a wide region of the parameter space and we provide quantitative characterization for the obtained spatiotemporal patterns. By proposing two different experimental setups for the realization of the nonlocal coupling scheme, we are confident that our results can be confirmed in the laboratory.

  3. Tracking Resilience to Infections by Mapping Disease Space

    PubMed Central

    Thomas Tate, Ann; Rath, Poonam; Cumnock, Katherine; Schneider, David S.

    2016-01-01

    Infected hosts differ in their responses to pathogens; some hosts are resilient and recover their original health, whereas others follow a divergent path and die. To quantitate these differences, we propose mapping the routes infected individuals take through “disease space.” We find that when plotting physiological parameters against each other, many pairs have hysteretic relationships that identify the current location of the host and predict the future route of the infection. These maps can readily be constructed from experimental longitudinal data, and we provide two methods to generate the maps from the cross-sectional data that is commonly gathered in field trials. We hypothesize that resilient hosts tend to take small loops through disease space, whereas nonresilient individuals take large loops. We support this hypothesis with experimental data in mice infected with Plasmodium chabaudi, finding that dying mice trace a large arc in red blood cells (RBCs) by reticulocyte space as compared to surviving mice. We find that human malaria patients who are heterozygous for sickle cell hemoglobin occupy a small area of RBCs by reticulocyte space, suggesting this approach can be used to distinguish resilience in human populations. This technique should be broadly useful in describing the in-host dynamics of infections in both model hosts and patients at both population and individual levels. PMID:27088359
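The "small loops versus large loops" hypothesis suggests a simple quantitative measure: the area enclosed by a host's trajectory in a plane of two physiological parameters (e.g. RBC count versus reticulocyte count). A minimal sketch using the shoelace formula; the axis choice and any input data are illustrative assumptions.

```python
# Sketch of quantifying a "disease-space loop": the area enclosed by a
# closed polygonal trajectory in a plane of two physiological parameters,
# computed with the shoelace formula.
def loop_area(points):
    """Area enclosed by a closed polygonal trajectory of (x, y) points."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the loop
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

Under the hypothesis, a resilient host's trajectory would yield a small `loop_area` while a non-resilient host's would yield a large one.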

  4. A Self-Organizing State-Space-Model Approach for Parameter Estimation in Hodgkin-Huxley-Type Models of Single Neurons

    PubMed Central

    Vavoulis, Dimitrios V.; Straub, Volko A.; Aston, John A. D.; Feng, Jianfeng

    2012-01-01

    Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent) in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied to a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, and measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply to compartmental models and multiple data sets. 
Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models. PMID:22396632
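The self-organizing idea of augmenting the hidden state with unknown parameters can be illustrated with a toy bootstrap particle filter. The linear toy system, the parameter-jitter scheme, and all numeric settings below are assumptions standing in for an actual Hodgkin-Huxley model; this is not the paper's implementation.

```python
# Toy self-organizing state-space idea: augment the state with an unknown
# parameter (the decay rate "a") and let a bootstrap particle filter
# estimate it alongside the state.  All numbers are illustrative.
import math
import random

def simulate(a_true=0.8, n=200, r=0.3, seed=0):
    """Generate noisy observations of x_t = a*x_{t-1} + 0.2 + noise."""
    random.seed(seed)
    x, ys = 1.0, []
    for _ in range(n):
        x = a_true * x + 0.2 + random.gauss(0, 0.1)
        ys.append(x + random.gauss(0, r))
    return ys

def particle_filter(ys, n_particles=500, r=0.3, seed=1):
    random.seed(seed)
    # Each particle carries (state x, parameter a); a gets a small jitter
    # each step so the filter can keep adapting ("self-organizing").
    particles = [(random.gauss(0, 1), random.uniform(0, 1))
                 for _ in range(n_particles)]
    for y in ys:
        moved, weights = [], []
        for x, a in particles:
            a = min(max(a + random.gauss(0, 0.01), 0.0), 1.0)
            x = a * x + 0.2 + random.gauss(0, 0.1)
            moved.append((x, a))
            # Gaussian likelihood of the observation given the particle.
            weights.append(math.exp(-((y - x) ** 2) / (2 * r * r)))
        total = sum(weights)
        if total == 0:          # guard against total weight underflow
            weights = [1.0] * n_particles
        # Multinomial resampling proportional to weight.
        particles = random.choices(moved, weights=weights, k=n_particles)
    return sum(a for _, a in particles) / n_particles

ys = simulate()          # true decay rate is 0.8
a_hat = particle_filter(ys)
```

With 200 observations and 500 particles the posterior mean of `a` settles close to the true value of 0.8, despite the filter never seeing it.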

  5. Identifying large scale structures at 1 AU using fluctuations and wavelets

    NASA Astrophysics Data System (ADS)

    Niembro, T.; Lara, A.

    2016-12-01

    The solar wind (SW) is inhomogeneous and is dominated by two types of flows: one quasi-stationary and one related to large-scale transients (such as coronal mass ejections and co-rotating interaction regions). The SW inhomogeneities can be studied as fluctuations characterized by a wide range of length and time scales. We are interested in the characteristic fluctuations caused by large-scale transient events. To do so, we define the vector space F with the normalized moving monthly/annual deviations as the orthogonal basis. We then compute the norm in this space of the fluctuations of the solar wind parameters (velocity, magnetic field, density and temperature) using WIND data from August 1992 to August 2015. This norm carries important information about the presence of a large-scale disturbance in the solar wind, and by applying a wavelet transform to it, we are able to determine, without subjectivity, the duration of the compression regions of these large transient structures and, moreover, to identify whether the structure corresponds to a single event or a complex (merged) one. With this method we have automatically detected most of the events identified and published by other authors.
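The fluctuation-norm construction can be sketched as follows: standardize each solar-wind parameter against a trailing moving window, then take the Euclidean norm across parameters at each time step. The window length and input series below are illustrative assumptions, not the authors' actual monthly/annual deviations.

```python
# Sketch of a fluctuation norm: z-score each parameter against a trailing
# moving window, then take the Euclidean norm across parameters per step.
import math

def moving_stats(series, window):
    """Trailing-window mean and (population) standard deviation."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        m = sum(chunk) / len(chunk)
        v = sum((x - m) ** 2 for x in chunk) / len(chunk)
        out.append((m, math.sqrt(v)))
    return out

def fluctuation_norm(parameters, window=5):
    """parameters: dict name -> equal-length list of samples."""
    n = len(next(iter(parameters.values())))
    stats = {k: moving_stats(v, window) for k, v in parameters.items()}
    norms = []
    for i in range(n):
        total = 0.0
        for k, v in parameters.items():
            m, s = stats[k][i]
            z = (v[i] - m) / s if s > 0 else 0.0
            total += z * z
        norms.append(math.sqrt(total))
    return norms
```

A sudden jump in one parameter (e.g. a velocity spike) produces a clear peak in the norm, which is what a subsequent wavelet transform would then localize in time.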

  6. The use of a multidimensional space for fusion candidate representation in a maritime domain awareness application

    NASA Astrophysics Data System (ADS)

    Lefebvre, Eric; Helleur, Christopher; Kashyap, Nathan

    2008-03-01

    Maritime surveillance of coastal regions requires operational staff to integrate a large amount of information from a variety of military and civilian sources. The diverse nature of the information sources makes complete automation difficult, and the volume of vessels tracked and the number of sources make it difficult for the limited operations centre staff to fuse all the information manually within a reasonable timeframe. In this paper, a conceptual decision space is proposed to provide a framework for automating the process by which operators integrate the sources needed to maintain Maritime Domain Awareness. The decision space contains all potential pairs of ship tracks that are candidates for fusion. The location of the candidate pairs in this space depends on the values of the parameters used to make a decision. In the application presented, three independent parameters are used: the source detection efficiency, the geo-feasibility, and the track quality. One of three decisions is applied to each candidate track pair based on these three parameters: 1. accept the fusion, in which case the tracks are fused into one track; 2. reject the fusion, in which case the candidate track pair is removed from the list of potential fusions; and 3. defer the fusion, in which case no fusion occurs but the candidate track pair remains in the list of potential fusions until sufficient information is available. This paper demonstrates in an operational setting how the proposed conceptual space is used to optimize the different thresholds for automatic fusion decisions while minimizing the list of unresolved cases left to the operator.
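A minimal sketch of the three-way decision rule over the candidate space; the parameter names are from the paper, but the normalization, the mean-score combination, and the threshold values are illustrative assumptions, not the operational system's logic.

```python
# Hypothetical sketch of the accept/reject/defer rule over the 3-D
# decision space.  The combination rule and thresholds are illustrative
# assumptions only.
def fusion_decision(detection_eff, geo_feasibility, track_quality,
                    accept_thr=0.8, reject_thr=0.3):
    """Classify a candidate track pair in the three-parameter space.

    Each parameter is assumed normalized to [0, 1]; the combined score
    is their mean.  Pairs scoring above accept_thr are fused, pairs below
    reject_thr are discarded, and anything in between is deferred until
    more information arrives.
    """
    score = (detection_eff + geo_feasibility + track_quality) / 3.0
    if score >= accept_thr:
        return "accept"
    if score <= reject_thr:
        return "reject"
    return "defer"
```

Tuning `accept_thr` and `reject_thr` trades off automation against the size of the deferred ("unresolved") list left to the operator, which is exactly the optimization the paper describes.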

  7. Rapid production of optimal-quality reduced-resolution representations of very large databases

    DOEpatents

    Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.

    2001-01-01

    View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed: a database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric, independent of the parameters defining the view space, is computed for each element at each level of detail. The multiresolution database and associated strict error metrics are then processed in real time to produce real-time frame representations. View parameters for a view volume, comprising a view location and field of view, are selected, and the strict error metric is converted with the view parameters into a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation. From the initial representation data set, first elements at least partially within the view volume are selected and placed in a split queue ordered by the value of the view-dependent error metric. The element at the head of the queue is force split and the resulting elements are inserted into the queue; force splitting continues until the number of elements in the queue meets or exceeds a predetermined number, or the largest error metric is less than or equal to a selected upper error metric bound, forming a first multiresolution set of elements. The first multiresolution set of elements is then output as reduced-resolution view space data representing the terrain features.
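The split-queue refinement loop can be sketched with a priority queue. The `Element` type, its error-halving `split()` rule, and all names below are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of a split-queue refinement loop: repeatedly force-split
# the element with the largest view-dependent error until an element
# budget or an error bound is met.  Toy model only.
import heapq

class Element:
    def __init__(self, error, level=0):
        self.error = error      # view-dependent error metric
        self.level = level      # level of detail (higher = finer)

    def split(self):
        # Toy rule: splitting halves the error of each of two children.
        return [Element(self.error / 2, self.level + 1) for _ in range(2)]

def refine(initial, max_elements, error_bound):
    """Force-split the worst element until the budget or error bound is met."""
    # heapq is a min-heap, so store negated errors to pop the largest first;
    # the counter breaks ties so Element objects are never compared.
    heap = [(-e.error, i, e) for i, e in enumerate(initial)]
    heapq.heapify(heap)
    counter = len(heap)
    while heap and len(heap) < max_elements and -heap[0][0] > error_bound:
        _, _, worst = heapq.heappop(heap)
        for child in worst.split():
            heapq.heappush(heap, (-child.error, counter, child))
            counter += 1
    return [e for _, _, e in heap]
```

Starting from one coarse element with error 8.0 and a budget of 4 elements, the loop splits twice more after the first split and stops at 4 elements with a maximum error of 2.0.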

  8. The INAF/IAPS Plasma Chamber for ionospheric simulation experiment

    NASA Astrophysics Data System (ADS)

    Diego, Piero

    2016-04-01

    The plasma chamber is particularly suitable for studies in the following applications: plasma compatibility and functional tests on payloads intended to operate in the ionosphere (e.g. sensors onboard satellites, exposed to the external plasma environment); calibration/testing of plasma diagnostic sensors; characterization and compatibility tests on components for space applications (e.g. optical elements, harness, satellite paints, photo-voltaic cells, etc.); experiments on satellite charging in a space plasma environment; tests on active experiments which use ion, electron or plasma sources (ion thrusters, hollow cathodes, field effect emitters, plasma contactors, etc.); and studies relevant to fundamental space plasma physics. The facility consists of a large-volume vacuum tank (a cylinder of length 4.5 m and diameter 1.7 m) equipped with a Kaufman-type plasma source, operating with Argon gas, capable of generating a plasma beam with parameters (i.e. density and electron temperature) close to the values encountered in the ionosphere at F-layer altitudes. The plasma beam (Ar+ ions and electrons) is accelerated into the chamber at a velocity that reproduces the relative motion between an orbiting satellite and the ionosphere (≈ 8 km/s). This feature, in particular, allows laboratory simulation of the actual compression and depletion phenomena which take place in the ram and wake regions around satellites moving through the ionosphere. The reproduced plasma environment is monitored using Langmuir Probes (LP) and Retarding Potential Analyzers (RPA). These sensors can be automatically moved within the experimental space using a sled mechanism, allowing acquisition of the plasma parameters all around the space payload installed in the chamber for testing. 
The facility is currently in use to test the payloads of the CSES satellite (China Seismo-Electromagnetic Satellite), devoted to plasma parameter and electric field measurements in a polar orbit at 500 km altitude.

  9. Science and Technology Text Mining: Near-Earth Space

    DTIC Science & Technology

    2003-07-21

    [The record's abstract consists of an extracted term-frequency listing from the text-mining analysis (term counts such as SATELLITE IMAGES 177; SPATIAL RESOLUTION 175; SEA ICE 174; TOPEX POSEIDON 165; RADIATION BUDGET 163); no running text is recoverable.]

  10. Dynamical phase transition in the simplest molecular chain model

    NASA Astrophysics Data System (ADS)

    Malyshev, V. A.; Muzychka, S. A.

    2014-04-01

    We consider the dynamics of the simplest chain of a large number N of particles. In the double scaling limit, we find the partition of the parameter space into two domains: for one domain, the supremum over the time interval (0, ∞) of the relative extension of the chain tends to 1 as N → ∞, and for the other domain, to infinity.

  11. Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm

    NASA Astrophysics Data System (ADS)

    Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David

    2015-04-01

    The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
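The contrast between an exhaustive grid scan and a sampler can be illustrated with a plain Metropolis random walk (a simpler relative of the affine-invariant ensemble sampler cited) on a made-up two-parameter χ² surface; the toy function, its minimum, and all settings are assumptions for illustration only.

```python
# Toy illustration of exploring a 2-D parameter space with a Metropolis
# random walk instead of an exhaustive grid scan.  The chi-square surface
# below is a made-up stand-in, not a real oscillation-data likelihood.
import math
import random

def chi2(theta12, dm2):
    # Hypothetical quadratic chi-square with minimum at (0.58, 7.5).
    return ((theta12 - 0.58) / 0.05) ** 2 + ((dm2 - 7.5) / 0.2) ** 2

def metropolis(n_steps, start=(0.5, 7.0), step=(0.02, 0.1), seed=1):
    random.seed(seed)
    chain = [start]
    current = start
    current_chi2 = chi2(*current)
    for _ in range(n_steps):
        proposal = (current[0] + random.gauss(0, step[0]),
                    current[1] + random.gauss(0, step[1]))
        prop_chi2 = chi2(*proposal)
        # Accept with probability exp(-(chi2_new - chi2_old) / 2).
        if math.log(random.random()) < -(prop_chi2 - current_chi2) / 2:
            current, current_chi2 = proposal, prop_chi2
        chain.append(current)
    return chain

chain = metropolis(5000)
best = min(chain, key=lambda p: chi2(*p))
```

The chain concentrates its samples near the χ² minimum, so far fewer function evaluations are spent on irrelevant corners of the space than a uniform grid would require, which is the efficiency argument made above.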

  12. Application and optimization of input parameter spaces in mass flow modelling: a case study with r.randomwalk and r.ranger

    NASA Astrophysics Data System (ADS)

    Krenn, Julia; Zangerl, Christian; Mergili, Martin

    2017-04-01

    r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces make it possible to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space so as to bring the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, a new technique is required. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. To automate the workflow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). 
This strategy is best demonstrated for two input parameters but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. We repeat the optimization procedure with conservative and non-conservative assumptions for a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist of (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
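The impact indicator index (III) itself is easy to sketch: the fraction of tested parameter combinations whose forward run impacts a given pixel. The one-dimensional runout rule below is a made-up stand-in for an actual mass-flow simulation, not r.randomwalk's model.

```python
# Illustrative computation of the impact indicator index (III): the
# fraction of sampled parameter combinations whose model run impacts a
# given pixel.  The 1-D "runout" rule is a made-up stand-in for a real
# mass-flow simulation.
import itertools

def runout_reaches(pixel_distance, friction, mass_to_drag):
    # Hypothetical rule: lower friction and a higher mass-to-drag ratio
    # give a longer runout.
    runout = mass_to_drag / friction
    return pixel_distance <= runout

def impact_indicator_index(pixel_distance, friction_range, m2d_range):
    combos = list(itertools.product(friction_range, m2d_range))
    hits = sum(runout_reaches(pixel_distance, f, m) for f, m in combos)
    return hits / len(combos)
```

With friction values [0.1, 0.2, 0.4] and mass-to-drag ratios [20, 40], a pixel 150 units downslope is reached by 3 of the 6 combinations, giving III = 0.5; narrowing the parameter space toward the back-calculated optimum shifts the III pattern toward the observed deposit.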

  13. Using state-space models to predict the abundance of juvenile and adult sea lice on Atlantic salmon.

    PubMed

    Elghafghuf, Adel; Vanderstichel, Raphael; St-Hilaire, Sophie; Stryhn, Henrik

    2018-04-11

    Sea lice are marine parasites affecting salmon farms, and are considered one of the most costly pests of the salmon aquaculture industry. Infestations of sea lice on farms significantly increase opportunities for the parasite to spread in the surrounding ecosystem, making control of this pest a challenging issue for salmon producers. The complexity of controlling sea lice on salmon farms requires frequent monitoring of the abundance of different sea lice stages over time. Industry-based data sets of counts of lice are amenable to multivariate time-series data analyses. In this study, two sets of multivariate autoregressive state-space models were applied to Chilean sea lice data from six Atlantic salmon production cycles on five isolated farms (at least 20 km seaway distance away from other known active farms), to evaluate the utility of these models for predicting sea lice abundance over time on farms. The models were constructed with different parameter configurations, and the analysis demonstrated large heterogeneity between production cycles for the autoregressive parameter, the effects of chemotherapeutant bath treatments, and the process-error variance. A model allowing for different parameters across production cycles had the best fit and the smallest overall prediction errors. However, pooling information across cycles for the drift and observation error parameters did not substantially affect model performance, thus reducing the number of necessary parameters in the model. Bath treatments had strong but variable effects for reducing sea lice burdens, and these effects were stronger for adult lice than juvenile lice. Our multivariate state-space models were able to handle different sea lice stages and provide predictions for sea lice abundance with reasonable accuracy up to five weeks out. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
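The state-space prediction idea can be illustrated with a minimal univariate Kalman filter (a far simpler relative of the paper's multivariate autoregressive state-space models); the AR coefficient, drift, and noise variances below are illustrative assumptions, not the fitted sea lice parameters.

```python
# Minimal univariate Kalman filter sketch of state-space prediction:
# latent abundance x_t = a*x_{t-1} + drift + process noise, observed with
# measurement noise.  All parameter values are illustrative only.
def kalman_predict(observations, a=0.9, drift=0.1, q=0.05, r=0.2,
                   x0=0.0, p0=1.0):
    """Filter the series and return one-step-ahead state predictions."""
    x, p = x0, p0
    predictions = []
    for y in observations:
        # Predict step: propagate state and variance through the AR model.
        x_pred = a * x + drift
        p_pred = a * a * p + q
        predictions.append(x_pred)
        # Update step: blend the prediction with the new observation.
        k = p_pred / (p_pred + r)          # Kalman gain
        x = x_pred + k * (y - x_pred)
        p = (1 - k) * p_pred
    return predictions
```

Feeding in a weekly abundance series yields one-step-ahead predictions that track the observations with a lag; treatment effects would enter such a model as an additional input term in the predict step.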

  14. How to couple identical ring oscillators to get quasiperiodicity, extended chaos, multistability, and the loss of symmetry

    NASA Astrophysics Data System (ADS)

    Hellen, Edward H.; Volkov, Evgeny

    2018-09-01

We study the dynamical regimes exhibited by a pair of identical 3-element ring oscillators (a reduced version of the synthetic 3-gene genetic Repressilator) coupled using the design of the 'quorum sensing' (QS) process natural to interbacterial communication. In this work QS is implemented as an additional network incorporating elements of the ring as both the source and the activation target of the fast-diffusing QS signal. This version of indirect nonlinear coupling, together with a reasonable extension of the parameters that control the properties of the isolated oscillators, produces a very rich array of attractors. Using a parameter space defined by the individual oscillator amplitude and the coupling strength, we found an extended area of parameter space where the identical oscillators demonstrate quasiperiodicity, which evolves to chaos via period doubling of either resonant limit cycles or complex antiphase symmetric limit cycles with five winding numbers. The symmetric chaos extends over large parameter areas up to its loss of stability, followed by a system transition to an unexpected mode: an asymmetric limit cycle with a winding number of 1:2. In turn, after long evolution across the parameter space, this cycle demonstrates a period-doubling cascade which restores the symmetry of the dynamics by forming symmetric chaos, which nevertheless preserves the memory of the asymmetric limit cycles in the form of stochastically alternating "polarization" of the time series. All stable attractors coexist with others, forming remarkable and complex multistability, including the coexistence of torus and limit cycles, chaos and regular attractors, and symmetric and asymmetric regimes. We trace the paths and bifurcations leading to all areas of chaos and present a detailed map of all transformations of the dynamics.

  15. Finite frequency shear wave splitting tomography: a model space search approach

    NASA Astrophysics Data System (ADS)

    Mondal, P.; Long, M. D.

    2017-12-01

Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for characterizing upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study develops a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov Chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and to assess the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ~50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.
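A minimal Markov Chain Monte Carlo search over this three-parameter space can be sketched as follows. The forward model here is a toy stand-in for the Born-approximation sensitivity kernels (its functional form, the noise level, and the uniform priors are all assumptions), but the random-walk Metropolis loop over (strength, fast azimuth, dip) is the technique named above:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(A, phi, dip, baz):
    # Toy stand-in for the Born kernels: splitting intensity vs back-azimuth
    return A * np.cos(dip) * np.sin(2.0 * (baz - phi))

baz = np.linspace(0.0, np.pi, 30)
truth = (1.0, 0.6, 0.3)                    # strength, fast azimuth, dip (rad)
obs = forward(*truth, baz) + 0.05 * rng.normal(size=baz.size)

def log_post(theta):
    A, phi, dip = theta
    if not (0.0 < A < 2.0 and 0.0 <= phi < np.pi and 0.0 <= dip < np.pi/2):
        return -np.inf                     # uniform prior on a bounded box
    r = obs - forward(A, phi, dip, baz)
    return -0.5 * np.sum((r / 0.05)**2)

theta = np.array([0.5, 1.0, 0.1])
lp = log_post(theta)
chain = []
for _ in range(20000):                     # random-walk Metropolis
    prop = theta + 0.05 * rng.normal(size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
post = np.array(chain[5000:])              # discard burn-in
print("posterior means (A, phi, dip):", post.mean(axis=0).round(2))
```

Even this toy model has a trade-off between the strength A and the dip through the product A·cos(dip), which the chain explores as a ridge in parameter space; that kind of non-uniqueness is exactly what motivates a model space search over a linearized inversion.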

  16. Transformation to equivalent dimensions—a new methodology to study earthquake clustering

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw

    2014-05-01

A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is then studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The generally unknown cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: finding hierarchically closest neighbours in time-space and assessing temporal variations of earthquake clustering in a specific 4-D phase space.
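The ED transform itself is short to state in code. The sketch below uses the empirical CDF in place of the paper's kernel-estimated CDF, applied to a synthetic three-parameter "catalogue" whose values are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "catalogue" of 500 events with three parameters on incompatible
# scales (hypothetical values, for illustration only)
mag   = 2.0 + rng.exponential(0.5, size=500)         # magnitude
time  = np.sort(rng.uniform(0.0, 3650.0, size=500))  # occurrence time, days
depth = rng.uniform(0.0, 30.0, size=500)             # depth, km

def to_equivalent_dimension(x):
    """Map a parameter to [0, 1] via its empirical CDF (the ED transform).
    The paper estimates the CDF with a non-parametric kernel method; the
    empirical CDF is a simpler stand-in with the same idea."""
    ranks = np.argsort(np.argsort(x))
    return (ranks + 0.5) / len(x)

ed = np.column_stack([to_equivalent_dimension(p) for p in (mag, time, depth)])

# Distances between events are now plain Euclidean in the ED space
d01 = float(np.linalg.norm(ed[0] - ed[1]))
print(f"ED range: [{ed.min():.3f}, {ed.max():.3f}]; d(event0, event1) = {d01:.3f}")
```

After the transform every axis is uniformly populated on [0, 1], so a single Euclidean metric is meaningful across magnitude, time, and depth simultaneously.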

  17. Verification of Space Station Secondary Power System Stability Using Design of Experiment

    NASA Technical Reports Server (NTRS)

    Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce

    1998-01-01

This paper describes analytical methods used in verification of large DC power systems, with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large complex distributed power systems is not practical due to the size and complexity of the system. Computer modeling has been used extensively to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system, characterizes various operating scenarios, and identifies the ones with potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples of applying DoE to the analysis and verification of the ISS power system are provided.
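A two-level factorial screening is the simplest DoE building block, and shows how DoE extracts parameter effects from a small number of runs. In the sketch below the factor names and the toy "stability margin" response are assumptions standing in for a converter system model:

```python
import numpy as np
from itertools import product

# Hypothetical screening factors, coded to +/-1 (assumptions, not ISS values)
factors = ["C_filt", "P_load", "Z_bus"]
design = np.array(list(product([-1, 1], repeat=3)))   # 2^3 = 8 runs

def margin(run):
    """Toy stand-in for a stability margin computed by a converter model."""
    c, p, z = run
    return 6.0 + 1.5*c - 2.0*p - 0.5*z - 0.8*p*z

y = np.array([margin(r) for r in design])

# Main effect of each factor: mean response at +1 minus mean response at -1
effects = {}
for k, name in enumerate(factors):
    effects[name] = y[design[:, k] == 1].mean() - y[design[:, k] == -1].mean()
    print(f"main effect of {name}: {effects[name]:+.2f}")
```

Because the design is balanced, the p·z interaction cancels out of the main effects; adding interaction columns to the same eight runs recovers it. For hundreds of converter parameters, fractional designs keep the run count manageable, which is the point made above.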

  18. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
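A single member of the two-dimensional integrate-and-fire class discussed above is easy to simulate directly. The sketch below uses the Izhikevich model with the standard "regular spiking" parameter set (a textbook choice, not values fitted to CA3 data), showing the spike-triggered adaptation variable u whose population average the mean-field reduction tracks:

```python
import numpy as np

# Standard "regular spiking" parameter set (a textbook choice, not values
# from the paper): a, b govern the adaptation variable u; c, d the reset.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, T_ms, I = 0.25, 1000.0, 10.0     # Euler step (ms), duration, input current

v, u = -65.0, b * (-65.0)
spikes = 0
for _ in range(int(T_ms / dt)):
    v += dt * (0.04*v*v + 5.0*v + 140.0 - u + I)
    u += dt * a * (b*v - u)
    if v >= 30.0:                    # spike: reset v, bump adaptation u by d
        v, u = c, u + d
        spikes += 1
print(f"{spikes} spikes in {T_ms:.0f} ms of constant drive")
```

The mean-field systems in the paper replace this per-neuron loop with equations for the population-averaged state plus a moment closure over the heterogeneous parameters; this single-neuron simulation is what those equations average over.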

  19. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons

    PubMed Central

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons. PMID:24416013

  20. Effects of group-size-floor space allowance during the nursery phase of production on growth, physiology, and hematology in replacement gilts.

    PubMed

    Callahan, S R; Cross, A J; DeDecker, A E; Lindemann, M D; Estienne, M J

    2017-01-01

The objective was to determine effects of nursery group-size-floor space allowance on growth, physiology, and hematology of replacement gilts. A 3 × 3 factorial arrangement of treatments was used wherein gilts classified as large, medium, or small (n = 2537; BW = 5.6 ± 0.6 kg) from 13 groups of weaned pigs were placed in pens of 14, 11, or 8 pigs, resulting in floor space allowances of 0.15, 0.19, or 0.27 m²/pig, respectively. Pigs were weighed on d 0 (weaning) and d 46 (exit from nursery). The ADG was affected by group-size-floor space allowance × pig size (P = 0.04). Large- and medium-size gilts allowed the most floor space had greater (P < 0.05) ADG than similar-size gilts allowed the least floor space, but for small-size gilts there was no effect (P > 0.05) of group-size-floor space allowance. Mortality in the nursery was not affected (P > 0.05) by treatment, size, or treatment × size and overall was approximately 2.1%. Complete blood counts and blood chemistry analyses were performed on samples collected at d 6 and 43 from a subsample of gilts (n = 18/group-size-floor space allowance) within a single group. The concentration (P < 0.01) and percentage (P = 0.03) of reticulocytes were the least, and red blood cell distribution width the greatest (P < 0.01), in gilts allowed 0.15 m² floor space (effects of treatment). Blood calcium was affected by treatment (P = 0.02), and concentrations for gilts allowed the greatest and intermediate amounts of floor space were greater (P < 0.05) than for gilts allowed the least floor space. Serum concentrations of cortisol were not affected by treatment × day (P = 0.27). Cortisol concentrations increased from d 6 to d 43 in all groups and were affected by day (P < 0.01) but not treatment (P = 0.53). Greater space allowance achieved by placing fewer pigs per pen in the nursery affected blood parameters and resulted in large- and medium-size replacement gilts displaying increased ADG.
Further study will determine if these effects influence lifetime reproductive capacity and sow longevity.

  1. Non-singular Brans-Dicke collapse in deformed phase space

    NASA Astrophysics Data System (ADS)

    Rasouli, S. M. M.; Ziaie, A. H.; Jalalzadeh, S.; Moniz, P. V.

    2016-12-01

We study the collapse process of a homogeneous perfect fluid (in an FLRW background) with a barotropic equation of state in Brans-Dicke (BD) theory in the presence of phase space deformation effects. Such a deformation is introduced as a particular type of non-commutativity between phase space coordinates. For the commutative case, it has been shown in the literature (Scheel, 1995) that dust collapse in BD theory leads to the formation of a spacetime singularity covered by an event horizon. In comparison to general relativity (GR), the authors concluded that the final state of black holes in BD theory is identical to the GR case but differs from GR during the dynamical evolution of the collapse process. However, the presence of non-commutative effects influences the dynamics of the collapse scenario, and consequently a non-singular evolution develops, in the sense that a bounce emerges at a minimum radius, after which an expanding phase begins. Such behavior is observed for positive values of the BD coupling parameter. For large positive values of the BD coupling parameter, when non-commutative effects are present, the dynamics of the collapse process differ from the GR case. Finally, we show that for negative values of the BD coupling parameter, the singularity is replaced by an oscillatory bounce occurring at a finite time, with the frequency and amplitude of oscillation being damped at late times.

  2. Neutrino mass matrices with two vanishing cofactors and Fritzsch texture for charged lepton mass matrix

    NASA Astrophysics Data System (ADS)

    Wang, Weijian; Guo, Shu-Yuan; Wang, Zhi-Gang

    2016-04-01

In this paper, we study cofactor 2 zero neutrino mass matrices with a Fritzsch-type structure in the charged lepton mass matrix (CLMM). In the numerical analysis, we perform a scan over the parameter space of all 15 possible patterns to get a large sample of viable scattering points. Among the 15 possible patterns, three can accommodate the latest lepton mixing and neutrino mass data. We compare the predictions of the allowed patterns with their counterparts with a diagonal CLMM. In this case, the severe cosmological bound on the neutrino mass sets a strong constraint on the parameter space, rendering two patterns only marginally allowed. The Fritzsch-type CLMM affects the viable parameter space and gives rise to different phenomenological predictions. Each allowed pattern predicts strong correlations between physical variables, which is essential for model selection and can be probed in future experiments. It is found that under a non-diagonal CLMM, the cofactor-zero structure in the neutrino mass matrix is unstable under renormalization group (RG) running from the seesaw scale to the electroweak scale. A way out of this problem is to impose the flavor symmetry in models with a TeV seesaw scale. The inverse seesaw model and a loop-induced model are given as two examples.

  3. Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics

    PubMed Central

    Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier

    2013-01-01

    Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
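The nested sampling iteration itself fits in a few lines. The sketch below uses a one-parameter toy problem with a uniform prior and a Gaussian likelihood (an assumption, not the paper's within-host Salmonella models), chosen so the evidence is ≈ 1 by construction, i.e. log Z ≈ 0, which lets the estimate be checked:

```python
import numpy as np

rng = np.random.default_rng(3)

mu, sig = 0.5, 0.05                       # toy likelihood; prior is Uniform(0, 1)
def loglike(t):
    return -0.5*((t - mu)/sig)**2 - np.log(sig*np.sqrt(2*np.pi))

N = 400                                   # live points
live = rng.uniform(0.0, 1.0, N)
live_L = loglike(live)

logZ, X_prev = -np.inf, 1.0
for it in range(1, 2401):
    i = int(np.argmin(live_L))            # worst live point sets the contour
    X = np.exp(-it / N)                   # expected prior-volume shrinkage
    logZ = np.logaddexp(logZ, np.log(X_prev - X) + live_L[i])
    X_prev = X
    while True:                           # rejection sampling is fine in 1-D
        t = rng.uniform(0.0, 1.0)
        if loglike(t) > live_L[i]:
            live[i], live_L[i] = t, loglike(t)
            break

# Add the contribution of the remaining live points
logZ = np.logaddexp(logZ, np.log(X_prev) + np.logaddexp.reduce(live_L) - np.log(N))
print(f"log-evidence estimate: {logZ:.2f} (exact: ~0.00)")
```

In higher dimensions the rejection step is replaced by a constrained sampler (ellipsoidal or slice-sampling schemes); the ratio of two models' evidences is then their Bayes factor, which is the model-comparison quantity the paper computes.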

  4. Synthesizing spatiotemporally sparse smartphone sensor data for bridge modal identification

    NASA Astrophysics Data System (ADS)

    Ozer, Ekin; Feng, Maria Q.

    2016-08-01

Smartphones as vibration measurement instruments form a large-scale, citizen-induced, mobile wireless sensor network (WSN) for system identification and structural health monitoring (SHM) applications. Crowdsourcing-based SHM is possible with a decentralized system granting citizens operational responsibility and control. Yet citizen initiatives introduce device mobility, drastically changing SHM results due to uncertainties in the time and space domains. This paper proposes a modal identification strategy that fuses spatiotemporally sparse SHM data collected by smartphone-based WSNs. Multichannel data sampled with time and space independence are used to compute modal identification parameters such as frequencies and mode shapes. Structural response time histories can be gathered by smartphone accelerometers and converted into Fourier spectra by the processor units. Timestamps, data length, and energy-to-power conversion address temporal variation, whereas spatial uncertainties are reduced by geolocation services or by determining node identity via QR code labels. Then, parameters collected from each distributed network component can be extended to global behavior to deduce modal parameters without the need for a centralized and synchronous data acquisition system. The proposed method is tested on a pedestrian bridge and compared with a conventional reference monitoring system. The results show that spatiotemporally sparse mobile WSN data can be used to infer modal parameters despite non-overlapping sensor operation schedules.
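The key trick, fusing Fourier amplitude spectra so that no common clock is needed because amplitude discards phase, can be sketched as follows; the sampling rate, mode frequency, record length, and noise level are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

fs, f_mode, dur = 50.0, 2.0, 60.0    # sample rate (Hz), mode freq (Hz), length (s)

def record(start_s):
    """One 'smartphone' record: same structural response, unknown start time."""
    t = start_s + np.arange(int(dur * fs)) / fs
    return np.sin(2*np.pi*f_mode*t) + 0.3*rng.normal(size=t.size)

# Two asynchronous records: amplitude spectra discard phase, so the records
# can be fused without synchronizing their clocks.
spectra = []
for start in (0.0, 13.7):
    amp = np.abs(np.fft.rfft(record(start)))
    spectra.append(amp / amp.max())
freqs = np.fft.rfftfreq(int(dur * fs), d=1.0/fs)
fused = np.mean(spectra, axis=0)
peak = freqs[np.argmax(fused)]
print(f"identified frequency: {peak:.2f} Hz")
```

Mode shapes additionally need the relative amplitude (and sign) across sensor locations, which is where the geolocation or QR-code node identity described above comes in.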

  5. Optics Program Simplifies Analysis and Design

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Engineers at Goddard Space Flight Center partnered with software experts at Mide Technology Corporation, of Medford, Massachusetts, through a Small Business Innovation Research (SBIR) contract to design the Disturbance-Optics-Controls-Structures (DOCS) Toolbox, a software suite for performing integrated modeling for multidisciplinary analysis and design. The DOCS Toolbox integrates various discipline models into a coupled process math model that can then predict system performance as a function of subsystem design parameters. The system can be optimized for performance; design parameters can be traded; parameter uncertainties can be propagated through the math model to develop error bounds on system predictions; and the model can be updated, based on component, subsystem, or system level data. The Toolbox also allows the definition of process parameters as explicit functions of the coupled model and includes a number of functions that analyze the coupled system model and provide for redesign. The product is being sold commercially by Nightsky Systems Inc., of Raleigh, North Carolina, a spinoff company that was formed by Mide specifically to market the DOCS Toolbox. Commercial applications include use by any contractors developing large space-based optical systems, including Lockheed Martin Corporation, The Boeing Company, and Northrup Grumman Corporation, as well as companies providing technical audit services, like General Dynamics Corporation

  6. Simulation of reflecting surface deviations of centimeter-band parabolic space radiotelescope (SRT) with the large-size mirror

    NASA Astrophysics Data System (ADS)

    Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.

    2017-11-01

The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radiotelescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of this size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope, with a 10 m diameter mirror, is currently being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed the size of existing thermal-vacuum chambers used to verify SRT reflecting-surface accuracy under the action of space environment factors. Numerical simulation therefore becomes the basis for accepting the adopted designs. Such modeling should rest on experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting-surface deviations of a large deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space), are analyzed. A finite-element model and a complex of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors, taking into account deviation correction by the spacecraft orientation system. Modeling results for two modes of SRT operation (orientation at the Sun) are presented.

  7. Smart Optical RAM for Fast Information Management and Analysis

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1998-01-01

Statement of Problem: Instruments with high-speed, high-capacity in-situ data identification, classification, and storage capabilities are needed by NASA for the information management and analysis of extremely large volumes of data in future space exploration, space habitation and utilization, and the various missions of the planet-earth programs. Parameters such as communication delays, limited resources, and inaccessibility to human manipulation require more intelligent, compact, low-power, and lightweight information management and data storage techniques. New and innovative algorithms and architectures using photonics will enable us to meet these challenges. The technology has applications for other government and public agencies.

  8. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  9. Leptonic decay constants for D-mesons from 3-flavour CLS ensembles

    NASA Astrophysics Data System (ADS)

    Collins, Sara; Eckert, Kevin; Heitger, Jochen; Hofmann, Stefan; Söldner, Wolfgang

    2018-03-01

We report on the status of an ongoing effort by the RQCD and ALPHA Collaborations, aimed at determining leptonic decay constants of charmed mesons. Our analysis is based on large-volume ensembles generated within the CLS effort, employing Nf = 2 + 1 non-perturbatively O(a) improved Wilson quarks, a tree-level Symanzik-improved gauge action, and open boundary conditions. The ensembles cover lattice spacings from a ≈ 0.09 fm to a ≈ 0.05 fm, with pion masses varied from 420 to 200 MeV. To extrapolate to the physical masses, we follow both the (2ml + ms) = const. and the ms = const. lines in parameter space.

  10. Experimental simulation of space plasma interactions with high voltage solar arrays

    NASA Technical Reports Server (NTRS)

    Stillwell, R. P.; Kaufman, H. R.; Robinson, R. S.

    1981-01-01

    Operating high voltage solar arrays in the space environment can result in anomalously large currents being collected through small insulation defects. Tests of simulated defects have been conducted in a 45-cm vacuum chamber with plasma densities of 100,000 to 1,000,000/cu cm. Plasmas were generated using an argon hollow cathode. The solar array elements were simulated by placing a thin sheet of polyimide (Kapton) insulation with a small hole in it over a conductor. Parameters tested were: hole size, adhesive, surface roughening, sample temperature, insulator thickness, insulator area. These results are discussed along with some preliminary empirical correlations.

  11. Efficient Spatiotemporal Clutter Rejection and Nonlinear Filtering-based Dim Resolved and Unresolved Object Tracking Algorithms

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.

    2013-09-01

    We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. 
We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.

  12. Heat transfer measurements for Stirling machine cylinders

    NASA Technical Reports Server (NTRS)

    Kornhauser, Alan A.; Kafka, B. C.; Finkbeiner, D. L.; Cantelmi, F. C.

    1994-01-01

The primary purpose of this study was to measure the effects of inflow-produced turbulence on heat transfer in Stirling machine cylinders. A secondary purpose was to provide new experimental information on heat transfer in gas springs without inflow. The apparatus for the experiment consisted of a varying-volume piston-cylinder space connected to a fixed-volume space by an orifice. The orifice size could be varied to adjust the level of inflow-produced turbulence, or the orifice plate could be removed completely so as to merge the two spaces into a single gas-spring space. Speed, cycle mean pressure, overall volume ratio, and varying-volume space clearance ratio could also be adjusted. Volume, pressure in both spaces, and local heat flux at two locations were measured. The pressure and volume measurements were used to calculate area-averaged heat flux, heat transfer hysteresis loss, and other heat transfer-related effects. Experiments in the one-space arrangement extended the range of previous gas-spring tests to lower volume ratio and higher nondimensional speed. The tests corroborated previous results and showed that analytic models for heat transfer and loss based on volume ratio approaching 1 were valid for volume ratios ranging from 1 to 2, a range covering most gas springs in Stirling machines. Data from experiments in the two-space arrangement were first analyzed by lumping the two spaces together and examining total loss and averaged heat transfer as functions of overall nondimensional parameters. Heat transfer and loss were found to be significantly increased by inflow-produced turbulence. These increases could be modeled by appropriate adjustment of empirical coefficients in an existing semi-analytic model. An attempt was made to use an inverse, parameter optimization procedure to find the heat transfer in each of the two spaces. 
This procedure was successful in retrieving this information from simulated pressure-volume data with artificially generated noise, but it failed with the actual experimental data. This is evidence that the models used in the parameter optimization procedure (and to generate the simulated data) were not correct. Data from the surface heat flux sensors indicated that the primary shortcoming of these models was that they assumed turbulence levels to be constant over the cycle. Sensor data in the varying volume space showed a large increase in heat flux, probably due to turbulence, during the expansion stroke.

  13. On the Stability of Collocated Controllers in the Presence of Uncertain Nonlinearities and Other Perils

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1985-01-01

    Robustness properties are investigated for two types of controllers for large flexible space structures which use collocated sensors and actuators. The first type is an attitude controller which uses negative definite feedback of measured attitude and rate, while the second type is a damping enhancement controller which uses only velocity (rate) feedback. It is proved that collocated attitude controllers preserve closed-loop global asymptotic stability when linear actuator/sensor dynamics satisfying certain phase conditions, or monotonically increasing nonlinearities, are present. For velocity feedback controllers, global asymptotic stability is proved under much weaker conditions. In particular, they have a 90° phase margin and can tolerate nonlinearities belonging to the (0, infinity) sector in the actuator/sensor characteristics. These results significantly enhance the viability of both types of collocated controllers, especially when the available information about the large space structure (LSS) parameters is inadequate or inaccurate.

  14. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes.

    PubMed

    Binzoni, T; Leung, T S; Rüfenacht, D; Delpy, D T

    2006-01-21

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.

  15. The Franco-American macaque experiment. [bone demineralization of monkeys on Space Shuttle

    NASA Technical Reports Server (NTRS)

    Cipriano, Leonard F.; Ballard, Rodney W.

    1988-01-01

    The details of studies to be carried out jointly by French and American teams on two rhesus monkeys prepared for future experiments aboard the Space Shuttle are discussed together with the equipment involved. Seven science discipline teams were formed, which will study the effects of flight and/or weightlessness on the bone and calcium metabolism, the behavior, the cardiovascular system, the fluid balance and electrolytes, the muscle system, the neurovestibular interactions, and the sleep/biorhythm cycles. New behavioral training techniques were developed, in which the animals were trained to respond to behavioral tasks in order to measure the parameters involving eye/hand coordination, the response time to target tracking, visual discrimination, and muscle forces used by the animals. A large data set will be obtained from different animals on the two to three Space Shuttle flights; the hardware technologies developed for these experiments will be applied for primate experiments on the Space Station.

  16. Population Synthesis of Radio & Gamma-Ray Millisecond Pulsars

    NASA Astrophysics Data System (ADS)

    Frederick, Sara; Gonthier, P. L.; Harding, A. K.

    2014-01-01

    In recent years, the number of known gamma-ray millisecond pulsars (MSPs) in the Galactic disk has risen substantially thanks to confirmed detections by Fermi Gamma-ray Space Telescope (Fermi). We have developed a new population synthesis of gamma-ray and radio MSPs in the galaxy which uses Markov Chain Monte Carlo techniques to explore the large and small worlds of the model parameter space and allows for comparisons of the simulated and detected MSP distributions. The simulation employs empirical radio and gamma-ray luminosity models that are dependent upon the pulsar period and period derivative with freely varying exponents. Parameters associated with the birth distributions are also free to vary. The computer code adjusts the magnitudes of the model luminosities to reproduce the number of MSPs detected by a group of ten radio surveys, thus normalizing the simulation and predicting the MSP birth rates in the Galaxy. Computing many Markov chains leads to preferred sets of model parameters that are further explored through two statistical methods. Marginalized plots define confidence regions in the model parameter space using maximum likelihood methods. A secondary set of confidence regions is determined in parallel using Kuiper statistics calculated from comparisons of cumulative distributions. These two techniques provide feedback to affirm the results and to check for consistency. Radio flux and dispersion measure constraints have been imposed on the simulated gamma-ray distributions in order to reproduce realistic detection conditions. The simulated and detected distributions agree well for both sets of radio and gamma-ray pulsar characteristics, as evidenced by our various comparisons.
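
    The Markov Chain Monte Carlo machinery at the heart of this population synthesis can be sketched with a simple Metropolis walk over a toy two-parameter, correlated log-likelihood; the function and tuning values below are illustrative stand-ins, not the paper's luminosity models.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    # Toy correlated 2-D Gaussian standing in for the (much larger)
    # luminosity-model parameter space of the MSP simulation.
    a, b = theta
    return -0.5 * (a**2 + (b - a)**2)

def metropolis(n_steps=5000, step=0.5):
    chain = np.empty((n_steps, 2))
    theta = np.zeros(2)
    logp = log_likelihood(theta)
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(2)
        logp_prop = log_likelihood(prop)
        # Accept with probability min(1, L(prop) / L(theta)).
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        chain[i] = theta
    return chain

chain = metropolis()
# Marginalized confidence intervals come from quantiles of the chain.
lo, hi = np.quantile(chain[:, 0], [0.16, 0.84])
```

    Marginalizing the chain over one parameter, as in the last line, is the one-dimensional analogue of the confidence regions described in the abstract.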

  17. Big Data Analytics for Modelling and Forecasting of Geomagnetic Field Indices

    NASA Astrophysics Data System (ADS)

    Wei, H. L.

    2016-12-01

    A massive amount of data is produced and stored in the research areas of space weather and space climate. However, the value of a vast majority of the data acquired every day may not be effectively or efficiently exploited when we try to forecast solar wind parameters and geomagnetic field indices from these recorded measurements or digital signals, largely due to the challenges of dealing with big data, which are characterized by the 4V features: volume (a massively large amount of data), variety (a great number of different types of data), velocity (a requirement of quick processing of the data), and veracity (the trustworthiness and usability of the data). To obtain more reliable and accurate predictive models for geomagnetic field indices, models should be developed from a big data analytics perspective (or should at least benefit from such a perspective). This study proposes several data-based modelling frameworks which aim to produce more efficient predictive models for space weather parameter forecasting by means of system identification and big data analytics. More specifically, it aims to build more reliable mathematical models that characterise the relationship between solar wind parameters and geomagnetic field indices, for example the dependence of the Dst and Kp indices on a few solar wind parameters and magnetic field quantities, namely solar wind velocity (V), southward interplanetary magnetic field (Bs), solar wind rectified electric field (VBs), and dynamic flow pressure (P). Examples are provided to illustrate how the proposed modelling approaches are applied to Dst and Kp index prediction.
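
    A minimal sketch of the system-identification idea, assuming a linear ARX structure and fully synthetic signals in place of real solar wind and Dst data: fit Dst(t) = a·Dst(t-1) + b·VBs(t-1) by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in signals: VBs as input, "Dst" generated by a known
# AR-with-exogenous-input (ARX) relationship plus noise.
n = 500
vbs = rng.standard_normal(n)
dst = np.zeros(n)
for t in range(1, n):
    dst[t] = 0.9 * dst[t - 1] - 0.5 * vbs[t - 1] + 0.05 * rng.standard_normal()

# Identification step: estimate the coefficients of
# Dst(t) = a * Dst(t-1) + b * VBs(t-1) by ordinary least squares.
X = np.column_stack([dst[:-1], vbs[:-1]])
y = dst[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a_hat, b_hat = coef
```

    Real Dst/Kp models of this kind add more lags, more inputs, and nonlinear terms, but the least-squares fitting step is the same.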

  18. Asymptotic freedom in certain S O (N ) and S U (N ) models

    NASA Astrophysics Data System (ADS)

    Einhorn, Martin B.; Jones, D. R. Timothy

    2017-09-01

    We calculate the β -functions for S O (N ) and S U (N ) gauge theories coupled to adjoint and fundamental scalar representations, correcting longstanding, previous results. We explore the constraints on N resulting from requiring asymptotic freedom for all couplings. When we take into account the actual allowed behavior of the gauge coupling, the minimum value of N in both cases turns out to be larger than realized in earlier treatments. We also show that in the large N limit, both models have large regions of parameter space corresponding to total asymptotic freedom.

  19. Estimating Consequences of MMOD Penetrations on ISS

    NASA Technical Reports Server (NTRS)

    Evans, H.; Hyde, James; Christiansen, E.; Lear, D.

    2017-01-01

    The threat from micrometeoroid and orbital debris (MMOD) impacts on space vehicles is often quantified in terms of the probability of no penetration (PNP). However, for large spacecraft, especially those with multiple compartments, a penetration may have a number of possible outcomes. The extent of the damage (diameter of hole, crack length or penetration depth), the location of the damage relative to critical equipment or crew, crew response, and even the time of day of the penetration are among the many factors that can affect the outcome. For the International Space Station (ISS), a Monte-Carlo style software code called Manned Spacecraft Crew Survivability (MSCSurv) is used to predict the probability of several outcomes of an MMOD penetration-broadly classified as loss of crew (LOC), crew evacuation (Evac), loss of escape vehicle (LEV), and nominal end of mission (NEOM). By generating large numbers of MMOD impacts (typically in the billions) and tracking the consequences, MSCSurv allows for the inclusion of a large number of parameters and models as well as enabling the consideration of uncertainties in the models and parameters. MSCSurv builds upon the results from NASA's Bumper software (which provides the probability of penetration and critical input data to MSCSurv) to allow analysts to estimate the probability of LOC, Evac, LEV, and NEOM. This paper briefly describes the overall methodology used by NASA to quantify LOC, Evac, LEV, and NEOM with particular emphasis on describing in broad terms how MSCSurv works and its capabilities and most significant models.
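
    A Monte-Carlo outcome estimator in the spirit of MSCSurv can be sketched as follows; the damage distribution, thresholds, and outcome rules are entirely invented for illustration and bear no relation to the actual ISS models.

```python
import random

random.seed(42)

# Hypothetical per-penetration outcome model: hole diameter drawn from a
# simple distribution, outcome classified by illustrative thresholds.
# Thresholds and rates are invented for this sketch, not MSCSurv values.
def classify(hole_mm):
    if hole_mm > 8.0:
        return "LOC"   # loss of crew
    if hole_mm > 4.0:
        return "Evac"  # crew evacuation
    return "NEOM"      # nominal end of mission

counts = {"LOC": 0, "Evac": 0, "NEOM": 0}
n_trials = 100_000
for _ in range(n_trials):
    hole = random.expovariate(1 / 2.0)  # mean 2 mm hole diameter
    counts[classify(hole)] += 1

probabilities = {k: v / n_trials for k, v in counts.items()}
```

    Tracking consequences over many sampled penetrations, rather than a single probability of penetration, is what lets this style of code fold model and parameter uncertainties into the outcome estimates.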

  20. Trajectory-probed instability and statistics of desynchronization events in coupled chaotic systems

    NASA Astrophysics Data System (ADS)

    de Oliveira, Gilson F.; Chevrollier, Martine; Passerat de Silans, Thierry; Oriá, Marcos; de Souza Cavalcante, Hugo L. D.

    2015-11-01

    Complex systems, such as financial markets, earthquakes, and neurological networks, exhibit extreme events whose mechanisms of formation are still not completely understood. These mechanisms may be identified and better studied in simpler systems with dynamical features similar to the ones encountered in the complex system of interest. For instance, sudden and brief departures from the synchronized state observed in coupled chaotic systems were shown to display non-normal statistical distributions similar to events observed in the complex systems cited above. The currently accepted hypothesis is that these desynchronization events are influenced by the presence of unstable object(s) in the phase space of the system. Here, we present further evidence that the occurrence of large events is triggered by the visitation of the system's phase-space trajectory to the vicinity of these unstable objects. In the system studied here, this visitation is controlled by a single parameter, and we exploit this feature to observe the effect of the visitation rate on the overall instability of the synchronized state. We find that the probability of escapes from the synchronized state and the size of those desynchronization events are enhanced in attractors whose shapes permit the chaotic trajectories to approach the region of strong instability. This result shows that the occurrence of large events requires not only a large local instability to amplify noise, or to amplify the effect of parameter mismatch between the coupled subsystems, but also that the trajectories of the system wander close to this local instability.
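
    The kind of system described, coupled chaotic units with parameter mismatch whose synchronization error occasionally bursts, can be reproduced with two diffusively coupled logistic maps; the parameter values here are illustrative choices, not those of the paper.

```python
import numpy as np

# Two diffusively coupled logistic maps with a slight parameter mismatch;
# desynchronization events are excursions of |x - y| above a threshold.
r1, r2 = 3.99, 3.98   # mismatch drives departures from synchrony
eps = 0.35            # coupling strength (illustrative value)
n = 20000
x, y = 0.4, 0.41
diffs = np.empty(n)
for i in range(n):
    fx, fy = r1 * x * (1 - x), r2 * y * (1 - y)
    x = (1 - eps) * fx + eps * fy
    y = (1 - eps) * fy + eps * fx
    diffs[i] = abs(x - y)

# Count threshold crossings from below: one per desynchronization event.
threshold = 0.1
events = int(np.sum((diffs[1:] > threshold) & (diffs[:-1] <= threshold)))
```

    Sweeping the coupling strength or the mismatch in this sketch plays the role of the single control parameter discussed in the abstract: it changes how often the trajectory visits the strongly unstable region.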

  1. Dreaming of Atmospheres

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2016-10-01

    Radiative transfer retrievals have become the standard in modelling exoplanetary transmission and emission spectra. Analysing currently available observations of exoplanetary atmospheres often invokes large, correlated parameter spaces that can be difficult to map or constrain. To address these issues, we have developed the Tau-REx (tau-retrieval of exoplanets) retrieval and the RobERt spectral recognition algorithms. Tau-REx is a Bayesian atmospheric retrieval framework using Nested Sampling and cluster computing to fully map these large correlated parameter spaces. Nonetheless, data volumes can become prohibitively large, and we must often select a subset of potential molecular/atomic absorbers in an atmosphere. In the era of open-source, automated and self-sufficient retrieval algorithms, such manual input should be avoided. User-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is built to address these issues. RobERt is a deep belief network (DBN) trained to accurately recognise molecular signatures for a wide range of planets, atmospheric thermal profiles and compositions. Using these deep neural networks, we work towards retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process. In this talk I will discuss how neural networks and Bayesian Nested Sampling can be used to solve highly degenerate spectral retrieval problems and what 'dreaming' neural networks can tell us about atmospheric characteristics.

  2. Current collection by high voltage anodes in near ionospheric conditions

    NASA Technical Reports Server (NTRS)

    Antoniades, John A.; Greaves, Rod G.; Boyd, D. A.; Ellis, R.

    1990-01-01

    The authors experimentally identified three distinct regimes with large differences in current collection in the presence of neutrals and weak magnetic fields. In magnetic field/anode voltage space the three regions are separated by very sharp transition boundaries. The authors performed a series of laboratory experiments to study the dependence of the region boundaries on several parameters, such as the ambient neutral density, plasma density, magnetic field strength, applied anode voltage, voltage pulsewidth, chamber material, chamber size and anode radius. The three observed regimes are: classical magnetic field limited collection; stable medium current toroidal discharge; and large scale, high current space glow discharge. There is as much as several orders of magnitude of difference in the amount of collected current upon any boundary crossing, particularly if one enters the space glow regime. They measured some of the properties of the plasma generated by the breakdown that is present in regimes II and III in the vicinity of the anode including the sheath modified electrostatic potential, I-V characteristics at high voltage as well as the local plasma density.

  3. Simultaneous structural and control optimization via linear quadratic regulator eigenstructure assignment

    NASA Technical Reports Server (NTRS)

    Becus, G. A.; Lui, C. Y.; Venkayya, V. B.; Tischler, V. A.

    1987-01-01

    A method for simultaneous structural and control design of large flexible space structures (LFSS) to reduce vibration generated by disturbances is presented. Desired natural frequencies and damping ratios for the closed-loop system are achieved by using a combination of linear quadratic regulator (LQR) synthesis and numerical optimization techniques. The state and control weighting matrices (Q and R) are expressed in terms of structural parameters such as mass and stiffness. The design parameters are selected by numerical optimization so as to minimize the weight of the structure and to achieve the desired closed-loop eigenvalues. An illustrative example of the design of a two-bar truss is presented.

  4. Dependence of atmospheric refractive index structure parameter (Cn2) on the residence time and vertical distribution of aerosols.

    PubMed

    Anand, N; Satheesh, S K; Krishna Moorthy, K

    2017-07-15

    Effects of absorbing atmospheric aerosols in modulating the tropospheric refractive index structure parameter (Cn2) are estimated using high resolution radiosonde and multi-satellite data along with a radiative transfer model. We report the influence of variations in residence time and vertical distribution of aerosols in modulating Cn2 and why the aerosol induced atmospheric heating needs to be considered while estimating a free space optical communication link budget. The results show that performance of the link is seriously affected if large concentrations of absorbing aerosols reside for a long time in the atmospheric path.

  5. Update on the NASA Glenn PSL Ice Crystal Cloud Characterization (2016)

    NASA Technical Reports Server (NTRS)

    Van Zante, J.; Bencic, T.; Ratvasky, Thomas P.; Struk, Peter M.

    2016-01-01

    NASA Glenn's Propulsion Systems Laboratory (PSL) is an altitude engine research test facility capable of producing ice-crystal and supercooled-liquid clouds. The cloud characterization parameter space is fairly large and complex, but the phase of the cloud seems primarily governed by wet bulb temperature. The presentation discusses some of the issues uncovered through the four cloud characterization efforts to date, as well as some of the instrumentation that has been used to characterize cloud parameters, including cloud uniformity, bulk total water content, median volumetric diameter, maximum diameter, percent freeze-out, and relative humidity.

  6. Space Particle Hazard Specification, Forecasting, and Mitigation

    DTIC Science & Technology

    2007-11-30

    Automated FTP scripts permitted users to automatically update their global input parameter data set directly from the National Oceanic and...of CEASE capabilities. The angular field-of-view for CEASE is relatively large and will not allow for pitch angle resolved measurements. However... angular zones spanning 120° in the plane containing the magnetic field with an approximate 4° width in the direction perpendicular to the look-plane

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchill, R. Michael

    Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
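
    At scale this clustering would run through Spark, but the k-means step itself is easy to sketch in pure NumPy on synthetic two-cluster data standing in for velocity-space distribution samples; the algorithm is Lloyd's, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for velocity-space samples: two well-separated blobs.
data = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(200, 2)),
    rng.normal([3.0, 3.0], 0.5, size=(200, 2)),
])

def kmeans(points, centroids, n_iter=50):
    # Lloyd's algorithm: assign points to the nearest centroid, then move
    # each centroid to the mean of its assigned points.
    centroids = centroids.copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centroids)):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

init = np.array([[-1.0, -1.0], [4.0, 4.0]])  # deliberately near each blob
centroids, labels = kmeans(data, init)
```

    In a Spark setting the assignment and averaging steps are distributed over partitions of the particle data, but the per-iteration logic is the same.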

  8. On-Line Data Reconstruction in Redundant Disk Arrays.

    DTIC Science & Technology

    1994-05-01

    each sale, - file servers that support a large number of clients with differing work schedules , and * automated teller networks in banking systems...24KB Head scheduling : FIFO User data layout: Sequential in address space of array Disk spindles: Synchronized Table 2.2: Default array parameters for...package and a set of scheduling and queueing routines. 2.3.3. Default workload This dissertation reports on many performance evaluations. In order to

  9. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of the deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel level accuracy in cases with small or large translation, deformation, or rotation.
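
    The role of robust outlier rejection in the initial guess can be seen in a much simpler setting than FG-GMM registration: with synthetic matched feature coordinates under a pure translation, a median over the displacement vectors already suppresses a handful of false matches. All coordinates and values below are fabricated for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical matched feature locations: reference-image points and their
# deformed-image matches under a known translation, plus a few false matches.
true_shift = np.array([12.3, -7.8])
ref = rng.uniform(0, 500, size=(60, 2))
mov = ref + true_shift + 0.3 * rng.standard_normal((60, 2))
mov[:5] = rng.uniform(0, 500, size=(5, 2))  # simulated false matches

# Robust initial guess: the componentwise median displacement ignores the
# false matches that would wreck a plain mean.
disp = mov - ref
init_guess = np.median(disp, axis=0)
```

    The paper's FG-GMM step generalizes this idea to nonuniform deformation fields, where a single global displacement is no longer enough.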

  10. Finite-temperature phase transitions of third and higher order in gauge theories at large N

    DOE PAGES

    Nishimura, Hiromichi; Pisarski, Robert D.; Skokov, Vladimir V.

    2018-02-15

    We study phase transitions in SU(∞) gauge theories at nonzero temperature using matrix models. Our basic assumption is that the effective potential is dominated by double trace terms for the Polyakov loops. As a function of the various parameters, related to terms linear, quadratic, and quartic in the Polyakov loop, the phase diagram exhibits a universal structure. In a large region of this parameter space, there is a continuous phase transition whose order is larger than second. This is a generalization of the phase transition of Gross, Witten, and Wadia (GWW). Depending upon the detailed form of the matrix model, the eigenvalue density and the behavior of the specific heat near the transition differ drastically. Here, we speculate that in the pure gauge theory, although the deconfining transition is thermodynamically of first order, it can nevertheless be conformally symmetric at infinite N.

  12. Slow dynamics and regularization phenomena in ensembles of chaotic neurons

    NASA Astrophysics Data System (ADS)

    Rabinovich, M. I.; Varona, P.; Torres, J. J.; Huerta, R.; Abarbanel, H. D. I.

    1999-02-01

    We have explored the role of calcium concentration dynamics in the generation of chaos and in the regularization of the bursting oscillations using a minimal neural circuit of two coupled model neurons. In regions of the control parameter space where the slowest component, namely the calcium concentration in the endoplasmic reticulum, weakly depends on the other variables, this model is analogous to three-dimensional systems as found in [1] or [2]. These are minimal models that describe the fundamental characteristics of the chaotic spiking-bursting behavior observed in real neurons. We have investigated different regimes of cooperative behavior in large assemblies of such units using a lattice of non-identical Hindmarsh-Rose neurons, electrically coupled, with parameters chosen randomly inside the chaotic region. We study the regularization mechanisms in large assemblies and the development of several spatio-temporal patterns as a function of the interconnectivity among nearest neighbors.

  13. Optimizing detection and analysis of slow waves in sleep EEG.

    PubMed

    Mensen, Armand; Riedner, Brady; Tononi, Giulio

    2016-12-01

    Analysis of individual slow waves in EEG recordings during sleep provides both greater sensitivity and specificity compared to spectral power measures. However, parameters for detection and analysis have not been widely explored and validated. We present a new, open-source, Matlab-based toolbox for the automatic detection and analysis of slow waves, with adjustable parameter settings, as well as manual correction and exploration of the results using a multi-faceted visualization tool. We explore a large search space of parameter settings for slow wave detection and measure their effects on a selection of outcome parameters. Every choice of parameter setting had some effect on at least one outcome parameter. In general, the largest effect sizes were found when choosing the EEG reference, the type of canonical waveform, and the amplitude thresholding. Previously published methods accurately detect large, global waves but are conservative and miss the detection of smaller-amplitude, local slow waves. The toolbox has additional benefits in terms of speed, user interface, and visualization options to compare and contrast slow waves. The exploration of parameter settings in the toolbox highlights the importance of careful selection of detection methods. The sensitivity and specificity of the automated detection can be improved by manually adding or deleting entire waves and/or specific channels using the toolbox visualization functions. The toolbox standardizes the detection procedure, sets the stage for reliable results and comparisons, and is easy to use without previous programming experience.
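
    A heavily stripped-down detector conveys why the amplitude threshold matters so much: on a synthetic 1 Hz "slow oscillation", count the negative half-waves whose trough passes a threshold. The sampling rate, amplitudes, and threshold are arbitrary illustration values, not the toolbox's defaults.

```python
import numpy as np

fs = 100                        # Hz, illustrative sampling rate
t = np.arange(0, 30, 1 / fs)    # 30 s of synthetic "EEG"
rng = np.random.default_rng(9)
# Synthetic signal: ~1 Hz slow oscillation plus noise (microvolt scale).
signal = 80 * np.sin(2 * np.pi * 1.0 * t) + 5 * rng.standard_normal(t.size)

# Minimal detector: spans between consecutive zero crossings that are
# negative and whose trough passes the amplitude threshold count as slow
# waves. Real toolboxes add filtering, duration, and shape criteria.
threshold = -40.0
neg = signal < 0
crossings = np.flatnonzero(np.diff(neg.astype(int)))
waves = 0
for a, b in zip(crossings[:-1], crossings[1:]):
    if neg[a + 1] and signal[a + 1:b + 1].min() < threshold:
        waves += 1
```

    Raising or lowering `threshold` directly trades off missed small local waves against false detections, the effect the parameter exploration in the paper quantifies.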

  14. Effects of crustal layering on source parameter inversion from coseismic geodetic data

    NASA Astrophysics Data System (ADS)

    Amoruso, A.; Crescentini, L.; Fidani, C.

    2004-10-01

    We study the effect of a superficial layer overlying a half-space on the surface displacements caused by uniform slipping of a dip-slip normal rectangular fault. We compute static coseismic displacements using a 3-D analytical code for different characteristics of the layered medium, different fault geometries and different configurations of bench marks to simulate different kinds of geodetic data (GPS, Synthetic Aperture Radar, and levellings). We perform both joint and separate inversions of the three components of synthetic displacement without constraining fault parameters, apart from strike and rake, and using a non-linear global inversion technique under the assumption of homogeneous half-space. Differences between synthetic displacements computed in the presence of the superficial soft layer and in a homogeneous half-space do not show a simple regular behaviour, even if a few features can be identified. Consequently, also retrieved parameters of the homogeneous equivalent fault obtained by unconstrained inversion of surface displacements do not show a simple regular behaviour. We point out that the presence of a superficial layer may lead to misestimating several fault parameters both using joint and separate inversions of the three components of synthetic displacement and that the effects of the presence of the superficial layer can change whether all fault parameters are left free in the inversions or not. In the inversion of any kind of coseismic geodetic data, fault size and slip can be largely misestimated, but the product (fault length) × (fault width) × slip, which is proportional to the seismic moment for a given rigidity modulus, is often well determined (within a few per cent). 
Because inversion of coseismic geodetic data assuming a layered medium is impracticable, we suggest that only a case-to-case study involving some kind of recursive determination of fault parameters through data correction seems to give the proper approach when layering is important.

  15. The HelCat dual-source plasma device.

    PubMed

    Lynn, Alan G; Gilmore, Mark; Watts, Christopher; Herrea, Janis; Kelly, Ralph; Will, Steve; Xie, Shuangwei; Yan, Lincan; Zhang, Yue

    2009-10-01

    The HelCat (Helicon-Cathode) device has been constructed to support a broad range of basic plasma science experiments relevant to the areas of solar physics, laboratory astrophysics, plasma nonlinear dynamics, and turbulence. These research topics require a relatively large plasma source capable of operating over a broad region of parameter space with a plasma duration up to at least several milliseconds. To achieve these parameters a novel dual-source system was developed utilizing both helicon and thermionic cathode sources. Plasma parameters of n_e approximately 0.5-50 x 10^18 m^-3 and T_e approximately 3-12 eV allow access to a wide range of collisionalities important to the research. The HelCat device and initial characterization of plasma behavior during dual-source operation are described.

  16. End Effects and Load Diffusion in Composite Structures

    NASA Technical Reports Server (NTRS)

    Horgan, Cornelius O.; Ambur, D. (Technical Monitor); Nemeth, M. P. (Technical Monitor)

    2002-01-01

    The research carried out here builds on our previous NASA supported research on the general topic of edge effects and load diffusion in composite structures. Further fundamental solid mechanics studies were carried out to provide a basis for assessing the complicated modeling necessary for large scale structures used by NASA. An understanding of the fundamental mechanisms of load diffusion in composite subcomponents is essential in developing primary composite structures. Specific problems recently considered were focussed on end effects in sandwich structures and for functionally graded materials. Both linear and nonlinear (geometric and material) problems have been addressed. Our goal is the development of readily applicable design formulas for the decay lengths in terms of non-dimensional material and geometric parameters. Analytical models of load diffusion behavior are extremely valuable in building an intuitive base for developing refined modeling strategies and assessing results from finite element analyses. The decay behavior of stresses and other field quantities provides a significant aid towards this process. The analysis is also amenable to parameter study with a large parameter space and should be useful in structural tailoring studies.

  17. Hazard assessment of long-period ground motions for the Nankai Trough earthquakes

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.

    2013-12-01

    We evaluate a seismic hazard for long-period ground motions associated with the Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damages due to strong ground motions and tsunami; most recent events were in 1944 and 1946. Such large interplate earthquake potentially causes damages to high-rise and large-scale structures due to long-period ground motions (e.g., 1985 Michoacan earthquake in Mexico, 2003 Tokachi-oki earthquake in Japan). The long-period ground motions are amplified particularly on basins. Because major cities along the Nankai Trough have developed on alluvial plains, it is therefore important to evaluate long-period ground motions as well as strong motions and tsunami for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and the 3-D underground structure model. The 'characterized source model' refers to a source model including the source parameters necessary for reproducing the strong ground motions. The parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) giving the various case of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is determined by 'the long-term evaluation of earthquakes in the Nankai Trough' published by ERC. The asperity configuration and hypocenter location control the rupture directivity effects. These parameters are important because our preliminary simulations are strongly affected by the rupture directivity. We apply the system called GMS (Ground Motion Simulator) for simulating the seismic wave propagation based on 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999) to our study. 
The grid spacing for the shallow region is 200 m horizontally and 100 m vertically. The grid spacing for the deep region is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Our simulation is valid for periods longer than two seconds, given the lowest S-wave velocity and the grid spacing. However, because the characterized source model may not sufficiently represent short-period components, the reliable period range of this simulation should be interpreted with caution. We therefore consider periods longer than five seconds, rather than two seconds, for further analysis. We evaluate the long-period ground motions using velocity response spectra for periods between five and 20 seconds. The preliminary simulation shows a large variation of response spectra at a given site, which implies that the ground motion is very sensitive to the scenario; understanding the seismic hazard therefore requires studying this variation. Our further study will obtain hazard curves for the Nankai Trough earthquakes (M8~9) by applying probabilistic seismic hazard analysis to the simulation results.
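As an illustration of the evaluation step above, the sketch below computes a velocity response spectrum (peak relative velocity of damped single-degree-of-freedom oscillators) over the 5-20 s period band. The central-difference stepping, 5% damping ratio, and toy input acceleration are assumptions for illustration, not the study's FDM output.

```python
import numpy as np

def velocity_response_spectrum(acc, dt, periods, zeta=0.05):
    """Peak relative-velocity response of damped SDOF oscillators
    to a ground-acceleration record (central-difference stepping)."""
    sv = []
    for T in periods:
        wn = 2 * np.pi / T
        u_prev, u = 0.0, 0.0
        vmax = 0.0
        for a in acc:
            v = (u - u_prev) / dt            # backward-difference velocity
            u_next = (2 * u - u_prev
                      + dt**2 * (-a - 2 * zeta * wn * v - wn**2 * u))
            vmax = max(vmax, abs((u_next - u_prev) / (2 * dt)))
            u_prev, u = u, u_next
        sv.append(vmax)
    return np.array(sv)

# toy input: a decaying 0.2 Hz sine (resonant with the T = 5 s oscillator)
dt = 0.01
t = np.arange(0.0, 30.0, dt)
acc = np.sin(2 * np.pi * 0.2 * t) * np.exp(-0.1 * t)
periods = np.linspace(5.0, 20.0, 16)   # the 5-20 s band used in the study
spec = velocity_response_spectrum(acc, dt, periods)
```

Since the toy forcing is resonant with the 5 s oscillator, the spectrum peaks at the short-period end of the band.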

  18. Ultracool dwarf benchmarks with Gaia primaries

    NASA Astrophysics Data System (ADS)

    Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.

    2017-10-01

    We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ˜24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability <10⁻⁴, based on projected separation and expected constraints on common distance, common proper motion and/or common radial velocity. Within this bulk population, we identify smaller target subsets of rarer systems whose collective properties still span the full parameter space of the population, as well as systems containing primary stars that are good age calibrators. Our simulation analysis leads to a series of recommendations for candidate selection and observational follow-up that could identify ˜500 diverse Gaia benchmarks. As a test of the veracity of our methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.

  19. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. For practical problems, however, conventional inverse modeling methods can be computationally expensive because the number of measurements is often large and the model parameters are numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD, our method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Our new inverse modeling method is therefore a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
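A minimal sketch of the core idea, assuming a dense Jacobian and a standard conjugate-gradient Krylov solve of the damped normal equations; the paper's recycling of the Krylov subspace across damping parameters is omitted here.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def levenberg_marquardt(residual, jacobian, p0, lam=1e-2, iters=50):
    """One LM variant: each step solves (J^T J + lam I) delta = -J^T r
    with a Krylov (CG) solver, so J^T J is never formed explicitly."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        n = p.size
        A = LinearOperator(
            (n, n), matvec=lambda v, J=J, lam=lam: J.T @ (J @ v) + lam * v)
        delta, _ = cg(A, -J.T @ r)
        p_new = p + delta
        if np.sum(residual(p_new) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                  # reject step, increase damping
    return p

# toy problem: fit y = exp(-k t) + b to noiseless data with k=0.8, b=0.3
t = np.linspace(0.0, 5.0, 40)
data = np.exp(-0.8 * t) + 0.3
residual = lambda p: np.exp(-p[0] * t) + p[1] - data
jacobian = lambda p: np.column_stack([-t * np.exp(-p[0] * t),
                                      np.ones_like(t)])
p_fit = levenberg_marquardt(residual, jacobian, [0.2, 0.0])
```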

  20. Antagonistic and synergistic interactions among predators.

    PubMed

    Huxel, Gary R

    2007-08-01

    The structure and dynamics of food webs are largely dependent upon interactions among consumers and their resources. However, interspecific interactions such as intraguild predation and interference competition can also play a significant role in the stability of communities. The role of antagonistic/synergistic interactions among predators has been largely ignored in food web theory. These mechanisms influence predation rates, one of the key factors regulating food web structure and dynamics, so ignoring them can limit understanding of food webs. Using nonlinear models, it is shown that antagonistic/synergistic interactions among predators are critical to multiple-predator food web dynamics. The influence of antagonistic/synergistic interactions on coexistence of predators depended largely upon the parameter set used and the degree of feeding niche differentiation. In all cases with no antagonism or synergism (a_ij = 1.00), the predators coexisted. Using the stable parameter set, coexistence occurred across the range of antagonism/synergism examined. However, using the chaotic parameter set, strong antagonism resulted in the extinction of one or both species, while strong synergism tended toward coexistence. Using the limit-cycle parameter set, coexistence was strongly dependent on the degree of feeding niche overlap. Additionally, increasing the degree of feeding specialization of the predators on the two prey species increased the amount of parameter space in which coexistence of the two predators occurred. Bifurcation analyses supported the general pattern of increased stability when the predator interaction was synergistic and decreased stability when it was antagonistic. Thus, synergistic interactions should be more common than antagonistic interactions in ecological systems.
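The kind of model described can be sketched as a two-prey/two-predator Lotka-Volterra system in which a single multiplier `a` scales the predators' attack rates (a < 1 antagonism, a > 1 synergism, a = 1 neutral). All parameter values below are hypothetical and not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

def food_web(t, y, a):
    """Two prey (n1, n2), two predators (p1, p2); `a` multiplies all
    attack rates: a < 1 antagonism, a > 1 synergism, a = 1 neutral."""
    n1, n2, p1, p2 = y
    c = a * np.array([[5.0, 1.0],    # predator 1 prefers prey 1
                      [1.0, 5.0]])   # predator 2 prefers prey 2
    dn1 = n1 * (1.0 - n1) - n1 * (c[0, 0] * p1 + c[1, 0] * p2)
    dn2 = n2 * (1.0 - n2) - n2 * (c[0, 1] * p1 + c[1, 1] * p2)
    dp1 = 0.5 * p1 * (c[0, 0] * n1 + c[0, 1] * n2) - 0.4 * p1
    dp2 = 0.5 * p2 * (c[1, 0] * n1 + c[1, 1] * n2) - 0.4 * p2
    return [dn1, dn2, dp1, dp2]

y0 = [0.5, 0.5, 0.1, 0.1]
neutral = solve_ivp(food_web, (0.0, 200.0), y0, args=(1.0,),
                    rtol=1e-8, atol=1e-10)
```

Sweeping `a` below and above 1 and checking which populations persist reproduces the kind of coexistence analysis the abstract describes.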

  1. Massive data compression for parameter-dependent covariance matrices

    NASA Astrophysics Data System (ADS)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets required to estimate the covariance matrix for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10⁴, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov chain Monte Carlo analysis, this may require an unfeasible 10⁹ simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10⁶ if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10³ and making an otherwise intractable analysis feasible.
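For a single parameter with a fixed covariance, MOPED compresses the data vector to one number per parameter while preserving the Fisher information about that parameter. A sketch under those simplifying assumptions (linear mean model, known diagonal covariance; the toy template and numbers are illustrative):

```python
import numpy as np

def moped_vector(dmu, C):
    """MOPED weight vector for one parameter: b proportional to
    C^{-1} dmu/dtheta, normalized so that b^T C b = 1."""
    b = np.linalg.solve(C, dmu)
    return b / np.sqrt(dmu @ b)

rng = np.random.default_rng(0)
n = 100                                   # raw data dimension
template = np.sin(np.linspace(0.0, 3.0, n))
C = np.diag(np.full(n, 0.5))              # fixed, parameter-independent covariance

# linear toy model mu(theta) = theta * template, so dmu/dtheta = template
b = moped_vector(template, C)
x = 2.0 * template + rng.multivariate_normal(np.zeros(n), C)
y = b @ x                                 # one compressed number instead of 100
theta_hat = y / (b @ template)            # unbiased estimate of theta = 2.0
```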

  2. Skin friction drag reduction on a flat plate turbulent boundary layer using synthetic jets

    NASA Astrophysics Data System (ADS)

    Belanger, Randy; Boom, Pieter D.; Hanson, Ronald E.; Lavoie, Philippe; Zingg, David W.

    2017-11-01

    In these studies, we investigate the effect of mild synthetic jet actuation on a flat plate turbulent boundary layer with the goal of interacting with the large scales in the log region of the boundary layer and manipulating the overall skin friction. Results will be presented from both large eddy simulations (LES) and wind tunnel experiments. In the experiments, a large parameter space of synthetic jet frequency and amplitude was studied with hot film sensors at select locations behind a pair of synthetic jets to identify the parameters that produce the greatest changes in the skin friction. The LES simulations were performed for a selected set of parameters and provide a more complete evaluation of the interaction between the boundary layer and synthetic jets. Five boundary layer thicknesses downstream, the skin friction between the actuators is generally found to increase, while regions of reduced skin friction persist downstream of the actuators. This pattern is reversed for forcing at low frequency. Overall, the spanwise-averaged skin friction is increased by the forcing, except when forcing at high frequency and low amplitude, for which a net skin friction reduction persists downstream. The physical interpretation of these results will be discussed. The financial support of Airbus is gratefully acknowledged.

  3. HiPEP Ion Optics System Evaluation Using Gridlets

    NASA Technical Reports Server (NTRS)

    Williams, John D.; Farnell, Cody C.; Laufer, D. Mark; Martinez, Rafael A.

    2004-01-01

    Experimental measurements are presented for sub-scale ion optics systems comprised of 7 and 19 aperture pairs with geometrical features that are similar to the HiPEP ion optics system. Effects of hole diameter and grid-to-grid spacing are presented as functions of applied voltage and beamlet current. Recommendations are made for the beamlet current range where the ion optics system can be safely operated without experiencing direct impingement of high energy ions on the accelerator grid surface. Measurements are also presented of the accelerator grid voltage where beam plasma electrons backstream through the ion optics system. Results of numerical simulations obtained with the ffx code are compared to both the impingement limit and backstreaming measurements. An emphasis is placed on identifying differences between measurements and simulation predictions to highlight areas where more research is needed. Relatively large effects are observed in simulations when the discharge chamber plasma properties and ion optics geometry are varied. Parameters investigated using simulations include the applied voltages, grid spacing, hole-to-hole spacing, doubles-to-singles ratio, plasma potential, and electron temperature; and estimates are provided for the sensitivity of impingement limits on these parameters.

  4. TURTLE with MAD input (Trace Unlimited Rays Through Lumped Elements) -- A computer program for simulating charged particle beam transport systems and DECAY TURTLE including decay calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carey, D.C.

    1999-12-09

    TURTLE is a computer program useful for determining many characteristics of a particle beam once an initial design has been achieved. Charged particle beams are usually designed by adjusting various beam line parameters to obtain desired values of certain elements of a transfer or beam matrix. Such beam line parameters may describe certain magnetic fields and their gradients, lengths and shapes of magnets, spacings between magnetic elements, or the initial beam accepted into the system. For such purposes one typically employs a matrix multiplication and fitting program such as TRANSPORT. TURTLE is designed to be used after TRANSPORT. For convenience of the user, the input formats of the two programs have been made compatible. The use of TURTLE should be restricted to beams with small phase space. The lumped element approximation, described below, precludes the inclusion of the effect of conventional local geometric aberrations (due to large phase space) of fourth and higher order. A reading of the discussion below will indicate clearly the exact uses and limitations of the approach taken in TURTLE.

  5. Cubesat in-situ degradation detector (CIDD)

    NASA Astrophysics Data System (ADS)

    Rievers, Benny; Milke, Alexander; Salden, Daniel

    2015-07-01

    The design of the thermal control and management system (TCS) is a central task in satellite design. In order to evaluate and dimension the TCS, the material parameters specifying the conductive and radiative properties of its components have to be known, including their variations over the mission lifetime. In particular, the thermo-optical properties of the outer surfaces, including critical TCS components such as radiators and thermal insulation, are subject to degradation caused by interaction with the space environment. Evaluating these material parameters by means of ground testing is a time-consuming and expensive endeavor. Long-term in-situ measurements on board the ISS or large satellites capture the influence of the space environment more faithfully, but also imply high costs. Motivated by this, we propose the utilization of low-cost nano-satellite systems to realize material tests in space at considerably reduced cost. We present a nanosat-scale degradation sensor concept with low power consumption and data rates compatible with nanosat constraints at UHF radio. Using a predefined measurement and messaging cycle, temperature curves are measured and evaluated on the ground to extract the change of absorptivity and emissivity over the mission lifetime.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.

    Regional cloud permitting model simulations of cloud populations observed during the 2011 ARM Madden Julian Oscillation Investigation Experiment/Dynamics of Madden-Julian Experiment (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and parameterizations of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates rain rate from large and deep convective cores. Sensitivity runs involving variation of parameters that affect the rain drop or ice particle size distribution (e.g., a more aggressive break-up process) generally reduce the bias in rain-rate and boundary layer temperature statistics, as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while the statistics worsen at 4 km grid spacing as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through parameterization of turbulent mixing and break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud permitting model simulations.

  7. Parameter redundancy in discrete state-space and integrated models.

    PubMed

    Cole, Diana J; McCrea, Rachel S

    2016-09-01

    Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
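The derivative-matrix rank test that underlies such redundancy checks can be sketched symbolically: a model is parameter redundant when the Jacobian of an exhaustive summary with respect to the parameters has rank below the number of parameters. The toy summary below (only products of survival and detection probabilities enter the likelihood) is an illustrative assumption, not the paper's model.

```python
import sympy as sp

# Toy exhaustive summary: the likelihood involves only phi1*p, phi2*p
# and their product, so the three parameters cannot all be estimated.
phi1, phi2, p = sp.symbols('phi1 phi2 p', positive=True)
theta = [phi1, phi2, p]
kappa = sp.Matrix([phi1 * p, phi2 * p, phi1 * phi2 * p ** 2])

D = kappa.jacobian(theta)          # symbolic derivative matrix
redundant = D.rank() < len(theta)  # rank 2 < 3: parameter redundant
```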

  8. Weak-signal Phase Calibration Strategies for Large DSN Arrays

    NASA Technical Reports Server (NTRS)

    Jones, Dayton L.

    2005-01-01

    The NASA Deep Space Network (DSN) is studying arrays of large numbers of small, mass-produced radio antennas as a cost-effective way to increase downlink sensitivity and data rates for future missions. An important issue for the operation of large arrays is the accuracy with which signals from hundreds of small antennas can be combined. This is particularly true at Ka band (32 GHz) where atmospheric phase variations can be large and rapidly changing. A number of algorithms exist to correct the phases of signals from individual antennas in the case where a spacecraft signal provides a useful signal-to-noise ratio (SNR) on time scales shorter than the atmospheric coherence time. However, for very weak spacecraft signals it will be necessary to rely on background natural radio sources to maintain array phasing. Very weak signals could result from a spacecraft emergency or by design, such as direct-to-Earth data transmissions from distant planetary atmospheric or surface probes using only low gain antennas. This paper considers the parameter space where external real-time phase calibration will be necessary, and what this requires in terms of array configuration and signal processing. The inherent limitations of this technique are also discussed.

  9. A robust momentum management and attitude control system for the space station

    NASA Technical Reports Server (NTRS)

    Speyer, J. L.; Rhee, Ihnseok

    1991-01-01

    A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.

  10. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  11. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
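The fit-then-prune strategy can be sketched as an EM-style maximum-likelihood fit of mixture areas on a fixed logarithmic grid of time constants, followed by removal of negligible components. The EM updates, grid, and pruning threshold are illustrative assumptions; the paper's own fitting procedure differs in detail (it also merges closely spaced components).

```python
import numpy as np

def fit_exponential_sums(dwell, taus, iters=2000, min_area=0.02):
    """Maximum-likelihood areas for a mixture of exponentials with
    FIXED, log-spaced time constants (EM updates), then pruning of
    components whose fitted area is negligible."""
    w = np.full(len(taus), 1.0 / len(taus))
    for _ in range(iters):
        dens = w * np.exp(-dwell[:, None] / taus) / taus   # (n, k) densities
        resp = dens / dens.sum(axis=1, keepdims=True)      # responsibilities
        w = resp.mean(axis=0)                              # area update
    keep = w > min_area
    return taus[keep], w[keep] / w[keep].sum()

rng = np.random.default_rng(1)
# simulated dwell times: 70% from tau = 1, 30% from tau = 20
dwell = np.concatenate([rng.exponential(1.0, 7000),
                        rng.exponential(20.0, 3000)])
grid = np.logspace(-1, 3, 30)        # many log-spaced starting components
taus, areas = fit_exponential_sums(dwell, grid)
```

After pruning, only a few components near the two true time constants survive, without any user-specified starting parameters.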

  12. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  13. A Deeper Understanding of Stability in the Solar Wind: Applying Nyquist's Instability Criterion to Wind Faraday Cup Data

    NASA Astrophysics Data System (ADS)

    Alterman, B. L.; Klein, K. G.; Verscharen, D.; Stevens, M. L.; Kasper, J. C.

    2017-12-01

    Long duration, in situ data sets enable large-scale statistical analysis of free-energy-driven instabilities in the solar wind. The plasma beta and temperature anisotropy plane provides a well-defined parameter space in which a single-fluid plasma's stability can be represented. Because this reduced parameter space can only represent instability thresholds due to the free energy of one ion species - typically the bulk protons - the true impact of instabilities on the solar wind is underestimated. Nyquist's instability criterion allows us to systematically account for other sources of free energy including beams, drifts, and additional temperature anisotropies. Utilizing over 20 years of Wind Faraday cup and magnetic field observations, we have resolved the bulk parameters for three ion populations: the bulk protons, beam protons, and alpha particles. Applying Nyquist's criterion, we calculate the number of linearly growing modes supported by each spectrum and provide a more nuanced consideration of solar wind stability. Using collisional age measurements, we predict the stability of the solar wind close to the Sun. Accounting for the free energy from the three most common ion populations in the solar wind, our approach provides a more complete characterization of solar wind stability.
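Nyquist's criterion counts unstable modes as the winding number of the dispersion function around the origin as the frequency traverses a contour enclosing the unstable half-plane. A generic numerical sketch, using a toy polynomial with known roots in place of a plasma dispersion relation:

```python
import numpy as np

def winding_number(D, contour):
    """Number of times D(omega) encircles the origin as omega traverses
    the closed contour; by the argument principle this counts the zeros
    of an analytic D enclosed by the contour."""
    phase = np.unwrap(np.angle(D(contour)))
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))

# closed contour: a counter-clockwise circle of radius 10
t = np.linspace(0.0, 2.0 * np.pi, 4001)
contour = 10.0 * np.exp(1j * t)

inside = np.poly1d([1, 0, 1])      # w^2 + 1: roots +/- i, both enclosed
outside = np.poly1d([1, -20])      # w - 20: root outside the contour
print(winding_number(inside, contour),    # -> 2
      winding_number(outside, contour))   # -> 0
```

In the plasma setting, a nonzero count for a measured velocity distribution flags a spectrum that supports linearly growing modes.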

  14. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
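The sampling scheme can be sketched as the construction of a binary measurement matrix: each row picks a random center pixel and then includes nearby pixels with a probability that decays with distance. The Gaussian decay profile and parameter values below are assumptions for illustration.

```python
import numpy as np

def localized_random_matrix(m, side, scale=2.0, rng=None):
    """Binary measurement matrix for a side x side image: each row
    picks a random center pixel and samples nearby pixels with a
    probability that decays (here as a Gaussian) with distance."""
    if rng is None:
        rng = np.random.default_rng()
    yy, xx = np.mgrid[0:side, 0:side]
    Phi = np.zeros((m, side * side))
    for i in range(m):
        cy, cx = rng.integers(0, side, size=2)
        dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
        prob = np.exp(-dist2 / (2.0 * scale ** 2))  # equals 1 at the center
        mask = rng.random((side, side)) < prob
        Phi[i] = mask.ravel().astype(float)
    return Phi

Phi = localized_random_matrix(m=64, side=16, rng=np.random.default_rng(0))
# each of the 64 rows measures a localized random cluster of the 16x16 image
```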

  15. Top-philic dark matter within and beyond the WIMP paradigm

    NASA Astrophysics Data System (ADS)

    Garny, Mathias; Heisig, Jan; Hufnagel, Marco; Lülf, Benedikt

    2018-04-01

    We present a comprehensive analysis of top-philic Majorana dark matter that interacts via a colored t-channel mediator. Despite the simplicity of the model—introducing three parameters only—it provides an extremely rich phenomenology allowing us to accommodate the relic density for a large range of coupling strengths spanning over 6 orders of magnitude. This model features all "exceptional" mechanisms for dark matter freeze-out, including the recently discovered conversion-driven freeze-out mode, with interesting signatures of long-lived colored particles at colliders. We constrain the cosmologically allowed parameter space with current experimental limits from direct, indirect and collider searches, with special emphasis on light dark matter below the top mass. In particular, we explore the interplay between limits from Xenon1T, Fermi-LAT and AMS-02 as well as limits from stop, monojet and Higgs invisible decay searches at the LHC. We find that several blind spots for light dark matter evade current constraints. The region in parameter space where the relic density is set by the mechanism of conversion-driven freeze-out can be conclusively tested by R-hadron searches at the LHC with 300 fb⁻¹.

  16. Robust momentum management and attitude control system for the Space Station

    NASA Technical Reports Server (NTRS)

    Rhee, Ihnseok; Speyer, Jason L.

    1992-01-01

    A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.

  17. Computational exploration of neuron and neural network models in neurobiology.

    PubMed

    Prinz, Astrid A

    2007-01-01

    The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters (such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse) influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
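The brute-force exploration described above amounts to simulating every point of a parameter grid and storing the results for later queries. A minimal sketch with a hypothetical two-parameter stand-in for a neuron model:

```python
import itertools
import numpy as np

def simulate(g1, g2):
    """Hypothetical stand-in for a neuron model: maps two 'conductance'
    parameters to a scalar measure of activity."""
    return g1 * np.exp(-((g1 - 2.0) ** 2 + (g2 - 3.0) ** 2))

# brute-force sweep of the full parameter grid, stored as a database
grid1 = np.linspace(0.0, 4.0, 9)
grid2 = np.linspace(0.0, 6.0, 13)
database = [{"g1": g1, "g2": g2, "score": simulate(g1, g2)}
            for g1, g2 in itertools.product(grid1, grid2)]

# query the database for "functional" parameter combinations
functional = [rec for rec in database if rec["score"] > 0.5]
```

The size of the functional region relative to the full grid indicates how narrowly the parameters must be tuned to produce the desired behavior.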

  18. Getting super-excited with modified dispersion relations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashoorioon, Amjad; Casadio, Roberto; Geshnizjani, Ghazal

    We demonstrate that in some regions of parameter space, modified dispersion relations can lead to highly populated excited states, which we dub 'super-excited' states. In order to prepare such super-excited states, we invoke dispersion relations that have negative slope in an interim sub-horizon phase at high momenta. This behaviour of quantum fluctuations can lead to large corrections relative to the Bunch-Davies power spectrum, mimicking highly excited initial conditions. We identify the Bogolyubov coefficients that can yield these power spectra. In the course of this computation, we also point out the shortcomings of the gluing method for evaluating the power spectrum and the Bogolyubov coefficients. As we discuss, there are other regions of parameter space where the power spectrum does not get modified. Therefore, modified dispersion relations can also lead to so-called 'calm excited states'. We conclude by commenting on the possibility of obtaining these modified dispersion relations within the Effective Field Theory of Inflation.

  19. Chance of Vulnerability Reduction in Application-Specific NoC through Distance Aware Mapping Algorithm

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi

    2011-08-01

    The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the network-on-chip design flow. To achieve an application-specific NoC design procedure that minimizes communication cost and improves fault tolerance, we first present a heuristic mapping algorithm that produces a set of different mappings in a reasonable time. This algorithm allows designers to identify the most promising solutions in a large design space: mappings with low communication cost, optimal in some cases. Another evaluated parameter, the vulnerability index, is then considered as a criterion for estimating the fault-tolerance property of all produced mappings. Finally, in order to obtain a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that solutions within the design space can be prioritized more flexibly by adjusting a set of if-then rules in fuzzy logic.
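    The linear trade-off between the two objectives might be realized along the following lines. The weight, field names, and numeric values here are hypothetical; the abstract does not give the paper's exact function:

```python
# Each candidate mapping from the heuristic search carries two scores:
# communication cost and vulnerability index (smaller is better for both).
mappings = [
    {"name": "m1", "comm_cost": 120.0, "vulnerability": 0.8},
    {"name": "m2", "comm_cost": 150.0, "vulnerability": 0.3},
    {"name": "m3", "comm_cost": 135.0, "vulnerability": 0.5},
]

def normalise(values):
    """Rescale to [0, 1] so the two objectives are comparable."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

costs = normalise([m["comm_cost"] for m in mappings])
vulns = normalise([m["vulnerability"] for m in mappings])

# Linear trade-off function; w is a hypothetical priority weight that a
# designer (or a set of fuzzy if-then rules) could adjust.
w = 0.6
scores = [w * c + (1 - w) * v for c, v in zip(costs, vulns)]
best = mappings[scores.index(min(scores))]["name"]
print(best)
```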

  20. Regional Differences in Tropical Lightning Distributions.

    NASA Astrophysics Data System (ADS)

    Boccippio, Dennis J.; Goodman, Steven J.; Heckman, Stan

    2000-12-01

    Observations from the National Aeronautics and Space Administration Optical Transient Detector (OTD) and Tropical Rainfall Measuring Mission (TRMM)-based Lightning Imaging Sensor (LIS) are analyzed for variability between land and ocean, various geographic regions, and different (objectively defined) convective `regimes.' The bulk of the order-of-magnitude differences between land and ocean regional flash rates are accounted for by differences in storm spacing (density) and/or frequency of occurrence, rather than differences in storm instantaneous flash rates, which only vary by a factor of 2 on average. Regional variability in cell density and cell flash rates closely tracks differences in 85-GHz microwave brightness temperatures. Monotonic relationships are found with the gross moist stability of the tropical atmosphere, a large-scale `adjusted state' parameter. This result strongly suggests that it will be possible, using TRMM observations, to objectively test numerical or theoretical predictions of how mesoscale convective organization interacts with the larger-scale environment. Further parameters are suggested for a complete objective definition of tropical convective regimes.

  1. A method and data for video monitor sizing. [human CRT viewing requirements

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.; Guerin, E. G.

    1976-01-01

    The paper outlines an approach that uses analytical methods and empirical data to determine monitor size constraints based on the human operator's CRT viewing requirements, in a context where panel space and volume considerations for the Space Shuttle aft cabin constrain the size of the monitor to be used. Two cases are examined: remote scene imaging and alphanumeric character display. The central parameter used to constrain monitor size is the ratio M/L, where M is the monitor dimension and L the viewing distance. The study is restricted largely to 525-line video systems having an SNR of 32 dB and a bandwidth of 4.5 MHz; degradation in these parameters would require changes in the empirically determined visual-angle constants presented. The data and methods described are considered to apply to cases where operators are required to view, via TV, target objects that are well differentiated from a relatively sparse background. It is also necessary to identify the critical target dimensions and cues.
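    The M/L constraint can be illustrated with a short calculation. The per-character visual-angle constant and character count below are assumptions for the sketch; the paper derives its own constants empirically:

```python
import math

# Hypothetical visual-angle constant: minimum subtense per displayed
# character (NOT the paper's empirical value) and characters per line.
THETA_CHAR_ARCMIN = 15.0
CHARS_PER_LINE = 40

def min_monitor_width(viewing_distance):
    """Smallest monitor width (same units as the viewing distance L)
    that keeps a full line of characters at or above the required
    total visual angle."""
    theta_total = CHARS_PER_LINE * math.radians(THETA_CHAR_ARCMIN / 60.0)
    return viewing_distance * 2.0 * math.tan(theta_total / 2.0)

L = 0.7                      # 70 cm viewing distance
M = min_monitor_width(L)
print(round(M / L, 3))       # the M/L ratio that drives monitor sizing
```

    Note that the resulting constraint is a pure ratio: once the visual-angle requirement is fixed, M scales linearly with L, which is why the paper works with M/L rather than M alone.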

  2. Lepton flavor violating B meson decays via a scalar leptoquark

    NASA Astrophysics Data System (ADS)

    Sahoo, Suchismita; Mohanta, Rukmani

    2016-06-01

    We study the effect of scalar leptoquarks in the lepton flavor violating B meson decays induced by the flavor-changing transitions b →q li+lj- with q =s , d . In the standard model, these transitions are extremely rare as they are either two-loop suppressed or proceed via box diagrams with tiny neutrino masses in the loop. However, in the leptoquark model, they can occur at tree level and are expected to have significantly large branching ratios. The leptoquark parameter space is constrained using the experimental limits on the branching ratios of Bq→l+l- processes. Using such constrained parameter space, we predict the branching ratios of LFV semileptonic B meson decays, such as B+→K+(π+)li+lj-, B+→(K*+,ρ+)li+lj-, and Bs→ϕ li+lj-, which are found to be within the experimental reach of LHCb and the upcoming Belle II experiments. We also investigate the rare leptonic KL ,S→μ+μ-(e+e-) and KL→μ∓e± decays in the leptoquark model.

  3. Make dark matter charged again

    NASA Astrophysics Data System (ADS)

    Agrawal, Prateek; Cyr-Racine, Francis-Yan; Randall, Lisa; Scholtz, Jakub

    2017-05-01

    We revisit constraints on dark matter that is charged under a U(1) gauge group in the dark sector, decoupled from Standard Model forces. We find that the strongest constraints in the literature are subject to a number of mitigating factors. For instance, the naive dark matter thermalization timescale in halos is corrected by saturation effects that slow down isotropization for modest ellipticities. The weakened bounds uncover interesting parameter space, making models with weak-scale charged dark matter viable, even with electromagnetic strength interaction. This also leads to the intriguing possibility that dark matter self-interactions within small dwarf galaxies are extremely large, a relatively unexplored regime in current simulations. Such strong interactions suppress heat transfer over scales larger than the dark matter mean free path, inducing a dynamical cutoff length scale above which the system appears to have only feeble interactions. These effects must be taken into account to assess the viability of darkly-charged dark matter. Future analyses and measurements should probe a promising region of parameter space for this model.

  4. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The system is realized with a digital projector, and a general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is designed to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern-projecting technique.

  5. Robust Likelihoods for Inflationary Gravitational Waves from Maps of Cosmic Microwave Background Polarization

    NASA Technical Reports Server (NTRS)

    Switzer, Eric Ryan; Watts, Duncan J.

    2016-01-01

    The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.

  6. Effect of the asymmetry of the coupling of the redox molecule to the electrodes in the one-level electrochemical bridged tunneling contact on the Coulomb blockade and the operation of molecular transistor.

    PubMed

    Medvedev, Igor G

    2014-09-28

    The effect of the asymmetry of the redox molecule (RM) coupling to the working electrodes on the Coulomb blockade and on the operation of a molecular transistor is considered under ambient conditions for the case of non-adiabatic tunneling through an electrochemical contact having a one-level RM. Expressions for the tunnel current, the positions of the peaks of the tunnel current/overpotential dependencies, and their full widths at half maximum are obtained for arbitrary values of the parameter d describing the coupling asymmetry of the tunneling contact, and the effect of d on the different characteristics of the contact is studied. The tunnel current/overpotential and differential conductance/bias voltage dependencies are calculated and interpreted. In particular, it is shown that the effect of the Coulomb blockade on the tunnel current and the differential conductance has a number of new features in the case of large coupling asymmetry. It is also shown that, for rather large values of the solvent reorganization energy, the coupling asymmetry strongly enhances amplification and rectification of the tunnel current in most regions of the parameter space specifying the tunneling contact. The regions of the parameter space where both strong amplification and strong rectification take place are also identified. These results demonstrate the feasibility of an effective electrochemical transistor based on a one-level RM.

  7. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear), which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data), which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for this purpose largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete-time assumption requires fixed inspection intervals. The discrete-state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are inconsistent with the nonlinear and irreversible degradation processes of most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated through numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated gearbox test. In this application, the new state space model fits the data better than a state space model with linear and Gaussian assumptions.
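    A Monte Carlo treatment of such a nonlinear, non-Gaussian state space model is commonly done with a bootstrap particle filter. The sketch below estimates a latent degradation level (e.g. crack depth) from a noisy indirect indicator; the growth and measurement models are invented for illustration and are not the paper's:

```python
import random, math

random.seed(0)

# Latent degradation: irreversible, log-normally perturbed growth.
def step(x):
    return x + 0.1 * math.exp(random.gauss(0, 0.2))

# Indirect indicator: nonlinear function of the latent state plus noise.
def measure(x):
    return x ** 1.5 + random.gauss(0, 0.3)

# Simulate a ground-truth trajectory and its indirect observations.
truth, obs = [1.0], []
for _ in range(30):
    truth.append(step(truth[-1]))
    obs.append(measure(truth[-1]))

# Bootstrap particle filter over the latent state.
N = 2000
particles = [1.0] * N
for y in obs:
    particles = [step(p) for p in particles]             # propagate
    weights = [math.exp(-0.5 * ((y - p ** 1.5) / 0.3) ** 2)
               for p in particles]                       # likelihood
    total = sum(weights)
    weights = [w / total for w in weights]
    particles = random.choices(particles, weights, k=N)  # resample

estimate = sum(particles) / N
print(round(estimate, 2), round(truth[-1], 2))
```

    Remaining-useful-life prediction would then propagate the final particle cloud forward until each particle crosses a failure threshold, yielding a distribution of failure times rather than a point estimate.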

  8. Large-scale high density 3D AMT for mineral exploration — A case history from volcanic massive sulfide Pb-Zn deposit with 2000 AMT sites

    NASA Astrophysics Data System (ADS)

    Chen, R.; Chen, S.; He, L.; Yao, H.; Li, H.; Xi, X.; Zhao, X.

    2017-12-01

    The EM method plays a key role in exploring for volcanic massive sulfide (VMS) deposits, which are of high grade and high economic value. However, the performance of high-density 3D AMT in detecting deep concealed VMS targets is not well established. A typical VMS target is smaller than 100 m x 100 m x 50 m, so finding one at large depth is a challenging task. We carried out a test over a VMS Pb-Zn deposit using high-density 3D AMT with a site spacing of 20 m and a profile spacing of 40-80 m. About 2000 AMT sites were acquired over an area of 2000 m x 1500 m. We then used a server with 8 CPUs (Intel Xeon E7-8880 v3, 2.3 GHz, 144 cores), 2048 GB RAM, and a 40 TB disk array to invert the 3D AMT data using integral-equation forward modeling and re-weighted conjugate-gradient inversion. The VMS ore body lies at a depth of about 600 m and measures about 100 m x 100 m x 20 m, with a dip angle of about 45 degrees. We find that it is very hard to recover the location and shape of the ore body by 3D AMT inversion, even using the data from all AMT sites and frequencies. However, it is possible to recover the location and shape of the deep concealed ore body if the inversion parameters are adjusted carefully. A new set of inversion parameters had to be found for the high-density 3D AMT data set; the parameters that work well for the Dublin Secret Model II (DSM 2) are not suitable for our real data, possibly because of the different data density and number of frequencies. We found a good set of inversion parameters by comparing the shape and location of the ore body with the inversion results while trying different parameters. Applying the new inversion parameters to a nearby area with high-density AMT sites shows that the inversion result is greatly improved.

  9. Accounting for baryonic effects in cosmic shear tomography: Determining a minimal set of nuisance parameters using PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eifler, Tim; Krause, Elisabeth; Dodelson, Scott

    2014-05-28

    Systematic uncertainties that have been subdominant in past large-scale structure (LSS) surveys are likely to exceed statistical uncertainties of current and future LSS data sets, potentially limiting the extraction of cosmological information. Here we present a general framework (PCA marginalization) to consistently incorporate systematic effects into a likelihood analysis. This technique naturally accounts for degeneracies between nuisance parameters and can substantially reduce the dimension of the parameter space that needs to be sampled. As a practical application, we apply PCA marginalization to account for baryonic physics as an uncertainty in cosmic shear tomography. Specifically, we use CosmoLike to run simulated likelihood analyses on three independent sets of numerical simulations, each covering a wide range of baryonic scenarios differing in cooling, star formation, and feedback mechanisms. We simulate a Stage III (Dark Energy Survey) and Stage IV (Large Synoptic Survey Telescope/Euclid) survey and find a substantial bias in cosmological constraints if baryonic physics is not accounted for. We then show that PCA marginalization (employing at most 3 to 4 nuisance parameters) removes this bias. Our study demonstrates that it is possible to obtain robust, precise constraints on the dark energy equation of state even in the presence of large levels of systematic uncertainty in astrophysical processes. We conclude that the PCA marginalization technique is a powerful, general tool for addressing many of the challenges facing the precision cosmology program.
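    The core of PCA marginalization can be sketched in a few lines: stack the differences between baryonic-scenario data vectors and a fiducial prediction, take the SVD, and remove the leading principal components from a contaminated data vector. Everything below (vector length, number of scenarios, mode construction) is a toy setup, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

n_bins = 50                   # length of the shear data vector (toy)
fiducial = np.ones(n_bins)    # dark-matter-only prediction (toy)

# Toy 'baryonic scenarios': each perturbs the data vector along a few
# shared directions plus small scenario-specific scatter.
modes = rng.normal(size=(3, n_bins))
scenarios = np.array([fiducial
                      + rng.normal(size=3) @ modes
                      + 0.01 * rng.normal(size=n_bins)
                      for _ in range(10)])

# PCA of the baryonic differences via SVD.
diffs = scenarios - fiducial
U, s, Vt = np.linalg.svd(diffs, full_matrices=False)

# Marginalize: project the top-3 principal components out of a
# contaminated data vector.
contaminated = scenarios[0]
basis = Vt[:3]
cleaned = contaminated - basis.T @ (basis @ (contaminated - fiducial))

print(np.linalg.norm(contaminated - fiducial),
      np.linalg.norm(cleaned - fiducial))
```

    In a full likelihood analysis the amplitudes of these few components would be sampled as nuisance parameters rather than simply projected out, which is what keeps the extra parameter count at 3 to 4.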

  10. CEOS Land Surface Imaging Constellation Mid-Resolution Optical Guidelines

    NASA Technical Reports Server (NTRS)

    Keyes, Jennifer P.; Killough, B.

    2011-01-01

    The LSI community of users is large and varied. To reach all these users, as well as potential instrument contributors, this document is organized by measurement parameters of interest, such as Leaf Area Index and Land Surface Temperature. These measurement parameters and the data presented in this document are drawn from multiple sources, listed at the end of the document; the two primary ones are "The Space-Based Global Observing System in 2010 (GOS-2010)," compiled for the World Meteorological Organization (WMO) by Bizzarro Bizzarri, and the CEOS Missions, Instruments, and Measurements online database (CEOS MIM). For each measurement parameter the following topics are discussed: (1) measurement description, (2) applications, (3) measurement spectral bands, and (4) example instruments and mission information. The description of each measurement parameter starts with a definition and includes a graphic displaying the relationships to four general land surface imaging user communities: vegetation, water, earth, and geo-hazards. The vegetation community uses LSI data to assess factors related to topics such as agriculture, forest management, crop type, chlorophyll, vegetation land cover, and leaf or canopy differences. The water community analyzes snow and lake cover, water properties such as clarity, and body-of-water delineation. The earth community focuses on minerals, soils, and sediments. The geo-hazards community addresses and aids in emergencies such as volcanic eruptions, forest fires, and large-scale damaging weather-related events.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bajc, Borut; Di Luzio, Luca

    We show that judiciously chosen R-parity violating terms in the minimal renormalizable supersymmetric SU(5) are able to correct all the phenomenologically wrong mass relations between down quarks and charged leptons. The model can accommodate neutrino masses as well. One of the most striking consequences is a large mixing between the electron and the Higgsino. Finally, we show that this can still be in accord with data in some regions of the parameter space and possibly falsified in future experiments.

  12. A theoretical study of microwave beam absorption by a rectenna, introduction. [solar power satellites

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The conditions required for a large rectenna array (i.e., reference design) to absorb nearly 100% of transmitted energy were studied. Design parameters including element spacing, and the manner in which these affect scatter were formulated. Amplitudes and directions of scatter and development of strategies for mitigation were also investigated. The effects on rectenna behavior of external factors such as weather and aircraft overflights were determined.

  13. USSR and Eastern Europe Scientific Abstracts, Geophysics, Astronomy and Space, Number 392.

    DTIC Science & Technology

    1977-03-15

    ...evaluation of the parameters of the observed field. It is proposed that for models formed from a set of elements as described that the problem of... the differential energy spectra for protons during the time of large flares on the sun. [303] IMPROVEMENT OF AES ORBITAL ELEMENTS. Moscow/Leningrad, ULUSHSHENIYE ORBITAL'NYKH ELEMENTOV ISZ (Improvement in the Orbital Elements of an Artificial Earth Satellite), Leningrad Forestry Academy

  14. Large distance expansion of mutual information for disjoint disks in a free scalar theory

    DOE PAGES

    Agón, Cesar A.; Cohen-Abbo, Isaac; Schnitzer, Howard J.

    2016-11-11

    We compute the next-to-leading order term in the long-distance expansion of the mutual information for free scalars in three space-time dimensions. The geometry considered is two disjoint disks separated by a distance r between their centers. No evidence for non-analyticity in the Rényi parameter n for the continuation n → 1 in the next-to-leading order term is found.

  15. Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis

    DOE PAGES

    Beniwal, Ankit; Lewicki, Marek; Wells, James D.; ...

    2017-08-23

    We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.

  16. Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis

    NASA Astrophysics Data System (ADS)

    Beniwal, Ankit; Lewicki, Marek; Wells, James D.; White, Martin; Williams, Anthony G.

    2017-08-01

    We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. We discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.

  17. Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beniwal, Ankit; Lewicki, Marek; Wells, James D.

    We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.

  18. QUANTIFYING OBSERVATIONAL PROJECTION EFFECTS USING MOLECULAR CLOUD SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaumont, Christopher N.; Offner, Stella S.R.; Shetty, Rahul

    2013-11-10

    The physical properties of molecular clouds are often measured using spectral-line observations, which provide the only probes of the clouds' velocity structure. It is hard, though, to assess whether and to what extent intensity features in position-position-velocity (PPV) space correspond to 'real' density structures in position-position-position (PPP) space. In this paper, we create synthetic molecular cloud spectral-line maps of simulated molecular clouds, and present a new technique for measuring the reality of individual PPV structures. Using a dendrogram algorithm, we identify hierarchical structures in both PPP and PPV space. Our procedure projects density structures identified in PPP space into corresponding intensity structures in PPV space and then measures the geometric overlap of the projected structures with structures identified from the synthetic observation. The fractional overlap between a PPP and PPV structure quantifies how well the synthetic observation recovers information about the three-dimensional structure. Applying this machinery to a set of synthetic observations of CO isotopes, we measure how well spectral-line measurements recover mass, size, velocity dispersion, and virial parameter for a simulated star-forming region. By disabling various steps of our analysis, we investigate how much opacity, chemistry, and gravity affect measurements of physical properties extracted from PPV cubes. For the simulations used here, which offer a decent, but not perfect, match to the properties of a star-forming region like Perseus, our results suggest that superposition induces a ∼40% uncertainty in masses, sizes, and velocity dispersions derived from ¹³CO (J = 1-0). As would be expected, superposition and confusion are worst in regions where the filling factor of emitting material is large. The virial parameter is most affected by superposition, such that estimates of the virial parameter derived from PPV and PPP information typically disagree by a factor of ∼2. This uncertainty makes it particularly difficult to judge whether gravitational or kinetic energy dominates a given region, since the majority of virial parameter measurements fall within a factor of two of the equipartition level α ∼ 2.
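    The fractional-overlap statistic at the heart of this procedure reduces to comparing boolean masks. The toy masks below stand in for dendrogram structures (one projected from PPP, one identified in PPV); their shapes and positions are invented for illustration:

```python
import numpy as np

# Toy 3-D grids standing in for dendrogram structures: one density
# structure identified in PPP space and projected into PPV, and one
# intensity structure identified directly in the synthetic PPV cube.
grid = np.zeros((20, 20, 20), dtype=bool)
ppp_projected = grid.copy()
ppp_projected[5:12, 5:12, 5:12] = True

ppv_structure = grid.copy()
ppv_structure[7:14, 7:14, 5:12] = True

def fractional_overlap(a, b):
    """Fraction of structure a's volume shared with structure b."""
    return np.logical_and(a, b).sum() / a.sum()

f = fractional_overlap(ppp_projected, ppv_structure)
print(round(f, 3))
```

    A fractional overlap near 1 means the observed PPV feature faithfully traces the underlying density structure; values well below 1 flag superposition or confusion.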

  19. Experimental Analysis of File Transfer Rates over Wide-Area Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata

    2016-12-01

    File transfers over dedicated connections, supported by large parallel file systems, have become increasingly important in high-performance computing and big data workflows. It remains a challenge to achieve peak rates for such transfers due to the complexities of file I/O, host, and network transport subsystems, and equally importantly, their interactions. We present extensive measurements of disk-to-disk file transfers using Lustre and XFS file systems mounted on multi-core servers over a suite of 10 Gbps emulated connections with 0-366 ms round trip times. Our results indicate that large buffer sizes and many parallel flows do not always guarantee high transfer rates. Furthermore, large variations in the measured rates necessitate repeated measurements to ensure confidence in inferences based on them. We propose a new method to efficiently identify the optimal joint file I/O and network transport parameters using a small number of measurements. We show that for XFS and Lustre with direct I/O, this method identifies configurations achieving 97% of the peak transfer rate while probing only 12% of the parameter space.

  20. Machine-assisted discovery of relationships in astronomy

    NASA Astrophysics Data System (ADS)

    Graham, Matthew J.; Djorgovski, S. G.; Mahabal, Ashish A.; Donalek, Ciro; Drake, Andrew J.

    2013-05-01

    High-volume feature-rich data sets are becoming the bread-and-butter of 21st century astronomy but present significant challenges to scientific discovery. In particular, identifying scientifically significant relationships between sets of parameters is non-trivial. Similar problems in biological and geosciences have led to the development of systems which can explore large parameter spaces and identify potentially interesting sets of associations. In this paper, we describe the application of automated discovery systems of relationships to astronomical data sets, focusing on an evolutionary programming technique and an information-theory technique. We demonstrate their use with classical astronomical relationships - the Hertzsprung-Russell diagram and the Fundamental Plane of elliptical galaxies. We also show how they work with the issue of binary classification which is relevant to the next generation of large synoptic sky surveys, such as the Large Synoptic Survey Telescope (LSST). We find that comparable results to more familiar techniques, such as decision trees, are achievable. Finally, we consider the reality of the relationships discovered and how this can be used for feature selection and extraction.
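    One simple information-theoretic way to rank candidate relationships, in the spirit of the techniques described above, is to estimate the mutual information between parameter pairs and flag the pairs that score highest. The catalogue columns and the power-law link below are synthetic illustrations, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'survey catalogue': three columns, two of which are related by a
# deterministic power law plus scatter, one of which is independent.
n = 5000
mass = rng.uniform(1.0, 10.0, n)
radius = mass ** 0.56 + rng.normal(0, 0.05, n)   # related to mass
colour = rng.uniform(0.0, 1.0, n)                # unrelated to mass

def mutual_information(x, y, bins=20):
    """Histogram estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# The related pair should carry far more mutual information than the
# unrelated one, so it would be flagged as a candidate relationship.
print(mutual_information(mass, radius), mutual_information(mass, colour))
```

    Histogram estimators of mutual information carry a positive bias of order bins²/(2n), so in practice a significance threshold is calibrated against shuffled columns before declaring a relationship real.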

  1. Charting the parameter space of the global 21-cm signal

    NASA Astrophysics Data System (ADS)

    Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan; Lotem, Matan

    2017-12-01

    The early star-forming Universe is still poorly constrained, with the properties of high-redshift stars, the first heating sources and reionization highly uncertain. This leaves observers planning 21-cm experiments with little theoretical guidance. In this work, we explore the possible range of high-redshift parameters including the star formation efficiency and the minimal mass of star-forming haloes; the efficiency, spectral energy distribution and redshift evolution of the first X-ray sources; and the history of reionization. These parameters are only weakly constrained by available observations, mainly the optical depth to the cosmic microwave background. We use realistic semi-numerical simulations to produce the global 21-cm signal over the redshift range z = 6-40 for each of 193 different combinations of the astrophysical parameters spanning the allowed range. We show that the expected signal fills a large parameter space, but with a fixed general shape for the global 21-cm curve. Even with our wide selection of models, we still find clear correlations between the key features of the global 21-cm signal and underlying astrophysical properties of the high-redshift Universe, namely the Ly α intensity, the X-ray heating rate and the production rate of ionizing photons. These correlations can be used to directly link future measurements of the global 21-cm signal to astrophysical quantities in a mostly model-independent way. We identify additional correlations that can be used as consistency checks.

  2. On the Essence of Space

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2003-04-01

    A new theory of space is suggested. It represents the new point of view which has arisen from the critical analysis of the foundations of physics (in particular the theory of relativity and quantum mechanics), mathematics, cosmology and philosophy. The main idea following from the analysis is that the concept of movement represents a key to understanding of the essence of space. The starting-point of the theory is represented by the following philosophical (dialectical materialistic) principles. (a) The principle of the materiality (of the objective reality) of the Nature: the Nature (the Universe) is a system (a set) of material objects (particles, bodies, fields); each object has properties, features, and the properties, the features are inseparable characteristics of material object and belong only to material object. (b) The principle of the existence of material object: an object exists as the objective reality, and movement is a form of existence of object. (c) The principle (definition) of movement of object: the movement is change (i.e. transition of some states into others) in general; the movement determines a direction, and direction characterizes the movement. (d) The principle of existence of time: the time exists as the parameter of the system of reference. These principles lead to the following statements expressing the essence of space. (1) There is no space in general, and there exist space only as a form of existence of the properties and features of the object. It means that the space is a set of the measures of the object (the measure is the philosophical category meaning unity of the qualitative and quantitative determinacy of the object). In other words, the space of the object is a set of the states of the object. (2) The states of the object are manifested only in a system of reference. 
The main informational property of the unitary system "researched physical object + system of reference" is that the system of reference determines (measures, calculates) the parameters of the subsystem "researched physical object" (for example, the coordinates of the object M); the parameters characterize the system of reference (for example, the system of coordinates S). (3) Each parameter of the object is its measure. The total number of the mutually independent parameters of the object is called the dimension of the space of the object. (4) The set of numerical values (i.e. the range, the spectrum) of each parameter is the subspace of the object. (The coordinate space, the momentum space and the energy space are examples of the subspaces of the object.) (5) The set of the parameters of the object is divided into two non-intersecting (opposite) classes: the class of the internal parameters and the class of the non-internal (i.e. external) parameters. The class of the external parameters is divided into two non-intersecting (opposite) subclasses: the subclass of the absolute parameters (characterizing the form, the sizes of the object) and the subclass of the non-absolute (relative) parameters (characterizing the position, the coordinates of the object). (6) The set of the external parameters forms the external space of the object. It is called the geometrical space of the object. (7) Since a macroscopic object has three mutually independent sizes, the dimension of its external absolute space is equal to three. Consequently, the dimension of its external relative space is also equal to three. Thus, the total dimension of the external space of the macroscopic object is equal to six. (8) In the general case, the external absolute space (i.e. the form, the sizes) and the external relative space (i.e. the position, the coordinates) of any object are mutually dependent because of the influence of a medium. The geometrical space of such an object is called non-Euclidean space.
If the external absolute space and the external relative space of some object are mutually independent, then the external relative space of such an object is a homogeneous and isotropic geometrical space. It is called the Euclidean space of the object. Consequences: (i) the question of the true geometry of the Universe is incorrect; (ii) the theory of relativity has no physical meaning.

  3. Space station dynamic modeling, disturbance accommodation, and adaptive control

    NASA Technical Reports Server (NTRS)

    Wang, S. J.; Ih, C. H.; Lin, Y. H.; Metter, E.

    1985-01-01

Dynamic models for two space station configurations were derived. Space shuttle docking disturbances and their effects on the station and solar panels are quantified. It is shown that hard shuttle docking can cause solar panel buckling. Soft docking and berthing can substantially reduce structural loads at the expense of large shuttle and station attitude excursions. It is found that predocking shuttle momentum reduction is necessary to achieve safe and routine operations. A direct model reference adaptive control scheme is synthesized and evaluated for station model parameter errors and plant dynamics truncation. Both the rigid body and the flexible modes are treated. It is shown that convergence of the adaptive algorithm can be achieved within 100 seconds with reasonable performance, even during shuttle hard docking operations in which station mass and inertia are instantaneously changed by more than 100%.

  4. Preliminary control/structure interaction study of coupled Space Station Freedom/Assembly Work Platform/orbiter

    NASA Technical Reports Server (NTRS)

    Singh, Sudeep K.; Lindenmoyer, Alan J.

    1989-01-01

    Results are presented from a preliminary control/structure interaction study of the Space Station, the Assembly Work Platform, and the STS orbiter dynamics coupled with the orbiter and station control systems. The first three Space Station assembly flight configurations and their finite element representations are illustrated. These configurations are compared in terms of control authority in each axis and propellant usage. The control systems design parameters during assembly are computed. Although the rigid body response was acceptable with the orbiter Primary Reaction Control System, the flexible body response showed large structural deflections and loads. It was found that severe control/structure interaction occurred if the stiffness of the Assembly Work Platform was equal to that of the station truss. Also, the response of the orbiter Vernier Reaction Control System to small changes in inertia properties is examined.

  5. The geometric field (gravity) as an electro-chemical potential in a Ginzburg-Landau theory of superconductivity

    NASA Astrophysics Data System (ADS)

    Atanasov, Victor

    2017-07-01

We extend the superconductor's free energy to include an interaction of the order parameter with the curvature of space-time. This interaction leads to a geometry-dependent coherence length and Ginzburg-Landau parameter, which suggests that the curvature of space-time can change the superconductor's type. The curvature of space-time does not affect the ideal diamagnetism of the superconductor but acts as a chemical potential. In a particular circumstance, the geometric field becomes order-parameter dependent; therefore, the superconductor's order-parameter dynamics affects the curvature of space-time, and electrical or internal quantum mechanical energy can be channelled into the curvature of space-time. Experimental consequences are discussed.

  6. Towards large volume big divisor D3/D7 " μ-split supersymmetry" and Ricci-flat Swiss-cheese metrics, and dimension-six neutrino mass operators

    NASA Astrophysics Data System (ADS)

    Dhuria, Mansi; Misra, Aalok

    2012-02-01

We show that it is possible to realize a "μ-split SUSY" scenario (Cheng and Cheng, 2005) [1] in the context of the large volume limit of type IIB compactifications on Swiss-cheese Calabi-Yau orientifolds in the presence of a mobile space-time filling D3-brane and a (stack of) D7-brane(s) wrapping the "big" divisor. For this, we investigate the possibility of getting one Higgs to be light while the other is heavy, in addition to a heavy higgsino mass parameter. Further, we examine the existence of a long-lived gluino, which manifests one of the major consequences of the μ-split SUSY scenario, by computing its decay width as well as the lifetime corresponding to the three-body decays of the gluino into either a quark, a squark and a neutralino or a quark, a squark and a goldstino, as well as the two-body decays of the gluino into either a neutralino and a gluon or a goldstino and a gluon. Guided by the geometric Kähler potential for Σ obtained in Misra and Shukla (2010) [2] based on GLSM techniques, and Donaldson's algorithm (Barun et al., 2008) [3] for numerically obtaining a Ricci-flat metric, we give details of our calculation in Misra and Shukla (2011) [4] pertaining to our proposed metric for the full Swiss-cheese Calabi-Yau (the geometric Kähler potential needs to be included in the full moduli-space Kähler potential in the presence of the mobile space-time filling D3-brane), but, for simplicity of calculation, close to the big divisor, where it is Ricci-flat in the large volume limit. Also, as an application of the one-loop RG flow solution for the higgsino mass parameter, we show that the contribution to the neutrino masses at the EW scale from dimension-six operators arising from the Kähler potential is suppressed relative to the Weinberg-type dimension-five operators.

  7. Radius Determination of Solar-type Stars Using Asteroseismology: What to Expect from the Kepler Mission

    NASA Astrophysics Data System (ADS)

    Stello, Dennis; Chaplin, William J.; Bruntt, Hans; Creevey, Orlagh L.; García-Hernández, Antonio; Monteiro, Mario J. P. F. G.; Moya, Andrés; Quirion, Pierre-Olivier; Sousa, Sergio G.; Suárez, Juan-Carlos; Appourchaux, Thierry; Arentoft, Torben; Ballot, Jerome; Bedding, Timothy R.; Christensen-Dalsgaard, Jørgen; Elsworth, Yvonne; Fletcher, Stephen T.; García, Rafael A.; Houdek, Günter; Jiménez-Reyes, Sebastian J.; Kjeldsen, Hans; New, Roger; Régulo, Clara; Salabert, David; Toutain, Thierry

    2009-08-01

For distant stars, as observed by the NASA Kepler satellite, parallax information is currently of fairly low quality and is not complete. This limits the precision with which the absolute sizes of the stars and their potential transiting planets can be determined by traditional methods. Asteroseismology will be used to aid the radius determination of stars observed during NASA's Kepler mission. We report on the recent asteroFLAG hare-and-hounds Exercise #2, where a group of "hares" simulated data of F-K main-sequence stars that a group of "hounds" sought to analyze, aimed at determining the stellar radii. We investigated stars in the range 9 < V < 15, both with and without parallaxes. We further test different uncertainties in Teff, and compare results with and without using asteroseismic constraints. Based on the asteroseismic large frequency spacing, obtained from simulations of 4 yr time series data from the Kepler mission, we demonstrate that the stellar radii can be correctly and precisely determined, when combined with traditional stellar parameters from the Kepler Input Catalogue. The radii found by the various methods used by each independent hound generally agree with the true values of the artificial stars to within 3%, when the large frequency spacing is used. This is 5-10 times better than the results where seismology is not applied. These results give strong confidence that radius estimation can be performed to better than 3% for solar-like stars using automatic pipeline reduction. Even when the stellar distance and luminosity are unknown we can obtain the same level of agreement. Given the uncertainties used for this exercise we find that the input log g and parallax do not help to constrain the radius, and that Teff and metallicity are the only parameters we need in addition to the large frequency spacing. It is the uncertainty in the metallicity that dominates the uncertainty in the radius.
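    How the large frequency spacing feeds into a radius estimate can be illustrated with the standard asteroseismic direct-method scaling relations; the formula and solar reference values below are textbook relations, not taken from the exercise itself:

```python
import math

# Commonly used solar reference values (illustrative; not from the paper)
DNU_SUN = 135.1     # muHz, large frequency spacing
NUMAX_SUN = 3090.0  # muHz, frequency of maximum oscillation power
TEFF_SUN = 5777.0   # K

def seismic_radius(dnu, numax, teff):
    """Radius in solar units from the direct-method scaling relation:
    R/Rsun = (numax/numax_sun) * (dnu/dnu_sun)**-2 * (Teff/Teff_sun)**0.5."""
    return (numax / NUMAX_SUN) * (dnu / DNU_SUN) ** -2 * math.sqrt(teff / TEFF_SUN)

# Sun-like inputs should return ~1 solar radius
print(round(seismic_radius(135.1, 3090.0, 5777.0), 3))  # 1.0
```

In practice, as the abstract notes, the spacing is combined with Teff and metallicity via stellar models rather than used in this purely algebraic form.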

  8. Apodized Pupil Lyot Coronagraphs designs for future segmented space telescopes

    NASA Astrophysics Data System (ADS)

    St. Laurent, Kathryn; Fogarty, Kevin; Zimmerman, Neil; N’Diaye, Mamadou; Stark, Chris; Sivaramakrishnan, Anand; Pueyo, Laurent; Vanderbei, Robert; Soummer, Remi

    2018-01-01

A coronagraphic starlight suppression system situated on a future flagship space observatory offers a promising avenue to image Earth-like exoplanets and search for biomarkers in their atmospheric spectra. One NASA mission concept that could serve as the platform to realize this scientific breakthrough is the Large UV/Optical/IR Surveyor (LUVOIR). Such a mission would also address a broad range of topics in astrophysics with a multi-wavelength suite of instruments. In support of the community's assessment of the scientific capability of a LUVOIR mission, the Exoplanet Exploration Program (ExEP) has launched a multi-team technical study: Segmented Coronagraph Design and Analysis (SCDA). The goal of this study is to develop viable coronagraph instrument concepts for a LUVOIR-type mission. Results of the SCDA effort will directly inform the mission concept evaluation being carried out by the LUVOIR Science and Technology Definition Team. The apodized pupil Lyot coronagraph (APLC) is one of several coronagraph design families that the SCDA study is assessing. The APLC is a Lyot-style coronagraph that suppresses starlight through a series of amplitude operations on the on-axis field. Given a suite of seven plausible segmented telescope apertures, we have developed an object-oriented software toolkit to automate the exploration of thousands of APLC design parameter combinations. In the course of exploring this parameter space we have established relationships between APLC throughput and telescope aperture geometry, Lyot stop, inner working angle, bandwidth, and contrast level. In parallel with the parameter space exploration, we have investigated several strategies to improve the robustness of APLC designs to fabrication and alignment errors and integrated a Design Reference Mission framework to evaluate designs with scientific yield metrics.

  9. Bell's theorem and the problem of decidability between the views of Einstein and Bohr.

    PubMed

    Hess, K; Philipp, W

    2001-12-04

    Einstein, Podolsky, and Rosen (EPR) have designed a gedanken experiment that suggested a theory that was more complete than quantum mechanics. The EPR design was later realized in various forms, with experimental results close to the quantum mechanical prediction. The experimental results by themselves have no bearing on the EPR claim that quantum mechanics must be incomplete nor on the existence of hidden parameters. However, the well known inequalities of Bell are based on the assumption that local hidden parameters exist and, when combined with conflicting experimental results, do appear to prove that local hidden parameters cannot exist. This fact leaves only instantaneous actions at a distance (called "spooky" by Einstein) to explain the experiments. The Bell inequalities are based on a mathematical model of the EPR experiments. They have no experimental confirmation, because they contradict the results of all EPR experiments. In addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions; for instance, he assumes that the hidden parameters are governed by a single probability measure independent of the analyzer settings. We argue that the mathematical model of Bell excludes a large set of local hidden variables and a large variety of probability densities. Our set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does permit derivation of the quantum result and is consistent with all known experiments.
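    For context, the quantum-mechanical prediction that conventional Bell/CHSH analyses compare against can be reproduced in a few lines; the singlet correlation E(a, b) = -cos(a - b) and the analyzer angles below are textbook choices, not part of the authors' extended hidden-variable model:

```python
import math

def E(a, b):
    """Singlet-state spin correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination of the four correlations."""
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Standard angle choice maximizing the quantum value
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(round(S, 4))  # 2.8284, i.e. 2*sqrt(2), above the local bound of 2
```

The debate summarized in the abstract concerns which classes of hidden-variable models are actually excluded when experiments reproduce this value.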

  10. Enhanced solar energy options using earth-orbiting mirrors

    NASA Technical Reports Server (NTRS)

    Gilbreath, W. P.; Billman, K. W.; Bowen, S. W.

    1978-01-01

    A system of orbiting space reflectors is described, analyzed, and shown to economically provide nearly continuous insolation to preselected ground sites, producing benefits hitherto lacking in conventional solar farms and leading to large reductions in energy costs for such installations. Free-flying planar mirrors of about 1 sq km are shown to be optimum and can be made at under 10 g/sq m of surface, thus minimizing material needs and space transportation costs. Models are developed for both the design of such mirrors and for the analysis of expected ground insolation as a function of orbital parameters, time, and site location. Various applications (agricultural, solar-electric production, weather enhancement, etc.) are described.

  11. Abundant stable gauge field hair for black holes in anti-de Sitter space.

    PubMed

    Baxter, J E; Helbling, Marc; Winstanley, Elizabeth

    2008-01-11

    We present new hairy black hole solutions of SU(N) Einstein-Yang-Mills (EYM) theory in asymptotically anti-de Sitter (AdS) space. These black holes are described by N+1 independent parameters and have N-1 independent gauge field degrees of freedom. Solutions in which all gauge field functions have no zeros exist for all N, and for a sufficiently large (and negative) cosmological constant. At least some of these solutions are shown to be stable under classical, linear, spherically symmetric perturbations. Therefore there is no upper bound on the amount of stable gauge field hair with which a black hole in AdS can be endowed.

  12. Plasma layers near the electrodes of a cesium diode - Anode layer

    NASA Astrophysics Data System (ADS)

    Oganezov, Z. A.; Timoshenko, L. S.; Tskhakaya, V. K.

    1982-08-01

    A planar electron beam probe is used to study the plasma layer in contact with a nonemitting electrode. It is found that the field distribution in the space-charge region of the layer adjacent to a nonemitting electrode is linear and obeys a specific empirical relation over a large range of variation in the plasma parameters, while the potential distribution has a corresponding parabolic form. In order for these values to be consistent, it is necessary to assume that the potential at the boundary between the quasi-neutral plasma and the space-charge is equal to a value which is substantially larger than the theoretically permitted potential drop in a quasi-neutral plasma.

  13. Enhanced secure 4-D modulation space optical multi-carrier system based on joint constellation and Stokes vector scrambling.

    PubMed

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2018-03-19

This paper proposes and demonstrates an enhanced secure 4-D modulation optical generalized filter bank multi-carrier (GFBMC) system based on joint constellation and Stokes vector scrambling. The constellation and Stokes vectors are scrambled by using different scrambling parameters. A multi-scroll Chua's circuit map is adopted as the chaotic model. A large secure key space can be obtained due to the multi-scroll attractors and the independent operability of subcarriers. A 40.32 Gb/s encrypted optical GFBMC signal with 128 parallel subcarriers is successfully demonstrated in the experiment. The results show good resistance against an illegal receiver and indicate a potential way forward for future optical multi-carrier systems.
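    The core idea of chaos-based scrambling can be sketched with a logistic map standing in for the paper's multi-scroll Chua circuit (a deliberate simplification): the map parameter and seed act as the secret key, and the chaotic orbit defines a key-dependent permutation. All values below are illustrative.

```python
def chaotic_permutation(n, r=3.99, x0=0.37, burn=100):
    """Key-dependent permutation of n positions: iterate a logistic map
    (r, x0 are the 'key'), discard a burn-in, then rank the orbit values."""
    x, seq = x0, []
    for i in range(burn + n):
        x = r * x * (1.0 - x)
        if i >= burn:
            seq.append(x)
    return sorted(range(n), key=lambda i: seq[i])

perm = chaotic_permutation(8)
data = list(range(8))
scrambled = [data[i] for i in perm]

# The legal receiver holds the same key, rebuilds perm, and inverts it
inv = [0] * 8
for pos, i in enumerate(perm):
    inv[i] = pos
restored = [scrambled[inv[i]] for i in range(8)]
print(restored == data)  # True
```

An illegal receiver without the exact (r, x0) key obtains a different orbit, hence a different permutation, which is what makes the key space of the real system valuable.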

  14. NASA Space Geodesy Program: GSFC data analysis, 1993. VLBI geodetic results 1979 - 1992

    NASA Technical Reports Server (NTRS)

    Ma, Chopo; Ryan, James W.; Caprette, Douglas S.

    1994-01-01

    The Goddard VLBI group reports the results of analyzing Mark 3 data sets acquired from 110 fixed and mobile observing sites through the end of 1992 and available to the Space Geodesy Program. Two large solutions were used to obtain site positions, site velocities, baseline evolution for 474 baselines, earth rotation parameters, nutation offsets, and radio source positions. Site velocities are presented in both geocentric Cartesian and topocentric coordinates. Baseline evolution is plotted for the 89 baselines that were observed in 1992 and positions at 1988.0 are presented for all fixed stations and mobile sites. Positions are also presented for quasar radio sources used in the solutions.

  15. Collider probes of axion-like particles

    NASA Astrophysics Data System (ADS)

    Bauer, Martin; Neubert, Matthias; Thamm, Andrea

    2017-12-01

    Axion-like particles (ALPs), which are gauge-singlets under the Standard Model (SM), appear in many well-motivated extensions of the SM. Describing the interactions of ALPs with SM fields by means of an effective Lagrangian, we discuss ALP decays into SM particles at one-loop order, including for the first time a calculation of the a → πππ decay rates for ALP masses below a few GeV. We argue that, if the ALP couples to at least some SM particles with couplings of order (0.01 - 1) TeV-1, its mass must be above 1 MeV. Taking into account the possibility of a macroscopic ALP decay length, we show that large regions of so far unconstrained parameter space can be explored by searches for the exotic, on-shell Higgs and Z decays h → Za, h → aa and Z → γa in Run-2 of the LHC with an integrated luminosity of 300 fb-1. This includes the parameter space in which ALPs can explain the anomalous magnetic moment of the muon. Considering subsequent ALP decays into photons and charged leptons, we show that the LHC provides unprecedented sensitivity to the ALP-photon and ALP-lepton couplings in the mass region above a few MeV, even if the relevant ALP couplings are loop suppressed and the a → γγ and a → ℓ+ℓ- branching ratios are significantly less than 1. We also discuss constraints on the ALP parameter space from electroweak precision tests.

  16. Non-singular Brans–Dicke collapse in deformed phase space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rasouli, S.M.M., E-mail: mrasouli@ubi.pt; Centro de Matemática e Aplicações; Physics Group, Qazvin Branch, Islamic Azad University, Qazvin

    2016-12-15

We study the collapse process of a homogeneous perfect fluid (in FLRW background) with a barotropic equation of state in Brans–Dicke (BD) theory in the presence of phase space deformation effects. Such a deformation is introduced as a particular type of non-commutativity between phase space coordinates. For the commutative case, it has been shown in the literature (Scheel, 1995), that the dust collapse in BD theory leads to the formation of a spacetime singularity which is covered by an event horizon. In comparison to general relativity (GR), the authors concluded that the final state of black holes in BD theory is identical to the GR case but differs from GR during the dynamical evolution of the collapse process. However, the presence of non-commutative effects influences the dynamics of the collapse scenario and consequently a non-singular evolution is developed in the sense that a bounce emerges at a minimum radius, after which an expanding phase begins. Such a behavior is observed for positive values of the BD coupling parameter. For large positive values of the BD coupling parameter, when non-commutative effects are present, the dynamics of collapse process differs from the GR case. Finally, we show that for negative values of the BD coupling parameter, the singularity is replaced by an oscillatory bounce occurring at a finite time, with the frequency of oscillation and amplitude being damped at late times.

  17. Systematic Improvement of Potential-Derived Atomic Multipoles and Redundancy of the Electrostatic Parameter Space.

    PubMed

    Jakobsen, Sofie; Jensen, Frank

    2014-12-09

We assess the accuracy of force field (FF) electrostatics at several levels of approximation, from the standard model using fixed partial charges to conformation-specific multipole fits including up to quadrupole moments. Potential-derived point charges and multipoles are calculated using least-squares methods for a total of ∼1000 different conformations of the 20 natural amino acids. In contrast to standard charge fitting schemes, the procedure presented in the current work employs fitting points placed on a single isodensity surface, since the electrostatic potential (ESP) on such a surface determines the ESP at all points outside this surface. We find that the effect of multipoles beyond partial atomic charges is of the same magnitude as the effect due to neglecting conformational dependency (i.e., polarizability), suggesting that the two effects should be included at the same level in FF development. The redundancy at both the partial charge and multipole levels of approximation is quantified. We present an algorithm which stepwise reduces or increases the dimensionality of the charge or multipole parameter space and provides an upper limit of the ESP error that can be obtained at a given truncation level. Thereby, we can identify a reduced set of multipole moments corresponding to ∼40% of the total number of multipoles. This subset of parameters provides a significant improvement in the representation of the ESP compared to the simple point charge model and close to the accuracy obtained using the complete multipole parameter space. The selection of the ∼40% most important multipole sites is highly transferable among different conformations, and we find that quadrupoles are of high importance for atoms involved in π-bonding, since the anisotropic electric field generated in such regions requires a large degree of flexibility.
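    A minimal sketch of potential-derived charge fitting, assuming a toy three-atom geometry and exact noiseless reference potentials rather than the amino-acid conformations and isodensity-surface points used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy geometry: 3 "atoms" and reference charges (all values hypothetical)
atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
true_q = np.array([-0.4, 0.3, 0.1])

# Fitting points on a radius-5 shell enclosing the molecule
pts = rng.normal(size=(200, 3))
pts = 5.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)

# Design matrix: Coulomb kernel 1/|r_i - R_j| (atomic units)
A = 1.0 / np.linalg.norm(pts[:, None, :] - atoms[None, :, :], axis=2)
V_ref = A @ true_q  # "reference" ESP evaluated at the fitting points

# Least-squares fit of point charges reproducing the ESP
q_fit, *_ = np.linalg.lstsq(A, V_ref, rcond=None)
print(np.allclose(q_fit, true_q, atol=1e-6))  # True
```

The paper's procedure additionally handles higher multipoles, total-charge constraints, and the redundancy analysis; this sketch only shows the basic least-squares step.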

  18. Design Space Toolbox V2: Automated Software Enabling a Novel Phenotype-Centric Modeling Strategy for Natural and Synthetic Biological Systems

    PubMed Central

    Lomnitz, Jason G.; Savageau, Michael A.

    2016-01-01

    Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and they do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predict values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric, modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. 
In one example, inspection of the basins of attraction reveals that the circuit can count between three stable states by transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346
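    The underlying task of locating multiple steady states can be illustrated on a toy self-activating gene (a textbook Hill-function model chosen for illustration, not the Design Space Toolbox itself):

```python
import math

# Toy self-activation with Hill coefficient 2: dx/dt = BETA*x^2/(1+x^2) - x
# With BETA = 3 the fixed points are x = 0 and x = (3 ± sqrt(5))/2.
BETA = 3.0

def f(x):
    return BETA * x * x / (1.0 + x * x) - x

def bisect(lo, hi, tol=1e-12):
    """Refine a sign-change bracket [lo, hi] down to width tol."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

roots = [0.0]  # x = 0 is always a fixed point of this model
xs = [i * 0.01 for i in range(1, 1000)]
for a, b in zip(xs, xs[1:]):
    if f(a) * f(b) < 0:  # bracket every remaining steady state
        roots.append(bisect(a, b))

print([round(r, 3) for r in roots])  # [0.0, 0.382, 2.618]
```

Here the outer two fixed points are stable and the middle one is unstable, giving the bistability that tools like the one in the paper enumerate systematically across parameter space.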

  19. The Gaia-ESO Survey: Calibration strategy

    NASA Astrophysics Data System (ADS)

    Pancino, E.; Lardo, C.; Altavilla, G.; Marinoni, S.; Ragaini, S.; Cocozza, G.; Bellazzini, M.; Sabbi, E.; Zoccali, M.; Donati, P.; Heiter, U.; Koposov, S. E.; Blomme, R.; Morel, T.; Símon-Díaz, S.; Lobel, A.; Soubiran, C.; Montalban, J.; Valentini, M.; Casey, A. R.; Blanco-Cuaresma, S.; Jofré, P.; Worley, C. C.; Magrini, L.; Hourihane, A.; François, P.; Feltzing, S.; Gilmore, G.; Randich, S.; Asplund, M.; Bonifacio, P.; Drew, J. E.; Jeffries, R. D.; Micela, G.; Vallenari, A.; Alfaro, E. J.; Allende Prieto, C.; Babusiaux, C.; Bensby, T.; Bragaglia, A.; Flaccomio, E.; Hambly, N.; Korn, A. J.; Lanzafame, A. C.; Smiljanic, R.; Van Eck, S.; Walton, N. A.; Bayo, A.; Carraro, G.; Costado, M. T.; Damiani, F.; Edvardsson, B.; Franciosini, E.; Frasca, A.; Lewis, J.; Monaco, L.; Morbidelli, L.; Prisinzano, L.; Sacco, G. G.; Sbordone, L.; Sousa, S. G.; Zaggia, S.; Koch, A.

    2017-02-01

The Gaia-ESO survey (GES) is now in its fifth and last year of observations and has produced tens of thousands of high-quality spectra of stars in all Milky Way components. This paper presents the strategy behind the selection of astrophysical calibration targets, ensuring that all GES results on radial velocities, atmospheric parameters, and chemical abundance ratios will be both internally consistent and easily comparable with other literature results, especially from other large spectroscopic surveys and from Gaia. The calibration of GES is particularly delicate because of (i) the large space of parameters covered by its targets, ranging from dwarfs to giants, from O to M stars; these targets span a wide range of metallicities and also include fast rotators, emission line objects, and stars affected by veiling; (ii) the variety of observing setups, with different wavelength ranges and resolution; and (iii) the choice of analyzing the data with many different state-of-the-art methods, each stronger in a different region of the parameter space, which ensures a better understanding of systematic uncertainties. An overview of the GES calibration and homogenization strategy is also given, along with some examples of the usage and results of calibrators in GES iDR4, which is the fourth internal GES data release and will form the basis of the next GES public data release. The agreement between GES iDR4 recommended values and reference values for the calibrating objects is very satisfactory. The average offsets and spreads are generally compatible with the GES measurement errors, which in iDR4 data already meet the requirements set by the main GES scientific goals.
Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 188.B-3002 and 193.B-0936.Full Table 2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A5

  20. Constraining screened fifth forces with the electron magnetic moment

    NASA Astrophysics Data System (ADS)

    Brax, Philippe; Davis, Anne-Christine; Elder, Benjamin; Wong, Leong Khim

    2018-04-01

    Chameleon and symmetron theories serve as archetypal models for how light scalar fields can couple to matter with gravitational strength or greater, yet evade the stringent constraints from classical tests of gravity on Earth and in the Solar System. They do so by employing screening mechanisms that dynamically alter the scalar's properties based on the local environment. Nevertheless, these do not hide the scalar completely, as screening leads to a distinct phenomenology that can be well constrained by looking for specific signatures. In this work, we investigate how a precision measurement of the electron magnetic moment places meaningful constraints on both chameleons and symmetrons. Two effects are identified: First, virtual chameleons and symmetrons run in loops to generate quantum corrections to the intrinsic value of the magnetic moment—a common process widely considered in the literature for many scenarios beyond the Standard Model. A second effect, however, is unique to scalar fields that exhibit screening. A scalar bubblelike profile forms inside the experimental vacuum chamber and exerts a fifth force on the electron, leading to a systematic shift in the experimental measurement. In quantifying this latter effect, we present a novel approach that combines analytic arguments and a small number of numerical simulations to solve for the bubblelike profile quickly for a large range of model parameters. Taken together, both effects yield interesting constraints in complementary regions of parameter space. While the constraints we obtain for the chameleon are largely uncompetitive with those in the existing literature, this still represents the tightest constraint achievable yet from an experiment not originally designed to search for fifth forces. We break more ground with the symmetron, for which our results exclude a large and previously unexplored region of parameter space. 
Central to this achievement are the quantum correction terms, which are able to constrain symmetrons with masses in the range μ ∈[10-3.88,108] eV , whereas other experiments have hitherto only been sensitive to 1 or 2 orders of magnitude at a time.

  1. Distinguishing boson stars from black holes and neutron stars from tidal interactions in inspiraling binary systems

    NASA Astrophysics Data System (ADS)

    Sennett, Noah; Hinderer, Tanja; Steinhoff, Jan; Buonanno, Alessandra; Ossokine, Serguei

    2017-07-01

    Binary systems containing boson stars—self-gravitating configurations of a complex scalar field—can potentially mimic black holes or neutron stars as gravitational-wave sources. We investigate the extent to which tidal effects in the gravitational-wave signal can be used to discriminate between these standard sources and boson stars. We consider spherically symmetric boson stars within two classes of scalar self-interactions: an effective-field-theoretically motivated quartic potential and a solitonic potential constructed to produce very compact stars. We compute the tidal deformability parameter characterizing the dominant tidal imprint in the gravitational-wave signals for a large span of the parameter space of each boson star model, covering the entire space in the quartic case, and an extensive portion of interest in the solitonic case. We find that the tidal deformability for boson stars with a quartic self-interaction is bounded below by Λmin≈280 and for those with a solitonic interaction by Λmin≈1.3 . We summarize our results as ready-to-use fits for practical applications. Employing a Fisher matrix analysis, we estimate the precision with which Advanced LIGO and third-generation detectors can measure these tidal parameters using the inspiral portion of the signal. We discuss a novel strategy to improve the distinguishability between black holes/neutron stars and boson stars by combining tidal deformability measurements of each compact object in a binary system, thereby eliminating the scaling ambiguities in each boson star model. Our analysis shows that current-generation detectors can potentially distinguish boson stars with quartic potentials from black holes, as well as from neutron-star binaries if they have either a large total mass or a large (asymmetric) mass ratio. 
Discriminating solitonic boson stars from black holes using only tidal effects during the inspiral will be difficult with Advanced LIGO, but third-generation detectors should be able to distinguish between binary black holes and these binary boson stars.
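
    The Fisher-matrix forecast mentioned above can be illustrated with a toy Gaussian-noise model. The sketch below is not the paper's waveform analysis: the model, gradients, and noise levels are invented for illustration, and only the Fisher formalism itself (F_ij = Σ_k ∂μ_k/∂θ_i ∂μ_k/∂θ_j / σ_k², with the inverse giving the Cramér-Rao error forecast) is standard.

```python
import numpy as np

def fisher_matrix(model_grad, noise_var):
    """Fisher matrix F_ij = sum_k d(mu_k)/d(theta_i) d(mu_k)/d(theta_j) / sigma_k^2
    for a Gaussian-noise measurement model (a toy stand-in for the paper's
    frequency-domain waveform inner product)."""
    g = np.asarray(model_grad)          # shape (n_params, n_data)
    w = 1.0 / np.asarray(noise_var)     # inverse noise variance per data point
    return (g * w) @ g.T

# Toy example: linear model mu = a + b*x sampled at a few points
x = np.linspace(0.0, 1.0, 5)
grads = np.vstack([np.ones_like(x), x])   # d(mu)/da, d(mu)/db
F = fisher_matrix(grads, noise_var=np.full_like(x, 0.01))
cov = np.linalg.inv(F)                    # Cramer-Rao bound on (a, b)
sigma_b = np.sqrt(cov[1, 1])              # forecast 1-sigma error on b
```

    In a real forecast the gradients would be numerical derivatives of the waveform with respect to masses, spins, and tidal deformabilities, and the weighting would be the detector noise power spectral density.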

  2. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2014-10-01

    The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as ``signal'' and ``background'' and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameter space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such a limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function reproduces the reference prior extremely well for any background prior. Thus, it can be useful in applications requiring the reference prior to be evaluated a very large number of times.
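
    The limiting case can be sketched numerically. The prior form π(s) ∝ 1/√(s + b) used below is an assumption consistent with the abstract's description (perfectly known background b); consult the paper for the exact expressions.

```python
import numpy as np

def reference_posterior(n_obs, bkg, s_grid):
    """Posterior density for the signal intensity s given n_obs total counts
    and a perfectly known expected background bkg, under the assumed limiting
    reference prior pi(s) ~ 1/sqrt(s + bkg).  Returns a density normalized
    on the (uniform) grid s_grid."""
    lam = s_grid + bkg                                   # total Poisson mean
    log_post = n_obs * np.log(lam) - lam - 0.5 * np.log(lam)
    post = np.exp(log_post - log_post.max())             # avoid overflow
    post /= post.sum() * (s_grid[1] - s_grid[0])         # normalize numerically
    return post

s = np.linspace(0.0, 30.0, 3001)
post = reference_posterior(n_obs=7, bkg=2.0, s_grid=s)
ds = s[1] - s[0]
s_mean = np.sum(s * post) * ds   # posterior mean signal intensity
```

    Note the Gamma-like shape: in λ = s + b the unnormalized posterior is λ^(n-1/2) e^(-λ), consistent with the abstract's statement that the approximate reference posterior is a Gamma density with parameters tied to the observed counts.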

  3. Anisotropic magnification distortion of the 3D galaxy correlation. II. Fourier and redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hui Lam; Department of Physics, Columbia University, New York, New York 10027; Institute of Theoretical Physics, Chinese University of Hong Kong

    2008-03-15

    In paper I of this series we discuss how magnification bias distorts the 3D correlation function by enhancing the observed correlation in the line-of-sight (LOS) orientation, especially on large scales. This lensing anisotropy is distinctive, making it possible to separately measure the galaxy-galaxy, galaxy-magnification and magnification-magnification correlations. Here we extend the discussion to the power spectrum and also to redshift space. In real space, pairs oriented close to the LOS direction are not protected against nonlinearity even if the pair separation is large; this is because nonlinear fluctuations can enter through gravitational lensing at a small transverse separation (i.e., impact parameter). The situation in Fourier space is different: by focusing on a small wave number k, as is usually done, linearity is guaranteed because both the LOS and transverse wave numbers must be small. This is why magnification distortion of the galaxy correlation appears less severe in Fourier space. Nonetheless, the effect is non-negligible, especially for the transverse Fourier modes, and should be taken into account in interpreting precision measurements of the galaxy power spectrum, for instance those that focus on the baryon oscillations. The lensing-induced anisotropy of the power spectrum has a shape that is distinct from the better-known redshift-space anisotropies due to peculiar motions and the Alcock-Paczynski effect. The lensing anisotropy is highly localized in Fourier space while redshift-space distortions are more spread out. This means that one could separate the magnification bias component in real observations, implying that it is potentially possible to perform a gravitational lensing measurement without measuring galaxy shapes.

  4. CP4 miracle: shaping Yukawa sector with CP symmetry of order four

    NASA Astrophysics Data System (ADS)

    Ferreira, P. M.; Ivanov, Igor P.; Jiménez, Enrique; Pasechnik, Roman; Serôdio, Hugo

    2018-01-01

    We explore the phenomenology of a unique three-Higgs-doublet model based on the single CP symmetry of order 4 (CP4) without any accidental symmetries. The CP4 symmetry is imposed on the scalar potential and Yukawa interactions, strongly shaping both sectors of the model and leading to a very characteristic phenomenology. The scalar sector is analyzed in detail, and in the Yukawa sector we list all possible CP4-symmetric structures which do not run into immediate conflict with experiment, namely, do not lead to massless or mass-degenerate quarks nor to insufficient mixing or CP -violation in the CKM matrix. We show that the parameter space of the model, although very constrained by CP4, is large enough to comply with the electroweak precision data and the LHC results for the 125 GeV Higgs boson phenomenology, as well as to perfectly reproduce all fermion masses, mixing, and CP violation. Despite the presence of flavor changing neutral currents mediated by heavy Higgs scalars, we find through a parameter space scan many points which accurately reproduce the kaon CP -violating parameter ɛ K as well as oscillation parameters in K and B ( s) mesons. Thus, CP4 offers a novel minimalistic framework for building models with very few assumptions, sufficient predictive power, and rich phenomenology yet to be explored.

  5. Estimation of gloss from rough surface parameters

    NASA Astrophysics Data System (ADS)

    Simonsen, Ingve; Larsen, Åge G.; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin

    2005-12-01

    Gloss is a quantity used in the optical industry to quantify and categorize materials according to how well they scatter light specularly. With the aid of phase perturbation theory, we derive an approximate expression for this quantity for a one-dimensional randomly rough surface. It is demonstrated that gloss depends in an exponential way on two dimensionless quantities that are associated with the surface randomness: the root-mean-square roughness times the perpendicular momentum transfer for the specular direction, and a correlation-function-dependent factor times a lateral momentum variable associated with the collection angle. Rigorous Monte Carlo simulations are used to assess the quality of this approximation, and good agreement is observed over large regions of parameter space.
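
    The exponential roughness dependence can be illustrated with the standard leading-order specular damping factor exp(-g), g = (2kσ cos θ)². This is a simplified stand-in for the paper's gloss formula, which also carries the correlation-function-dependent term omitted here.

```python
import numpy as np

def specular_damping(wavelength, rms_roughness, incidence_deg):
    """Leading-order damping of the specular beam from a randomly rough
    surface: exp(-g) with g = (2 k sigma cos(theta))^2, where sigma is the
    rms roughness and 2 k cos(theta) the perpendicular momentum transfer.
    (Simplified; the full gloss expression in the paper has a second,
    correlation-dependent dimensionless factor.)"""
    k = 2.0 * np.pi / wavelength
    theta = np.radians(incidence_deg)
    g = (2.0 * k * rms_roughness * np.cos(theta)) ** 2
    return np.exp(-g)

# A smooth surface (sigma << wavelength) keeps nearly all specular power...
hi = specular_damping(wavelength=0.5, rms_roughness=0.005, incidence_deg=20.0)
# ...while a tenfold rougher one loses most of it, exponentially fast.
lo = specular_damping(wavelength=0.5, rms_roughness=0.05, incidence_deg=20.0)
```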

  6. Three-meter telescope study

    NASA Technical Reports Server (NTRS)

    Wissinger, A.; Scott, R. M.; Peters, W.; Augustyn, W., Jr.; Arnold, R.; Offner, A.; Damast, M.; Boyce, B.; Kinnaird, R.; Mangus, J. D.

    1971-01-01

    A means is presented whereby the effect of various changes in the most important parameters of a three-meter-aperture space astronomy telescope can be evaluated to determine design trends and to optimize the optical design configuration. Methods are defined for evaluating the theoretical optical performance of axisymmetric, centrally obscured telescopes based upon the intended astronomy research usage. A series of design parameter variations is presented to determine the optimum telescope configuration. The design optimum requires very fast primary mirrors, so the study also examines the current state of the art in fabricating large, fast primary mirrors. The conclusion is that a 3-meter primary mirror having a focal ratio as low as f/2 is feasible using currently established techniques.

  7. The Infobiotics Workbench: an integrated in silico modelling platform for Systems and Synthetic Biology.

    PubMed

    Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio

    2011-12-01

    The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.
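
    As a minimal illustration of the stochastic simulation the workbench offers (not its actual API), here is a direct-method Gillespie simulation of a one-species birth-death reaction system; reaction names and rates are invented for the example.

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Direct-method stochastic simulation (Gillespie SSA) for the reactions
    0 -> X (propensity k_birth) and X -> 0 (propensity k_death * X).
    Returns the trajectory as (times, counts) lists."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a1, a2 = k_birth, k_death * x      # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)           # exponential waiting time
        x += 1 if rng.random() * a0 < a1 else -1   # pick which reaction fired
        times.append(t)
        counts.append(x)
    return times, counts

times, counts = gillespie_birth_death(k_birth=10.0, k_death=1.0, x0=0, t_end=50.0)
# After an initial transient, counts fluctuate around k_birth / k_death.
```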

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marekova, Elisaveta

    Series of relatively large earthquakes in different regions of the Earth are studied. The regions chooses are of a high seismic activity and has a good contemporary network for recording of the seismic events along them. The main purpose of this investigation is the attempt to describe analytically the seismic process in the space and time. We are considering the statistical distributions the distances and the times between consecutive earthquakes (so called pair analysis). Studies conducted on approximating the statistical distribution of the parameters of consecutive seismic events indicate the existence of characteristic functions that describe them best. Such amore » mathematical description allows the distributions of the examined parameters to be compared to other model distributions.« less

  9. Coupled Boltzmann computation of mixed axion neutralino dark matter in the SUSY DFSZ axion model

    NASA Astrophysics Data System (ADS)

    Bae, Kyu Jung; Baer, Howard; Lessa, Andre; Serce, Hasan

    2014-10-01

    The supersymmetrized DFSZ axion model is highly motivated not only because it offers solutions to both the gauge hierarchy and strong CP problems, but also because it provides a solution to the SUSY μ-problem which naturally allows for a Little Hierarchy. We compute the expected mixed axion-neutralino dark matter abundance for the SUSY DFSZ axion model in two benchmark cases—a natural SUSY model with a standard neutralino underabundance (SUA) and an mSUGRA/CMSSM model with a standard overabundance (SOA). Our computation implements coupled Boltzmann equations which track the radiation density along with neutralino, axion, axion CO (produced via coherent oscillations), saxion, saxion CO, axino and gravitino densities. In the SUSY DFSZ model, axions, axinos and saxions go through the process of freeze-in—in contrast to freeze-out or out-of-equilibrium production as in the SUSY KSVZ model—resulting in thermal yields which are largely independent of the re-heat temperature. We find the SUA case with suppressed saxion-axion couplings (ξ = 0) only admits solutions for PQ breaking scale f_a ≲ 6×10^12 GeV, where the bulk of parameter space tends to be axion-dominated. For SUA with allowed saxion-axion couplings (ξ = 1), f_a values up to ~10^14 GeV are allowed. For the SOA case, almost all of SUSY DFSZ parameter space is disallowed by a combination of overproduction of dark matter, overproduction of dark radiation or violation of BBN constraints. An exception occurs at very large f_a ~ 10^15-10^16 GeV, where large entropy dilution from CO-produced saxions leads to allowed models.
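
    The structure of such coupled Boltzmann equations can be sketched with a schematic two-component system: a matter-like species decaying into radiation in an expanding background. This toy (units, rates and initial conditions are invented) only illustrates the dilution-plus-injection form of the equations and the resulting entropy dilution, not the paper's full multi-species network.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_boltzmann(t, rho, gamma):
    """Schematic coupled Boltzmann system in units with 8*pi*G/3 = 1:
    a matter-like species (e.g. coherent saxion oscillations) decaying with
    width gamma into radiation, both diluted by Hubble expansion."""
    rho_m, rho_r = rho
    H = np.sqrt(max(rho_m + rho_r, 0.0))        # Friedmann equation
    drho_m = -3.0 * H * rho_m - gamma * rho_m   # dilution + decay
    drho_r = -4.0 * H * rho_r + gamma * rho_m   # dilution + energy injection
    return [drho_m, drho_r]

sol = solve_ivp(coupled_boltzmann, (0.0, 50.0), [1.0, 1e-3],
                args=(0.2,), rtol=1e-8, atol=1e-12)
rho_m_end, rho_r_end = sol.y[:, -1]
# The decaying species is exponentially depleted; its energy reappears as
# (diluted) radiation, the entropy-injection effect mentioned in the abstract.
```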

  10. Statistical analysis of mesoscale rainfall: Dependence of a random cascade generator on large-scale forcing

    NASA Technical Reports Server (NTRS)

    Over, Thomas, M.; Gupta, Vijay K.

    1994-01-01

    Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
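
    A single-parameter random cascade of the kind described can be sketched with the discrete "beta model" (generator chosen here for illustration; the generator actually fitted in the paper may differ). Each cascade level subdivides every cell in two and multiplies it by an i.i.d. generator W with E[W] = 1, producing the intermittent, spatially clustered fields characteristic of rainfall.

```python
import numpy as np

def beta_cascade(n_levels, beta, rng):
    """One realization of a 1D 'beta-model' random cascade with branching
    number b = 2.  Generator: W = b**beta with probability b**(-beta),
    else W = 0, so that E[W] = 1.  The single parameter beta controls the
    intermittency (fraction of zero-rain cells)."""
    field = np.ones(1)
    for _ in range(n_levels):
        field = np.repeat(field, 2)                     # subdivide each cell
        w = np.where(rng.random(field.size) < 2.0 ** -beta,
                     2.0 ** beta, 0.0)                  # i.i.d. cascade generator
        field = field * w
    return field

rng = np.random.default_rng(42)
field = beta_cascade(n_levels=10, beta=0.3, rng=rng)
wet_fraction = np.mean(field > 0)   # intermittency: fraction of rainy cells
mean_intensity = field.mean()       # ensemble mean is 1 by construction
```

    Fitting beta to radar scans at several spatial averaging scales, and regressing it against the large-scale average rain rate, is the kind of analysis the paper carries out.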

  11. Reconnaissance of the β Pictoris system down to 1.75 AU with the L'-band vector vortex coronagraph on VLT/NACO

    NASA Astrophysics Data System (ADS)

    Milli, J.; Absil, O.; Mawet, D.; Lagrange, A.-M.

    2013-09-01

    High contrast imaging has thoroughly combed through the limited parameter space accessible with first-generation ground-based adaptive optics instruments and the HST. Only a few objects were discovered, and many non-detections reported and statistically interpreted. The field is now in need of a technological breakthrough. We aim at opening a new parameter space with first-generation systems such as NACO at the Very Large Telescope, by providing ground-breaking inner working angle (IWA) capabilities in the L' band. This mid-infrared wavelength range is a sweet spot for high contrast coronagraphy since the planet-to-star brightness ratio is favorable, while the Strehl ratio is naturally higher. An annular groove phase mask (AGPM) vector vortex coronagraph optimized for the L' band, made out of diamond subwavelength gratings, has been manufactured and qualified in the lab. The AGPM enables high contrast imaging at very small IWA (here 0″.09), potentially being the key to a new parameter space. Here we present the results of the installation and successful commissioning of an L'-band AGPM on VLT/NACO. During a recent science verification run, we imaged the inner regions of Beta Pictoris down to the previously unexplored projected radius of 1.75 AU with unprecedented point source sensitivity. The disk was also clearly resolved down to its inner truncation. The new NACO mode is an opportunity to introduce a more rigorous framework for deriving detection limits at very small angles, which is also relevant for SPHERE and GPI and every high contrast imaging instrument with small IWA ambitions. Indeed, classical tools assuming Gaussian statistics, perfectly valid at large separations, lose significance close to the center simply because the sample size decreases dramatically (fewer resolution elements at a given radius). Moreover, the probability density function (PDF) of speckle noise and the associated confidence level for detection depend on radius. 
ADI was shown to transform the speckles' modified Rician PDF into a quasi-Gaussian PDF at large separations, but it is expected that this property of ADI does not hold true at small angles. Finally, the flux attenuation induced by ADI, potentially significant at small angles, does not scale linearly with the companion brightness, which makes its calibration more difficult.
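
    One common remedy for the small-sample problem at small angles is to replace the Gaussian detection threshold with a Student-t one, penalizing radii where only a few independent resolution elements exist. The sketch below follows that standard small-sample correction; the exact framework advocated for NACO, SPHERE and GPI may differ in its details.

```python
import numpy as np
from scipy import stats

def small_sample_threshold(separation_fwhm, fpf=2.867e-7):
    """Detection threshold, in units of the local noise standard deviation,
    at a separation given in resolution elements (FWHM), for a fixed
    false-positive fraction fpf (default: the Gaussian 5-sigma tail).
    Uses a Student-t correction for the small number of independent
    speckle realizations available at a given radius: at large separation
    it approaches the familiar Gaussian threshold, close in it rises."""
    n = int(round(2.0 * np.pi * separation_fwhm))   # resolution elements on the ring
    if n < 3:
        raise ValueError("too few resolution elements at this separation")
    # t quantile with n-2 dof, inflated for comparing one element to n-1 others
    return stats.t.isf(fpf, n - 2) * np.sqrt(1.0 + 1.0 / (n - 1))

tau_close = small_sample_threshold(2.0)    # ~2 resolution elements from the star
tau_far = small_sample_threshold(100.0)    # far out: essentially Gaussian 5-sigma
```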

  12. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
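
    The basic Levenberg-Marquardt iteration being accelerated can be sketched as follows. This dense-solve toy omits the paper's Krylov projection and subspace recycling; it simply shows the damped normal equations (JᵀJ + λI)δ = -Jᵀr and the adaptive damping that those techniques speed up.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam0=1e-2, n_iter=50, tol=1e-10):
    """Basic Levenberg-Marquardt loop: solve (J^T J + lam*I) dx = -J^T r,
    accept the step if it reduces the sum of squared residuals, and adapt
    the damping parameter lam accordingly."""
    x, lam = np.asarray(x0, float).copy(), lam0
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.5     # accept step, relax damping
        else:
            lam *= 10.0                    # reject step, increase damping
    return x

# Toy nonlinear least squares: recover k from noiseless data y = exp(-k*t)
t = np.linspace(0.0, 2.0, 20)
y = np.exp(-1.3 * t)
res = lambda p: np.exp(-p[0] * t) - y
jac = lambda p: (-t * np.exp(-p[0] * t)).reshape(-1, 1)
k_fit = levenberg_marquardt(res, jac, x0=[0.5])
```

    In the highly parameterized setting of the paper, the dense solve above is the bottleneck; replacing it with a Krylov-subspace solve, and reusing that subspace across damping parameters, is what yields the reported speed-up.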

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  14. 14 CFR 1214.813 - Computation of sharing and pricing parameters.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Computation of sharing and pricing parameters. 1214.813 Section 1214.813 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SPACE FLIGHT Reimbursement for Spacelab Services § 1214.813 Computation of sharing and pricing...

  15. On the analysis of large data sets

    NASA Astrophysics Data System (ADS)

    Ruch, Gerald T., Jr.

    We present a set of tools and techniques for performing detailed comparisons between computational models with high dimensional parameter spaces and large sets of archival data. By combining a principal component analysis of a large grid of samples from the model with an artificial neural network, we create a powerful data visualization tool as well as a way to robustly recover physical parameters from a large set of experimental data. Our techniques are applied in the context of circumstellar disks, the likely sites of planetary formation. An analysis is performed applying the two layer approximation of Chiang et al. (2001) and Dullemond et al. (2001) to the archive created by the Spitzer Space Telescope Cores to Disks Legacy program. We find two populations of disk sources. The first population is characterized by the lack of a puffed up inner rim while the second population appears to contain an inner rim which casts a shadow across the disk. The first population also exhibits a trend of increasing spectral index while the second population exhibits a decreasing trend in the strength of the 20 μm silicate emission feature. We also present images of the giant molecular cloud W3 obtained with the Infrared Array Camera (IRAC) and the Multiband Imaging Photometer (MIPS) on board the Spitzer Space Telescope. The images encompass the star forming regions W3 Main, W3(OH), and a region that we refer to as the Central Cluster which encloses the emission nebula IC 1795. We present a star count analysis of the point sources detected in W3. The star count analysis shows that the stellar population of the Central Cluster, when compared to that in the background, contains an over density of sources. The Central Cluster also contains an excess of sources with colors consistent with Class II Young Stellar Objects (YSOs). An analysis of the color-color diagrams also reveals a large number of Class II YSOs in the Central Cluster. 
Our results suggest that an earlier epoch of star formation created the Central Cluster, created a cavity, and triggered the active star formation in the W3 Main and W3(OH) regions. We also detect a new outflow and its candidate exciting star.
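
    The PCA-plus-lookup part of such a pipeline can be sketched in a few lines. Here a nearest-neighbor search in the principal-component space stands in for the work's artificial neural network, and the two-parameter "spectra" are invented for illustration.

```python
import numpy as np

def pca_basis(models, n_components):
    """Mean spectrum and leading principal components of a model grid
    (rows = model spectra), via SVD of the mean-subtracted grid."""
    mean = models.mean(axis=0)
    _, _, vt = np.linalg.svd(models - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(spectra, mean, components):
    """Low-dimensional PCA coefficients of one or more spectra."""
    return (spectra - mean) @ components.T

# Toy model grid: "spectra" generated from two physical parameters (a, b)
rng = np.random.default_rng(1)
wave = np.linspace(0.0, 1.0, 100)
params = rng.uniform(0.5, 2.0, size=(500, 2))
grid = np.array([a * np.exp(-wave / b) for a, b in params])

mean, comps = pca_basis(grid, n_components=3)
coeffs = project(grid, mean, comps)

# Recover the parameters of a new "observation" by nearest neighbor in PCA
# space (the work trains a neural network on these coefficients instead).
obs = 1.2 * np.exp(-wave / 0.9)
c_obs = project(obs, mean, comps)
best = np.argmin(np.sum((coeffs - c_obs) ** 2, axis=1))
a_est, b_est = params[best]
```

    Because the grid is smooth in its parameters, proximity in the compressed coefficient space tracks proximity in parameter space, which is what makes both the visualization and the parameter recovery work.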

  16. Manufacture of multi-layer woven preforms

    NASA Technical Reports Server (NTRS)

    Mohamed, M. H.; Zhang, Z.; Dickinson, L.

    1988-01-01

    This paper reviews current three-dimensional weaving processes and discusses a process developed at the Mars Mission Research Center of North Carolina State University to weave three-dimensional multilayer fabrics. The fabrics may vary in size and complexity from simple panels to T-section or I-section beams to large stiffened panels. Parameters such as fiber orientation, volume fraction of the fiber required in each direction, yarn spacings or density, etc., which determine the physical properties of the composites are discussed.

  17. R-parity violation in SU(5)

    DOE PAGES

    Bajc, Borut; Di Luzio, Luca

    2015-07-23

    We show that judiciously chosen R-parity violating terms in the minimal renormalizable supersymmetric SU(5) are able to correct all the phenomenologically wrong mass relations between down quarks and charged leptons. The model can accommodate neutrino masses as well. One of the most striking consequences is a large mixing between the electron and the Higgsino. Finally, we show that this can still be in accord with data in some regions of the parameter space and possibly falsified in future experiments.

  18. Direct search for charged higgs bosons in decays of top quarks.

    PubMed

    Abazov, V M; Abbott, B; Abdesselam, A; Abolins, M; Abramov, V; Acharya, B S; Adams, D L; Adams, M; Ahmed, S N; Alexeev, G D; Alves, G A; Amos, N; Anderson, E W; Baarmand, M M; Babintsev, V V; Babukhadia, L; Bacon, T C; Baden, A; Baldin, B; Balm, P W; Banerjee, S; Barberis, E; Baringer, P; Barreto, J; Bartlett, J F; Bassler, U; Bauer, D; Bean, A; Begel, M; Belyaev, A; Beri, S B; Bernardi, G; Bertram, I; Besson, A; Beuselinck, R; Bezzubov, V A; Bhat, P C; Bhatnagar, V; Bhattacharjee, M; Blazey, G; Blessing, S; Boehnlein, A; Bojko, N I; Borcherding, F; Bos, K; Brandt, A; Breedon, R; Briskin, G; Brock, R; Brooijmans, G; Bross, A; Buchholz, D; Buehler, M; Buescher, V; Burtovoi, V S; Butler, J M; Canelli, F; Carvalho, W; Casey, D; Casilum, Z; Castilla-Valdez, H; Chakraborty, D; Chan, K M; Chekulaev, S V; Cho, D K; Choi, S; Chopra, S; Christenson, J H; Chung, M; Claes, D; Clark, A R; Cochran, J; Coney, L; Connolly, B; Cooper, W E; Coppage, D; Cummings, M A C; Cutts, D; Davis, G A; Davis, K; De, K; de Jong, S J; Del Signore, K; Demarteau, M; Demina, R; Demine, P; Denisov, D; Denisov, S P; Desai, S; Diehl, H T; Diesburg, M; Di Loreto, G; Doulas, S; Draper, P; Ducros, Y; Dudko, L V; Duensing, S; Duflot, L; Dugad, S R; Dyshkant, A; Edmunds, D; Ellison, J; Elvira, V D; Engelmann, R; Eno, S; Eppley, G; Ermolov, P; Eroshin, O V; Estrada, J; Evans, H; Evdokimov, V N; Fahland, T; Feher, S; Fein, D; Ferbel, T; Filthaut, F; Fisk, H E; Fisyak, Y; Flattum, E; Fleuret, F; Fortner, M; Frame, K C; Fuess, S; Gallas, E; Galyaev, A N; Gao, M; Gavrilov, V; Genik, R J; Genser, K; Gerber, C E; Gershtein, Y; Gilmartin, R; Ginther, G; Gómez, B; Gómez, G; Goncharov, P I; González Solís, J L; Gordon, H; Goss, L T; Gounder, K; Goussiou, A; Graf, N; Graham, G; Grannis, P D; Green, J A; Greenlee, H; Grinstein, S; Groer, L; Grünendahl, S; Gupta, A; Gurzhiev, S N; Gutierrez, G; Gutierrez, P; Hadley, N J; Haggerty, H; Hagopian, S; Hagopian, V; Hall, R E; Hanlet, P; Hansen, S; Hauptman, J M; Hays, C; 
Hebert, C; Hedin, D; Heinson, A P; Heintz, U; Heuring, T; Hildreth, M D; Hirosky, R; Hobbs, J D; Hoeneisen, B; Huang, Y; Illingworth, R; Ito, A S; Jaffré, M; Jain, S; Jesik, R; Johns, K; Johnson, M; Jonckheere, A; Jones, M; Jöstlein, H; Juste, A; Kahn, S; Kajfasz, E; Kalinin, A M; Karmanov, D; Karmgard, D; Kehoe, R; Kharchilava, A; Kim, S K; Klima, B; Knuteson, B; Ko, W; Kohli, J M; Kostritskiy, A V; Kotcher, J; Kotwal, A V; Kozelov, A V; Kozlovsky, E A; Krane, J; Krishnaswamy, M R; Krivkova, P; Krzywdzinski, S; Kubantsev, M; Kuleshov, S; Kulik, Y; Kunori, S; Kupco, A; Kuznetsov, V E; Landsberg, G; Leflat, A; Leggett, C; Lehner, F; Li, J; Li, Q Z; Lima, J G R; Lincoln, D; Linn, S L; Linnemann, J; Lipton, R; Lucotte, A; Lueking, L; Lundstedt, C; Luo, C; Maciel, A K A; Madaras, R J; Malyshev, V L; Manankov, V; Mao, H S; Marshall, T; Martin, M I; Martin, R D; Mauritz, K M; May, B; Mayorov, A A; McCarthy, R; McDonald, J; McMahon, T; Melanson, H L; Merkin, M; Merritt, K W; Miao, C; Miettinen, H; Mihalcea, D; Mishra, C S; Mokhov, N; Mondal, N K; Montgomery, H E; Moore, R W; Mostafa, M; da Motta, H; Nagy, E; Nang, F; Narain, M; Narasimham, V S; Neal, H A; Negret, J P; Negroni, S; Nunnemann, T; O'Neil, D; Oguri, V; Olivier, B; Oshima, N; Padley, P; Pan, L J; Papageorgiou, K; Para, A; Parashar, N; Partridge, R; Parua, N; Paterno, M; Patwa, A; Pawlik, B; Perkins, J; Peters, M; Peters, O; Pétroff, P; Piegaia, R; Piekarz, H; Pope, B G; Popkov, E; Prosper, H B; Protopopescu, S; Qian, J; Raja, R; Rajagopalan, S; Ramberg, E; Rapidis, P A; Reay, N W; Reucroft, S; Rha, J; Ridel, M; Rijssenbeek, M; Rockwell, T; Roco, M; Rubinov, P; Ruchti, R; Rutherfoord, J; Sabirov, B M; Santoro, A; Sawyer, L; Schamberger, R D; Schellman, H; Schwartzman, A; Sen, N; Shabalina, E; Shivpuri, R K; Shpakov, D; Shupe, M; Sidwell, R A; Simak, V; Singh, H; Singh, J B; Sirotenko, V; Slattery, P; Smith, E; Smith, R P; Snihur, R; Snow, G R; Snow, J; Snyder, S; Solomon, J; Sorín, V; Sosebee, M; Sotnikova, N; 
Soustruznik, K; Souza, M; Stanton, N R; Steinbrück, G; Stephens, R W; Stichelbaut, F; Stoker, D; Stolin, V; Stoyanova, D A; Strauss, M; Strovink, M; Stutte, L; Sznajder, A; Taylor, W; Tentindo-Repond, S; Tripathi, S M; Trippe, T G; Turcot, A S; Tuts, P M; van Gemmeren, P; Vaniev, V; Van Kooten, R; Varelas, N; Vertogradov, L S; Volkov, A A; Vorobiev, A P; Wahl, H D; Wang, H; Wang, Z-M; Warchol, J; Watts, G; Wayne, M; Weerts, H; White, A; White, J T; Whiteson, D; Wightman, J A; Wijngaarden, D A; Willis, S; Wimpenny, S J; Womersley, J; Wood, D R; Yamada, R; Yamin, P; Yasuda, T; Yatsunenko, Y A; Yip, K; Youssef, S; Yu, J; Yu, Z; Zanabria, M; Zheng, H; Zhou, Z; Zielinski, M; Zieminska, D; Zieminski, A; Zutshi, V; Zverev, E G; Zylberstejn, A

    2002-04-15

    We present a search for charged Higgs bosons in decays of pair-produced top quarks in pp̄ collisions at √s = 1.8 TeV recorded by the D0 detector at the Fermilab Tevatron collider. With no evidence for signal, we exclude most regions of the (M(H±), tan β) parameter space where the decay t → H⁺b has a branching fraction > 0.36 and B(H± → τν_τ) is large.

  19. Artificial Intelligence in planetary spectroscopy

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2017-10-01

    The field of exoplanetary spectroscopy is as fast moving as it is new. Analysing currently available observations of exoplanetary atmospheres often invokes large and correlated parameter spaces that can be difficult to map or constrain. This is true both for the data analysis of observations and for the theoretical modelling of their atmospheres. Issues of low signal-to-noise data and large, non-linear parameter spaces are nothing new and are commonly found in many fields of engineering and the physical sciences. Recent years have seen vast improvements in statistical data analysis and machine learning that have revolutionised fields as diverse as telecommunication, pattern recognition, medical physics and cosmology. In many aspects, the data mining and non-linearity challenges encountered in other data-intensive fields are directly transferable to the field of extrasolar planets. In this conference, I will discuss how deep neural networks can be designed to facilitate solving said issues both in exoplanet atmospheres and for atmospheres in our own solar system. I will present a deep belief network, RobERt (Robotic Exoplanet Recognition), able to learn to recognise exoplanetary spectra and provide artificial intelligence to state-of-the-art atmospheric retrieval algorithms. Furthermore, I will present a new deep convolutional network that is able to map planetary surface compositions using hyper-spectral imaging, and demonstrate its uses on Cassini-VIMS data of Saturn.

  20. Deconstructed transverse mass variables

    DOE PAGES

    Ismail, Ahmed; Schwienhorst, Reinhard; Virzi, Joseph S.; ...

    2015-04-02

    Traditional searches for R-parity-conserving natural supersymmetry (SUSY) require large transverse mass and missing energy cuts to separate the signal from large backgrounds. SUSY models with compressed spectra inherently produce signal events with small amounts of missing energy that are hard to explore. We use this difficulty to motivate the construction of "deconstructed" transverse mass variables, which are designed to preserve information on both the norm and direction of the missing momentum. Here, we demonstrate the effectiveness of these variables in searches for the pair production of supersymmetric top-quark partners which subsequently decay into a final state with an isolated lepton, jets and missing energy. We show that the use of deconstructed transverse mass variables extends the accessible compressed-spectra parameter space beyond the region probed by traditional methods. The parameter space can be further expanded to neutralino masses that are larger than the difference between the stop and top masses. In addition, we discuss how these variables allow for novel searches for single stop production, in order to directly probe unconstrained stealth stops in the small stop- and neutralino-mass regime. We also demonstrate the utility of these variables for generic gluino and stop searches in all-hadronic final states. Overall, we demonstrate that deconstructed transverse variables are essential to any search that aims to maximize signal separation from the background when the signal has undetected particles in the final state.

  1. Real space channelization for generic DBT system image quality evaluation with channelized Hotelling observer

    NASA Astrophysics Data System (ADS)

    Petrov, Dimitar; Cockmartin, Lesley; Marshall, Nicholas; Vancoillie, Liesbeth; Young, Kenneth; Bosmans, Hilde

    2017-03-01

    Digital breast tomosynthesis (DBT) is a relatively new 3D mammography technique that promises better detection of low-contrast masses than conventional 2D mammography. The parameter space for DBT is large, however, and finding an optimal balance between dose and image quality remains challenging. Given the large number of conditions and images required in optimization studies, the use of human observers (HO) is time consuming and certainly not feasible for the tuning of all degrees of freedom. Our goal was to develop a model observer (MO) that could predict human detectability for clinically relevant details embedded within a newly developed structured phantom for DBT applications. DBT series were acquired on GE SenoClaire 3D, Giotto Class, Fujifilm AMULET Innovality and Philips MicroDose systems at different dose levels; Siemens Inspiration DBT acquisitions were reconstructed with different algorithms; and a larger set of DBT series was acquired on a Hologic Dimensions system for initial reproducibility testing. A channelized Hotelling observer (CHO) with Gabor channels was developed. The parameters of the Gabor channels were tuned on all systems at standard scanning conditions, and the candidate that produced the best fit for all systems was chosen. After tuning, the MO was applied to all systems and conditions. Linear regression lines between MO and HO scores were calculated, giving correlation coefficients between 0.87 and 0.99 for all tested conditions.
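    The CHO pipeline described above can be sketched in a few lines: images are projected onto a small set of Gabor channels, and the Hotelling template is built from the channelized statistics. Everything below (channel frequencies, blob signal, noise model) is an illustrative stand-in, not the study's actual phantom data or tuning:

    ```python
    import numpy as np

    def gabor_channel(n, freq, theta):
        """One Gabor channel: Gaussian-windowed cosine grating, flattened to a vector."""
        y, x = np.mgrid[:n, :n] - n / 2
        width = 4.0 / freq                     # window width tied to the passband (illustrative)
        envelope = np.exp(-4 * np.log(2) * (x**2 + y**2) / width**2)
        carrier = np.cos(2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))
        return (envelope * carrier).ravel()

    def cho_detectability(absent, present, channels):
        """CHO detectability index d' from signal-absent/present image stacks."""
        v0 = absent.reshape(len(absent), -1) @ channels.T    # channelized responses
        v1 = present.reshape(len(present), -1) @ channels.T
        ds = v1.mean(axis=0) - v0.mean(axis=0)               # mean channel signal
        K = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
        w = np.linalg.solve(K, ds)                           # Hotelling template
        return float(np.sqrt(ds @ w))

    # Toy demonstration: a low-contrast blob in white noise.
    rng = np.random.default_rng(1)
    n = 32
    channels = np.stack([gabor_channel(n, f, th)
                         for f in (0.06, 0.12, 0.25) for th in (0.0, np.pi / 2)])
    yy, xx = np.mgrid[:n, :n] - n / 2
    blob = 0.4 * np.exp(-(xx**2 + yy**2) / 30.0)
    absent = rng.normal(size=(300, n, n))
    present = rng.normal(size=(300, n, n)) + blob
    dprime = cho_detectability(absent, present, channels)
    ```

    Tuning, as in the study, would then amount to adjusting the channel parameters until such MO scores best track the human-observer scores.
    
    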

  2. Efficient receiver tuning using differential evolution strategies

    NASA Astrophysics Data System (ADS)

    Wheeler, Caleb H.; Toland, Trevor G.

    2016-08-01

    Differential evolution (DE) is a powerful and computationally inexpensive optimization strategy that can be used to search an entire parameter space or to converge quickly on a solution. The Kilopixel Array Pathfinder Project (KAPPa) is a heterodyne receiver system delivering 5 GHz of instantaneous bandwidth in the tuning range of 645-695 GHz. The fully automated KAPPa receiver test system finds optimal receiver tuning using performance feedback and DE. We present an adaptation of DE for use in rapid receiver characterization. The KAPPa DE algorithm is written in Python 2.7 and is fully integrated with the KAPPa instrument control, data processing, and visualization code. KAPPa develops the technologies needed to realize heterodyne focal-plane arrays containing 1000 pixels. Finding optimal receiver tuning by investigating large parameter spaces is one of many challenges facing the characterization phase of KAPPa, and it is a difficult task to perform by hand. Characterizing or tuning in an automated fashion, without the need for human intervention, is desirable for future large-scale arrays. While many optimization strategies exist, DE is ideal under time and performance constraints because it can be set to converge to a solution rapidly with minimal computational overhead. We discuss how DE is utilized in the KAPPa system, assess its performance, and consider how the KAPPa DE system might be applied to future 1000-pixel array receivers.
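    A minimal sketch of the classic DE/rand/1/bin scheme that such a tuning loop builds on (the objective below is an illustrative stand-in for receiver performance feedback, not the KAPPa code):

    ```python
    import numpy as np

    def differential_evolution(score, bounds, pop_size=20, F=0.8, CR=0.9,
                               generations=100, seed=0):
        """Minimal DE/rand/1/bin minimizer over box bounds [(lo, hi), ...]."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
        fit = np.array([score(x) for x in pop])
        for _ in range(generations):
            for i in range(pop_size):
                # Mutation: combine three distinct members other than i.
                a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                         3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)
                # Binomial crossover with at least one gene from the mutant.
                cross = rng.random(len(bounds)) < CR
                cross[rng.integers(len(bounds))] = True
                trial = np.where(cross, mutant, pop[i])
                # Greedy selection: keep the trial if it is no worse.
                f = score(trial)
                if f <= fit[i]:
                    pop[i], fit[i] = trial, f
        best = int(fit.argmin())
        return pop[best], fit[best]

    # Illustrative stand-in objective: a smooth 2-D "performance" surface
    # with its optimum at (2, -1); not an actual receiver noise model.
    best_x, best_f = differential_evolution(
        lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2,
        bounds=[(-5.0, 5.0), (-5.0, 5.0)])
    ```

    In a receiver context, `score` would wrap a hardware measurement (e.g. noise temperature at a given bias and LO setting), which is why DE's small population and lack of gradient requirements suit the time constraints described above.
    
    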

  3. Analysis of large optical ground stations for deep-space optical communications

    NASA Astrophysics Data System (ADS)

    Garcia-Talavera, M. Reyes; Rivera, C.; Murga, G.; Montilla, I.; Alonso, A.

    2017-11-01

    Inter-satellite and ground-to-satellite optical communications have been successfully demonstrated over more than a decade with several experiments, the most recent being NASA's lunar mission Lunar Atmospheric Dust Environment Explorer (LADEE). The technology is in a mature stage that allows optical communications to be considered as a high-capacity solution for future deep-space communications [1][2], where there is an increasing demand for downlink data rate to improve science return. To serve these deep-space missions, suitable optical ground stations (OGS) providing large collecting areas have to be developed. The design of such OGSs must face both technical and cost constraints in order to achieve an optimum implementation. To that end, different approaches have already been proposed and analyzed, namely a large telescope based on a segmented primary mirror, telescope arrays, and even the combination of RF and optical receivers in modified versions of existing Deep Space Network (DSN) antennas [3][4][5]. Array architectures have been proposed to relax some requirements, and they act as one of the key drivers of the present study. The advantages offered by the array approach are attained at the expense of adding subsystems. Critical issues identified for each implementation include its inherent efficiency and losses, its performance under high-background conditions, and its acquisition, pointing, tracking, and synchronization capabilities. It is worth noting that, due to the photon-counting nature of detection, the system performance is not given solely by the signal-to-noise ratio. To begin the analysis, the main implications of the deep-space scenarios are first summarized, since they are the driving requirements for establishing the technical specifications of the large OGS. Next, both the main characteristics of the OGS and the potential configuration approaches are presented, going deeper into the key subsystems with a strong impact on performance. The different configurations are compared from the technical point of view, taking into account the effect of atmospheric conditions. Finally, a preliminary cost analysis for a large-aperture OGS is presented.

  4. Controls for space structures

    NASA Astrophysics Data System (ADS)

    Balas, Mark

    1991-11-01

    Assembly and operation of large space structures (LSS) in orbit will require robot-assisted docking and berthing of partially-assembled structures. These operations require new solutions to the problems of controls. This is true because of large transient and persistent disturbances, controller-structure interaction with unmodeled modes, poorly known structure parameters, slow actuator/sensor dynamical behavior, and excitation of nonlinear structure vibrations during control and assembly. For on-orbit assembly, controllers must start with finite element models of LSS and adapt on line to the best operating points, without compromising stability. This is not easy to do, since there are often unmodeled dynamic interactions between the controller and the structure. The indirect adaptive controllers are based on parameter estimation. Due to the large number of modes in LSS, this approach leads to very high-order control schemes with consequent poor stability and performance. In contrast, direct model reference adaptive controllers operate to force the LSS to track the desirable behavior of a chosen model. These schemes produce simple control algorithms which are easy to implement on line. One problem with their use for LSS has been that the model must be the same dimension as the LSS - i.e., quite large. A control theory based on the command generator tracker (CGT) ideas of Sobel, Mabins, Kaufman and Wen, Balas to obtain very low-order models based on adaptive algorithms was developed. Closed-loop stability for both finite element models and distributed parameter models of LSS was proved. In addition, successful numerical simulations on several LSS databases were obtained. An adaptive controller based on our theory was also implemented on a flexible robotic manipulator at Martin Marietta Astronautics. Computation schemes for controller-structure interaction with unmodeled modes, the residual mode filters or RMF, were developed. 
The RMF theory was modified to compensate for slow actuator/sensor dynamics. These new ideas are being applied to LSS simulations to demonstrate the ease with which one can incorporate slow actuator/sensor effects into our design. It was also shown that residual mode filter compensation can be modified for small nonlinearities to produce exponentially stable closed-loop control.

  5. Controls for space structures

    NASA Technical Reports Server (NTRS)

    Balas, Mark

    1991-01-01

    Assembly and operation of large space structures (LSS) in orbit will require robot-assisted docking and berthing of partially-assembled structures. These operations require new solutions to the problems of controls. This is true because of large transient and persistent disturbances, controller-structure interaction with unmodeled modes, poorly known structure parameters, slow actuator/sensor dynamical behavior, and excitation of nonlinear structure vibrations during control and assembly. For on-orbit assembly, controllers must start with finite element models of LSS and adapt on line to the best operating points, without compromising stability. This is not easy to do, since there are often unmodeled dynamic interactions between the controller and the structure. The indirect adaptive controllers are based on parameter estimation. Due to the large number of modes in LSS, this approach leads to very high-order control schemes with consequent poor stability and performance. In contrast, direct model reference adaptive controllers operate to force the LSS to track the desirable behavior of a chosen model. These schemes produce simple control algorithms which are easy to implement on line. One problem with their use for LSS has been that the model must be the same dimension as the LSS - i.e., quite large. A control theory based on the command generator tracker (CGT) ideas of Sobel, Mabins, Kaufman and Wen, Balas to obtain very low-order models based on adaptive algorithms was developed. Closed-loop stability for both finite element models and distributed parameter models of LSS was proved. In addition, successful numerical simulations on several LSS databases were obtained. An adaptive controller based on our theory was also implemented on a flexible robotic manipulator at Martin Marietta Astronautics. Computation schemes for controller-structure interaction with unmodeled modes, the residual mode filters or RMF, were developed. 
The RMF theory was modified to compensate for slow actuator/sensor dynamics. These new ideas are being applied to LSS simulations to demonstrate the ease with which one can incorporate slow actuator/sensor effects into our design. It was also shown that residual mode filter compensation can be modified for small nonlinearities to produce exponentially stable closed-loop control. A theory for disturbance accommodating controllers based on reduced-order models of structures was developed, and stability results for these controllers in closed loop with large-scale finite element models of structures were obtained.

  6. Calibration Laboratory Capabilities Listing as of April 2009

    NASA Technical Reports Server (NTRS)

    Kennedy, Gary W.

    2009-01-01

    This document reviews the Calibration Laboratory capabilities of various NASA centers (i.e., Glenn Research Center and Plum Brook Test Facility; Kennedy Space Center; Marshall Space Flight Center; Stennis Space Center; and White Sands Test Facility). Some of the parameters reported are: alternating current, direct current, dimensional, mass, force, torque, pressure and vacuum, safety, and thermodynamic parameters. Some centers reported other parameters.

  7. Space weather monitoring and forecasting in South America: products from the user requests to the development of regional magnetic indices and GNSS vertical error maps

    NASA Astrophysics Data System (ADS)

    Denardini, Clezio Marcos; Padilha, Antonio; Takahashi, Hisao; Souza, Jonas; Mendes, Odim; Batista, Inez S.; SantAnna, Nilson; Gatto, Rubens; Costa, D. Joaquim

    In August 2007 the National Institute for Space Research started a task force to develop and operate a space weather program known by the acronym Embrace, which stands for the Portuguese name “Estudo e Monitoramento BRAsileiro de Clima Espacial” (Brazilian Space Weather Study and Monitoring Program). The main purpose of the Embrace Program is to monitor space climate and weather from the Sun, the interplanetary space, the magnetosphere and the ionosphere-atmosphere, and to provide useful information to space-related communities in technological, industrial and academic areas. Since then we have visited several different space weather customers and hosted two workshops for Brazilian space weather users at the Embrace facilities. Based on the inputs and requests collected from the users, the Embrace Program decided to monitor several physical parameters of the Sun-Earth environment through a large ground-based network of scientific sensors and in collaboration with partner space weather centers. Most of these physical parameters are published daily on the Brazilian space weather program web portal, covering the entire available sensor network. A comprehensive data bank and an interface layer are under development to allow easy and direct access to the useful information. Users can now count on products derived from a GNSS monitoring network that covers most of the South American territory; a digisonde network that monitors ionospheric profiles at two equatorial sites and one low-latitude site; several solar radio telescopes that monitor solar activity; and a magnetometer network, in addition to a global ionospheric physical model. Regarding outreach, we publish a daily bulletin in Portuguese with the status of the space weather environment on the Sun, in the interplanetary medium and close to the Earth. 
Since December 2011, all these activities have been carried out at the Embrace headquarters, a building located on INPE's main campus. Recently, we have released brand-new products, among them regional magnetic indices and the GNSS vertical error map over South America. Contact author: C. M. Denardini (clezio.denardin@inpe.br)

  8. Fusion of AIRSAR and TM Data for Parameter Classification and Estimation in Dense and Hilly Forests

    NASA Technical Reports Server (NTRS)

    Moghaddam, Mahta; Dungan, J. L.; Coughlan, J. C.

    2000-01-01

    The expanded remotely sensed data space consisting of coincident radar backscatter and optical reflectance data provides for a more complete description of the Earth surface. This is especially useful where many parameters are needed to describe a certain scene, such as in the presence of dense and complex-structured vegetation or where there is considerable underlying topography. The goal of this paper is to use a combination of radar and optical data to develop a methodology for parameter classification for dense and hilly forests, and further, class-specific parameter estimation. The area to be used in this study is the H. J. Andrews Forest in Oregon, one of the Long-Term Ecological Research (LTER) sites in the US. This area consists of various dense old-growth conifer stands, and contains significant topographic relief. The Andrews forest has been the subject of many ecological studies over several decades, resulting in an abundance of ground measurements. Recently, biomass and leaf-area index (LAI) values for approximately 30 reference stands have also become available which span a large range of those parameters. The remote sensing data types to be used are the C-, L-, and P-band polarimetric radar data from the JPL airborne SAR (AIRSAR), the C-band single-polarization data from the JPL topographic SAR (TOPSAR), and the Thematic Mapper (TM) data from Landsat, all acquired in late April 1998. The total number of useful independent data channels from the AIRSAR is 15 (three frequencies, each with three unique polarizations and amplitude and phase of the like-polarized correlation), from the TOPSAR is 2 (amplitude and phase of the interferometric correlation), and from the TM is 6 (the thermal band is not used). The range pixel spacing of the AIRSAR is 3.3m for C- and L-bands and 6.6m for P-band. The TOPSAR pixel spacing is 10m, and the TM pixel size is 30m. 
To achieve parameter classification, first a number of parameters are defined which are of interest to ecologists for forest process modeling. These parameters include total biomass, leaf biomass, LAI, and tree height. The remote sensing data from radar and TM are used to formulate a multivariate analysis problem given the ground measurements of the parameters. Each class of each parameter is defined by a probability density function (pdf), the spread of which defines the range of that class. High classification accuracy results from situations in which little overlap occurs between pdfs. Classification results provide the basis for the future work of class-specific parameter estimation using radar and optical data. This work was performed in part by the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, and in part by the NASA Ames Research Center, Moffett Field, CA, both under contract from the National Aeronautics and Space Administration.

  9. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and an improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method for computing the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes only about twice the time of a point-source evaluation, which argues for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python codes.
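    For context, here is a sketch of the standard point-source point-lens magnification that the quadrupole and hexadecapole formulas approximate corrections to, together with a brute-force finite-source average for comparison. This is illustrative only; the paper's optimized routines are the published open-source codes:

    ```python
    import numpy as np

    def point_source_magnification(u):
        """Point-source point-lens magnification A(u), u in Einstein radii."""
        u2 = u * u
        return (u2 + 2) / (u * np.sqrt(u2 + 4))

    def finite_source_magnification(u, rho, n=201):
        """Brute-force average of A over a uniform source disk of radius rho.

        This is the expensive integral that the quadrupole (O(rho^2)) and
        hexadecapole (O(rho^4)) expansions replace with a handful of
        point-source evaluations.
        """
        x, y = np.meshgrid(np.linspace(-rho, rho, n), np.linspace(-rho, rho, n))
        inside = x**2 + y**2 <= rho**2
        d = np.hypot(u + x, y)[inside]          # lens-to-element distances
        return float(point_source_magnification(d).mean())

    a_point = point_source_magnification(1.0)          # exact: 3/sqrt(5)
    a_disk = finite_source_magnification(1.0, 0.01)    # tiny source: ~ a_point
    ```

    For a small source (rho = 0.01 here) the disk average differs from the point-source value only at order rho^2, which is exactly the regime where the cheap multipole approximations are accurate.
    
    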

  10. Characteristics of the flow around tandem flapping wings

    NASA Astrophysics Data System (ADS)

    Muscutt, Luke; Ganapathisubramani, Bharathram; Weymouth, Gabriel; The University of Southampton Team

    2014-11-01

    Vortex recapture is a fundamental fluid mechanics phenomenon which is important to many fields. Any large scale vorticity contained within a freestream flow may affect the aerodynamic properties of a downstream body. In the case of tandem flapping wings, the front wing generates strong large scale vorticity which impinges on the hind wing. The characteristics of this interaction are greatly affected by the spacing, and the phase of flapping between the front and rear wings. The interaction of the vorticity of the rear wing with the shed vorticity of the front wing may be constructive or destructive, increasing thrust or efficiency of the hind wing when compared to a wing operating in isolation. Knowledge of the parameter space where the maximum increases in these are obtained is important for the development of tandem wing unmanned air and underwater vehicles, commercial aerospace and renewable energy applications. This question is addressed with a combined computational and experimental approach, and a discussion of these is presented.

  11. Magneto-thermal reconnection of significance to space and astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coppi, B., E-mail: coppi@psfc.mit.edu

    Magnetic reconnection processes that can be excited in collisionless plasma regimes are of interest to space and astrophysics to the extent that the layers in which reconnection takes place are not rendered unrealistically small by their unfavorable dependence on the relevant macroscopic distances. The equations describing new modes that produce magnetic reconnection over relatively small but significant distances, unlike tearing-type modes, even when dealing with large macroscopic scale lengths, are given. The considered modes are associated with a finite electron temperature gradient and have a phase velocity in the direction of the electron diamagnetic velocity that can reverse to the opposite direction as the relevant parameters are varied over a relatively wide range. The electron temperature perturbation has a primary role in the relevant theory. In particular, when referring to regimes in which the longitudinal (to the magnetic field) electron thermal conductivity is relatively large, the electron temperature perturbation becomes singular if the ratio of the transverse to the longitudinal electron thermal conductivity becomes negligible.

  12. Kepler eclipsing binaries with δ Scuti components and tidally induced heartbeat stars

    NASA Astrophysics Data System (ADS)

    Guo, Zhao; Gies, Douglas R.; Matson, Rachel A.

    δ Scuti stars are generally fast rotators and their pulsations are not in the asymptotic regime, so the interpretation of their pulsation spectra is a very difficult task. Binary stars, especially eclipsing systems, offer us the opportunity to constrain the space of fundamental stellar parameters. First, we show the results for KIC9851944 and KIC4851217 as two case studies. We found the signature of the large frequency separation in the pulsation spectra of both stars. The observed mean stellar density and the large frequency separation obey the linear relation in log-log space found by Suarez et al. (2014) and García Hernández et al. (2015). Second, we apply the simple `one-layer model' of Moreno & Koenigsberger (1999) to the prototype heartbeat star KOI-54. The model naturally reproduces the tidally induced high-frequency oscillations, and their frequencies are very close to the observed frequencies at 90 and 91 times the orbital frequency.

  13. Variability in mutational fitness effects prevents full lethal transitions in large quasispecies populations

    NASA Astrophysics Data System (ADS)

    Sardanyés, Josep; Simó, Carles; Martínez, Regina; Solé, Ricard V.; Elena, Santiago F.

    2014-04-01

    The distribution of mutational fitness effects (DMFE) is crucial to the evolutionary fate of quasispecies. In this article we analyze the effect of the DMFE on the dynamics of a large quasispecies by means of a phenotypic version of the classic Eigen's model that incorporates beneficial, neutral, deleterious, and lethal mutations. By parameterizing the model with available experimental data on the DMFE of Vesicular stomatitis virus (VSV) and Tobacco etch virus (TEV), we found that increasing mutation does not totally push the entire viral quasispecies towards deleterious or lethal regions of the phenotypic sequence space. The probability of finding regions in the parameter space of the general model that results in a quasispecies only composed by lethal phenotypes is extremely small at equilibrium and in transient times. The implications of our findings can be extended to other scenarios, such as lethal mutagenesis or genomically unstable cancer, where increased mutagenesis has been suggested as a potential therapy.
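    The phenotypic Eigen-type dynamics described above can be sketched as a small discrete-time selection-mutation iteration over the four phenotype classes. The fitness values and the mutation-effect matrix below are hypothetical illustrations, not the VSV/TEV-calibrated distributions used in the article:

    ```python
    import numpy as np

    # Phenotype classes: beneficial, neutral, deleterious, lethal.
    # Illustrative fitness values; lethal phenotypes do not replicate.
    fitness = np.array([1.2, 1.0, 0.5, 0.0])

    # Illustrative DMFE: each row gives the class distribution of mutants
    # arising from that class (most mutations deleterious or lethal).
    dmfe = np.array([[0.05, 0.25, 0.45, 0.25]] * 4)

    def step(x, mu):
        """One generation of fitness-weighted replication followed by mutation."""
        Q = (1 - mu) * np.eye(4) + mu * dmfe   # per-generation transition matrix
        x = (x * fitness) @ Q                  # replicate, then mutate
        return x / x.sum()                     # renormalize class frequencies

    x = np.array([0.0, 1.0, 0.0, 0.0])         # start as a pure neutral population
    for _ in range(500):
        x = step(x, mu=0.5)
    ```

    Even at a high mutation rate (mu = 0.5 here), the lethal class never absorbs the population at equilibrium, because lethal phenotypes contribute nothing to the next generation; this mirrors, in miniature, the paper's finding that fully lethal transitions are essentially never reached.
    
    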

  14. Free energy surface of an intrinsically disordered protein: comparison between temperature replica exchange molecular dynamics and bias-exchange metadynamics.

    PubMed

    Zerze, Gül H; Miller, Cayla M; Granata, Daniele; Mittal, Jeetain

    2015-06-09

    Intrinsically disordered proteins (IDPs), which are expected to be largely unstructured under physiological conditions, make up a large fraction of eukaryotic proteins. Molecular dynamics simulations have been utilized to probe structural characteristics of these proteins, which are not always easily accessible to experiments. However, exploration of the conformational space by brute force molecular dynamics simulations is often limited by short time scales. Present literature provides a number of enhanced sampling methods to explore protein conformational space in molecular simulations more efficiently. In this work, we present a comparison of two enhanced sampling methods: temperature replica exchange molecular dynamics and bias exchange metadynamics. By investigating both the free energy landscape as a function of pertinent order parameters and the per-residue secondary structures of an IDP, namely, human islet amyloid polypeptide, we found that the two methods yield similar results as expected. We also highlight the practical difference between the two methods by describing the path that we followed to obtain both sets of data.
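    The core move in temperature replica exchange, one of the two methods compared above, is a Metropolis criterion for swapping configurations between neighboring temperatures. A minimal sketch (generic illustration with kB in kcal/(mol K); not code from any particular simulation package):

    ```python
    import math
    import random

    def swap_accepted(E_i, E_j, T_i, T_j, kB=0.0019872041, rng=random.random):
        """Metropolis acceptance test for exchanging replicas i and j in T-REMD.

        Accept with probability min(1, exp(delta)), where
        delta = (beta_i - beta_j) * (E_i - E_j) and beta = 1 / (kB * T).
        """
        delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
        return delta >= 0 or rng() < math.exp(delta)
    ```

    Swaps that move lower-energy configurations to lower temperatures are always accepted; the exponential tail lets the ladder of temperatures mix, which is what carries the disordered protein over barriers that trap brute-force dynamics.
    
    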

  15. Effects of cosmic acceleration on black hole thermodynamics

    NASA Astrophysics Data System (ADS)

    Mandal, Abhijit

    2016-07-01

    The direct local impacts of cosmic acceleration upon a black hole are a matter of interest. Babichev et al. showed previously that, according to the Friedmann equations governing the exotic fluid that dominates the other constituents of the universe and drives the present-day accelerating phase (i.e., that violates the strong and eventually the weak energy condition), accretion of such a fluid will essentially shrink the mass of the central black hole. But this is a global impact. The local changes in the spacetime geometry next to the black hole can be analysed from a modified metric governing the spacetime surrounding a black hole. A charged de Sitter black hole solution surrounded by a quintessence field is chosen for this purpose. Different thermodynamic parameters are analysed for different values of the quintessence equation-of-state parameter ω_q. Specific jumps in the nature of the thermodynamic space near the quintessence or phantom barrier are noted and physically interpreted as far as possible. The nature of the phase transitions, and the situations in which these transitions take place, are also explored. It is found that before quintessence starts to act (ω_q = -0.33 > -1/3) it was preferable to have a small unstable black hole followed by a large stable one, whereas in the quintessence regime (-1/3 > ω_q > -1) black holes are destined to be large unstable ones, preceded by stable or unstable small- or intermediate-mass black holes.

  16. Effects of two successive parity-invariant point interactions on one-dimensional quantum transmission: Resonance conditions for the parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konno, Kohkichi, E-mail: kohkichi@tomakomai-ct.ac.jp; Nagasawa, Tomoaki, E-mail: nagasawa@tomakomai-ct.ac.jp; Takahashi, Rohta, E-mail: takahashi@tomakomai-ct.ac.jp

    We consider the scattering of a quantum particle by two independent, successive parity-invariant point interactions in one dimension. The parameter space for the two point interactions is given by the direct product of two tori, which is described by four parameters. By investigating the effects of the two point interactions on the transmission probability of plane wave, we obtain the conditions for the parameter space under which perfect resonant transmission occur. The resonance conditions are found to be described by symmetric and anti-symmetric relations between the parameters.
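    The transmission calculation behind such resonance conditions can be illustrated with the simplest point interaction, a delta potential, rather than the general four-parameter family on the torus considered in the paper. A minimal transfer-matrix sketch (units with ħ²/2m = 1; for two identical deltas of strength g separated by L, perfect transmission occurs where tan(kL) = -2k/g in this simplified case):

    ```python
    import numpy as np

    def delta_matrix(g, k, a):
        """Transfer matrix of a delta potential of strength g at x = a.

        Matching conditions: psi continuous, psi'(a+) - psi'(a-) = g * psi(a).
        Maps (right-moving, left-moving) amplitudes on the left to those on the right.
        """
        eta = g / (2.0 * k)
        ph = np.exp(2j * k * a)
        return np.array([[1 - 1j * eta, -1j * eta / ph],
                         [1j * eta * ph, 1 + 1j * eta]])

    def transmission(g, k, L):
        """Transmission probability |t|^2 through two identical deltas at x = 0 and x = L."""
        M = delta_matrix(g, k, L) @ delta_matrix(g, k, 0.0)
        return 1.0 / abs(M[1, 1]) ** 2          # t = det(M) / M22 with det(M) = 1

    # Scan the wavenumber: perfect (T = 1) resonances appear on a fine grid.
    ks = np.linspace(0.1, 10.0, 20000)
    T = np.array([transmission(2.0, k, 1.0) for k in ks])
    ```

    The general parity-invariant interactions of the paper replace `delta_matrix` with a four-parameter family, but the recipe is the same: multiply the two transfer matrices and read the resonance condition off the (2,2) entry.
    
    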

  17. Mapping an operator's perception of a parameter space

    NASA Technical Reports Server (NTRS)

    Pew, R. W.; Jagacinski, R. J.

    1972-01-01

    Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.

  18. Relationship of spaced antenna and Doppler techniques for velocity measurements (keynote paper), part 3

    NASA Technical Reports Server (NTRS)

    Vincent, R. A.

    1984-01-01

    The Doppler, spaced-antenna and interferometric methods of measuring wind velocities all use the same basic information, the Doppler shifts imposed on backscattered radio waves, but they process it in different ways. The Doppler technique is most commonly used at VHF since narrow radar beams are readily available. However, the spaced antenna (SA) method has been successfully used with the SOUSY and Adelaide radars. At MF/HF the spaced antenna method is widely used, since the large antenna arrays (diameter 1 km) required to generate narrow beams are expensive to construct. Where arrays of this size are available, the Doppler method has also been used successfully (e.g., Adelaide and Brisbane). In principle, the factors that influence the choice of beam pointing angle and the optimum antenna spacing are the same whether operation is at MF or VHF. Many of the parameters that govern the efficient use of wind-measuring systems have been discussed at previous MST workshops. Some of the points raised by these workshops are summarized.

  19. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter spaces of up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
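    The first step above, variance-based global sensitivity analysis, can be sketched with a Saltelli-type Monte Carlo estimator of first-order Sobol indices. The model below is a hypothetical linear stand-in with known indices (16/21, 4/21, 1/21), not the scramjet flow simulation:

    ```python
    import numpy as np

    def first_order_sobol(model, dim, n=50000, seed=0):
        """Saltelli-type Monte Carlo estimator of first-order Sobol indices S_i."""
        rng = np.random.default_rng(seed)
        A = rng.random((n, dim))                 # two independent sample matrices
        B = rng.random((n, dim))
        fA, fB = model(A), model(B)
        total_var = np.var(np.concatenate([fA, fB]))
        S = np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                  # A with column i taken from B
            S[i] = np.mean(fB * (model(ABi) - fA)) / total_var
        return S

    # Hypothetical stand-in model with uniform [0, 1] inputs; the exact
    # first-order indices are 16/21, 4/21 and 1/21.
    model = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]
    S = first_order_sobol(model, dim=3)
    ```

    Ranking the estimated S_i identifies the influential inputs; parameters with negligible indices can then be frozen, reducing the stochastic dimension exactly as described in the abstract.
    
    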

  1. Validation of geometric models for fisheye lenses

    NASA Astrophysics Data System (ADS)

    Schneider, D.; Schwalbe, E.; Maas, H.-G.

    The paper focuses on the photogrammetric investigation of geometric models for different types of optical fisheye constructions (equidistant, equisolid-angle, stereographic and orthographic projection). These models were implemented and thoroughly tested in a spatial resection and a self-calibrating bundle adjustment. For this purpose, fisheye images were taken with a Nikkor 8 mm fisheye lens on a Kodak DSC 14n Pro digital camera in a hemispherical calibration room. Both the spatial resection and the bundle adjustment resulted in a standard deviation of unit weight of 1/10 pixel with a suitable set of simultaneous calibration parameters introduced into the camera model. The camera-lens combination was treated with all four of the basic models mentioned above. Using the same set of additional lens distortion parameters, the differences between the models can largely be compensated, delivering almost the same precision parameters. The relative object space precision obtained from the bundle adjustment was ca. 1:10,000 of the object dimensions. This can be considered a very satisfying result, as fisheye images generally have a lower geometric resolution, as a consequence of their large field of view, and an inferior imaging quality in comparison to most central perspective lenses.
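The four basic projection models map the incidence angle θ to a radial image distance r in closed form (textbook projection equations, with focal length f and θ in radians):

```python
import math

def fisheye_radius(theta: float, f: float, model: str) -> float:
    """Radial distance r from the principal point for incidence angle theta
    under the four basic fisheye projection models."""
    if model == "equidistant":
        return f * theta
    if model == "equisolid":
        return 2.0 * f * math.sin(theta / 2.0)
    if model == "stereographic":
        return 2.0 * f * math.tan(theta / 2.0)
    if model == "orthographic":
        return f * math.sin(theta)
    raise ValueError(f"unknown model: {model}")

# At the rim of the hemispherical field (theta = 90 deg) an 8 mm lens maps to
# r = 12.57, 11.31, 16.00 and 8.00 mm respectively for the four models, so the
# models differ most toward the image border.
```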

  2. Dynamics of a modified Hindmarsh-Rose neural model with random perturbations: Moment analysis and firing activities

    NASA Astrophysics Data System (ADS)

    Mondal, Argha; Upadhyay, Ranjit Kumar

    2017-11-01

    In this paper, an attempt has been made to understand the activity of the mean membrane voltage and subsidiary system variables through moment equations (i.e., mean, variance and covariances) in a noisy environment. We consider a biophysically plausible modified Hindmarsh-Rose (H-R) neural system, injected with an applied current and exhibiting spiking-bursting phenomena. The effects of the predominant parameters on the dynamical behavior of the modified H-R system are investigated. Numerically, it exhibits period-doubling, period-halving bifurcation and chaos phenomena. Further, the nonlinear system has been analyzed for the first- and second-order moments with additive stochastic perturbations. The deterministic system has been solved using a fourth-order Runge-Kutta method and the noisy system by Euler's scheme. It is demonstrated that the firing properties of neurons evoking an action potential in a certain parameter space of the large exact system can be estimated using an approximated model. Strong stimulation can cause an increase or decrease in the firing patterns. For a fixed set of parameter values, the firing behavior and the dynamical differences between the collective variables of the large exact and the approximated systems are investigated.
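The noisy integration step described above can be sketched with the classic (unmodified) Hindmarsh-Rose equations and textbook parameter values; the paper's modified system and its moment equations are not reproduced here:

```python
import numpy as np

def hr_euler_maruyama(T=500.0, dt=0.01, I=3.0, sigma=0.05, seed=0):
    """Euler-Maruyama integration of the classic Hindmarsh-Rose model with
    additive white noise on the membrane-voltage equation."""
    a, b, c, d, r, s, xR = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y, z = np.empty(n), np.empty(n), np.empty(n)
    x[0], y[0], z[0] = -1.6, -10.0, 2.0
    for k in range(n - 1):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment
        x[k+1] = x[k] + dt*(y[k] - a*x[k]**3 + b*x[k]**2 - z[k] + I) + sigma*dW
        y[k+1] = y[k] + dt*(c - d*x[k]**2 - y[k])
        z[k+1] = z[k] + dt*r*(s*(x[k] - xR) - z[k])
    return x, y, z

# Sample moments of the membrane voltage, the quantities the moment
# equations approximate:
x, _, _ = hr_euler_maruyama()
mean_v, var_v = x.mean(), x.var()
```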

  3. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    NASA Astrophysics Data System (ADS)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
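The first step, variance-based global sensitivity analysis, can be sketched with a pick-freeze (Saltelli-type) Monte Carlo estimator of first-order Sobol indices; the additive toy model below is purely illustrative, not the scramjet simulator:

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Saltelli-style estimate of first-order Sobol indices S_i for a model
    f evaluated on rows of an (n, d) array of independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # "freeze" every input except the i-th
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy model with variance shares 1:4:9 -> S ~ (0.07, 0.29, 0.64); inputs
# with small S_i are candidates for dropping from the stochastic space.
S = first_order_sobol(lambda X: X[:, 0] + 2*X[:, 1] + 3*X[:, 2], d=3)
```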

  4. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE PAGES

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  5. Deployment simulation of a deployable reflector for earth science application

    NASA Astrophysics Data System (ADS)

    Wang, Xiaokai; Fang, Houfei; Cai, Bei; Ma, Xiaofei

    2015-10-01

    A novel mission concept, NEXRAD-In-Space (NIS), has been developed for monitoring hurricanes, cyclones and other severe storms from a geostationary orbit. It requires a space-deployable 35-meter diameter Ka-band (35 GHz) reflector. NIS can measure hurricane precipitation intensity, dynamics and life cycle; this information is necessary for predicting the track, intensity, rain rate and hurricane-induced floods. To meet the requirements of the radar system, a Membrane Shell Reflector Segment (MSRS) reflector technology has been developed and several technologies have been evaluated. However, the deployment of this large, high-precision reflector had not been analyzed. As a preliminary study, a scaled tetrahedral truss reflector with a spring-driven deployment system was built and tested, and a deployment dynamics analysis of this scaled reflector was performed in ADAMS to understand its deployment behavior. Eliminating the redundant constraints in a reflector system with a large number of moving parts is a challenging issue; a primitive joint and flexible struts were introduced into the analytical model to effectively eliminate the over-constraints. Using a high-speed camera and a force transducer, a deployment experiment on a single-bay tetrahedral module was conducted. With the test results, an optimization was performed using the parameter optimization module of ADAMS to obtain the parameters of the analytical model, and these parameters were incorporated into the analytical model of the whole reflector. The analysis results show that the deployment of the reflector with a fixed boundary proceeds in three stages: a rapid deployment stage, a slow deployment stage and an impact stage. Insight into the distribution of force peaks in the reflector can help optimize the design of the structure.

  6. Nonextensive Entropy Approach to Space Plasma Fluctuations and Turbulence

    NASA Astrophysics Data System (ADS)

    Leubner, M. P.; Vörös, Z.; Baumjohann, W.

    Spatial intermittency in fully developed turbulence is an established feature of astrophysical plasma fluctuations and is particularly apparent in the interplanetary medium from in situ observations. In this situation the classical Boltzmann-Gibbs extensive thermo-statistics, applicable when microscopic interactions and memory are short-ranged and the environment is a continuous and differentiable manifold, fails. Upon generalization of the entropy function to nonextensivity, accounting for long-range interactions and thus for correlations in the system, it is demonstrated that the corresponding probability distribution functions (PDFs) are members of a family of specific power-law distributions. In particular, the resulting theoretical bi-κ functional accurately reproduces the observed global leptokurtic, non-Gaussian shape of the increment PDFs of characteristic solar wind variables on all scales, where nonlocality in turbulence is controlled via a multiscale coupling parameter. Gradual decoupling is obtained by enhancing the spatial separation scale, corresponding to increasing κ-values in slow solar wind conditions, where a Gaussian is approached in the limit of large scales. In contrast, the scaling properties in the high-speed solar wind are predominantly governed by the mean energy or variance of the distribution, which appears as a second parameter in the theory. The PDFs of solar wind scalar field differences are computed from WIND and ACE data for different time lags and bulk speeds and analyzed within the nonextensive theory, where a particular nonlinear dependence of the coupling parameter and variance on scale arises for the best-fitting theoretical PDFs. Consequently, nonlocality in fluctuations, related to both turbulence and its large-scale driving, should be linked to long-range interactions in the context of nonextensive entropy generalization, providing fundamentally the physical background of the observed scale dependence of fluctuations in intermittent space plasmas.

  7. Instabilities in large economies: aggregate volatility without idiosyncratic shocks

    NASA Astrophysics Data System (ADS)

    Bonart, Julius; Bouchaud, Jean-Philippe; Landier, Augustin; Thesmar, David

    2014-10-01

    We study a dynamical model of interconnected firms which allows for certain market imperfections and frictions, restricted here to be myopic price forecasts and slow adjustment of production. Whereas the standard rational equilibrium is still formally a stationary solution of the dynamics, we show that this equilibrium becomes linearly unstable in a whole region of parameter space. When agents attempt to reach the optimal production target too quickly, coordination breaks down and the dynamics becomes chaotic. In the unstable, ‘turbulent’ phase, the aggregate volatility of the total output remains substantial even when the amplitude of idiosyncratic shocks goes to zero or when the size of the economy becomes large. In other words, crises become endogenous. This suggests an interesting resolution of the ‘small shocks, large business cycles’ puzzle.
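Linear instability of the stationary equilibrium can be checked directly from the spectrum of the Jacobian at the fixed point; the 2x2 matrix family below is illustrative only, not the paper's firm-network model:

```python
import numpy as np

def is_linearly_stable(jacobian: np.ndarray) -> bool:
    """A fixed point of x' = F(x) is linearly stable iff every eigenvalue
    of the Jacobian dF/dx evaluated there has negative real part."""
    return bool(np.all(np.linalg.eigvals(jacobian).real < 0.0))

# Illustrative family: raising an adjustment-speed parameter g pushes an
# eigenvalue across the imaginary axis and destabilizes the equilibrium,
# mirroring the "too fast adjustment -> instability" mechanism above.
def jac(g: float) -> np.ndarray:
    return np.array([[-1.0, g], [-g, g - 0.5]])
```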

  8. A Generalized Simple Formulation of Convective Adjustment ...

    EPA Pesticide Factsheets

    The convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, the new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. The formulation was then implemented into a regional model (WRF) for testing and evaluation. Results indicate that this simple τ formulation gives realistic temporal and spatial variations of τ across the continental U.S., as well as realistic grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la
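The factsheet does not reproduce the formulation itself; a common starting point for such timescales is the classic scaling argument of cloud depth divided by a mean updraft speed, shown below purely for orientation (this is not the EPA formulation):

```python
def convective_tau(cloud_depth_m: float, mean_updraft_ms: float) -> float:
    """Illustrative convective adjustment timescale: depth / updraft speed.
    (Only the textbook scaling argument; the generalized formulation in the
    work above is not reproduced here.)"""
    return cloud_depth_m / mean_updraft_ms

# A 10 km deep convective cloud with a 5 m/s mean updraft turns over in
# about 2000 s (~33 min), the right order for typical prescribed tau values.
tau = convective_tau(10_000.0, 5.0)
```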

  9. Design of the 1.5 MW, 30-96 MHz ultra-wideband 3 dB high power hybrid coupler for Ion Cyclotron Resonance Frequency (ICRF) heating in fusion grade reactor.

    PubMed

    Yadav, Rana Pratap; Kumar, Sunil; Kulkarni, S V

    2016-01-01

    The design and development procedure of a strip-line-based 1.5 MW, 30-96 MHz, ultra-wideband high-power 3 dB hybrid coupler is presented, and its applicability to ion cyclotron resonance heating (ICRH) in a tokamak is discussed. For high power handling capability, the spacing between the conductors and ground needs to be very large. Hence other structural parameters such as the strip width, strip thickness, coupling gap and junction also become large; these can be increased only up to an optimum limit, beyond which various constraints such as fabrication tolerances, discontinuities, and the excitation of higher-order TE and TM modes become prominent and significantly deteriorate the desired parameters of the coupled-line system. In the designed hybrid coupler, two 8.34 dB coupled-line sections are connected in tandem to obtain the desired coupling of 3 dB, and air is used as the dielectric. The spacing between ground and conductors is taken as 0.164 m for 1.5 MW power handling capability. To achieve the desired spacing, each 8.34 dB segment is designed with inner dimensions of 3.6 × 1.0 × 40 cm, in which the above constraints have been realized, compensated and applied in the design of the 1.5 MW hybrid coupler presented in the paper.
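The tandem choice is easy to verify numerically: if each stage has voltage coupling k, two identical couplers in tandem couple 2k√(1−k²), so two 8.34 dB stages combine to 3 dB. This is the standard coupled-line identity, sketched here independently of the paper's geometry:

```python
import math

def tandem_coupling_db(single_stage_db: float) -> float:
    """Overall coupling (dB) of two identical directional couplers in tandem."""
    k = 10.0 ** (-single_stage_db / 20.0)        # voltage coupling of one stage
    k_tandem = 2.0 * k * math.sqrt(1.0 - k * k)  # tandem voltage coupling
    return -20.0 * math.log10(k_tandem)

# Two 8.34 dB stages -> ~3.0 dB overall, matching the design above.
overall = tandem_coupling_db(8.34)
```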

  10. Improved Anomaly Detection using Integrated Supervised and Unsupervised Processing

    NASA Astrophysics Data System (ADS)

    Hunt, B.; Sheppard, D. G.; Wetterer, C. J.

    There are two broad technologies of signal processing applicable to space object feature identification using nonresolved imagery: supervised processing analyzes a large set of data for common characteristics that can then be used to identify, transform, and extract information from new data taken of the same given class (e.g., a support vector machine); unsupervised processing utilizes detailed physics-based models that generate comparison data that can then be used to estimate parameters presumed to be governed by the same models (e.g., estimation filters). Both approaches have been used in non-resolved space object identification and yield similar results, yet arrive at them through vastly different processes. The goal of integrating the two is to achieve even greater performance by building on this process diversity. Specifically, both supervised and unsupervised processing will jointly operate on brightness (radiometric flux intensity) measurements reflected by space objects and observed by a ground station to determine whether a particular day conforms to a nominal operating mode (as determined from a training set) or exhibits anomalous behavior in which a particular parameter (e.g., attitude or solar panel articulation angle) has changed in some way. It is demonstrated in a variety of different scenarios that the integrated process achieves greater performance than either of the separate processes alone.

  11. Constraining axion-like-particles with hard X-ray emission from magnetars

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-François; Sinha, Kuver

    2018-06-01

    Axion-like particles (ALPs) produced in the core of a magnetar will convert to photons in the magnetosphere, leading to possible signatures in the hard X-ray band. We perform a detailed calculation of the ALP-to-photon conversion probability in the magnetosphere, recasting the coupled differential equations that describe ALP-photon propagation into a form that is efficient for large scale numerical scans. We show the dependence of the conversion probability on the ALP energy, mass, ALP-photon coupling, magnetar radius, surface magnetic field, and the angle between the magnetic field and direction of propagation. Along the way, we develop an analytic formalism to perform similar calculations in more general n-state oscillation systems. Assuming ALP emission rates from the core that are just subdominant to neutrino emission, we calculate the resulting constraints on the ALP mass versus ALP-photon coupling space, taking SGR 1806-20 as an example. In particular, we take benchmark values for the magnetar radius and core temperature, and constrain the ALP parameter space by the requirement that the luminosity from ALP-to-photon conversion should not exceed the total observed luminosity from the magnetar. The resulting constraints are competitive with constraints from helioscope experiments in the relevant part of ALP parameter space.
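In the simplest homogeneous-field limit, the two-state ALP-photon system has a closed-form conversion probability. The sketch below is that textbook limit only, not the full spatially varying magnetosphere calculation performed in the paper:

```python
import math

def alp_photon_prob(delta_a: float, delta_g: float,
                    delta_m: float, L: float) -> float:
    """Two-state ALP-photon conversion probability over a path L through a
    uniform field. delta_a, delta_g are the diagonal (mass/plasma) phase
    terms and delta_m the off-diagonal mixing term, all in inverse length."""
    d = delta_a - delta_g
    osc = math.sqrt(d * d + 4.0 * delta_m * delta_m)  # oscillation wavenumber
    return (2.0 * delta_m / osc) ** 2 * math.sin(osc * L / 2.0) ** 2
```

On resonance (delta_a = delta_g) the mixing is maximal and the probability oscillates between 0 and 1 with period π/delta_m; off resonance the amplitude is suppressed by the (2·delta_m/osc)² prefactor.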

  12. Design of the 1.5 MW, 30-96 MHz ultra-wideband 3 dB high power hybrid coupler for Ion Cyclotron Resonance Frequency (ICRF) heating in fusion grade reactor

    NASA Astrophysics Data System (ADS)

    Yadav, Rana Pratap; Kumar, Sunil; Kulkarni, S. V.

    2016-01-01

    The design and development procedure of a strip-line-based 1.5 MW, 30-96 MHz, ultra-wideband high-power 3 dB hybrid coupler is presented, and its applicability to ion cyclotron resonance heating (ICRH) in a tokamak is discussed. For high power handling capability, the spacing between the conductors and ground needs to be very large. Hence other structural parameters such as the strip width, strip thickness, coupling gap and junction also become large; these can be increased only up to an optimum limit, beyond which various constraints such as fabrication tolerances, discontinuities, and the excitation of higher-order TE and TM modes become prominent and significantly deteriorate the desired parameters of the coupled-line system. In the designed hybrid coupler, two 8.34 dB coupled-line sections are connected in tandem to obtain the desired coupling of 3 dB, and air is used as the dielectric. The spacing between ground and conductors is taken as 0.164 m for 1.5 MW power handling capability. To achieve the desired spacing, each 8.34 dB segment is designed with inner dimensions of 3.6 × 1.0 × 40 cm, in which the above constraints have been realized, compensated and applied in the design of the 1.5 MW hybrid coupler presented in the paper.

  13. Phased array feed design technology for Large Aperture Microwave Radiometer (LAMR) Earth observations

    NASA Technical Reports Server (NTRS)

    Schuman, H. K.

    1992-01-01

    An assessment of the potential and limitations of phased array antennas in space-based geophysical precision radiometry is described. Mathematical models exhibiting the dependence of system and scene temperatures and system sensitivity on phased array antenna parameters and components such as phase shifters and low noise amplifiers (LNA) are developed. Emphasis is given to minimum noise temperature designs wherein the LNA's are located at the array level, one per element or subarray. Two types of combiners are considered: array lenses (space feeds) and corporate networks. The result of a survey of suitable components and devices is described. The data obtained from that survey are used in conjunction with the mathematical models to yield an assessment of effective array antenna noise temperature for representative geostationary and low Earth orbit systems. Practical methods of calibrating a space-based, phased array radiometer are briefly addressed as well.
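The case for placing LNAs at the array (element or subarray) level follows directly from the Friis cascade formula; a minimal sketch with illustrative gain and loss values (not taken from the survey data):

```python
def cascade_noise_temp(stages) -> float:
    """Friis formula for cascaded stages given as (linear_gain, noise_temp_K):
    T_sys = T1 + T2/G1 + T3/(G1*G2) + ..."""
    t_sys, gain = 0.0, 1.0
    for g, t in stages:
        t_sys += t / gain
        gain *= g
    return t_sys

lna = (100.0, 40.0)      # 20 dB gain, 40 K noise temperature (illustrative)
combiner = (0.5, 290.0)  # 3 dB lossy combiner at ambient (illustrative)

ahead = cascade_noise_temp([lna, combiner])    # LNA before the combiner
behind = cascade_noise_temp([combiner, lna])   # LNA after the combiner
# ahead ~ 43 K, behind ~ 370 K: the first stage dominates, which is why
# element-level LNAs minimize the radiometer's noise temperature.
```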

  14. Virtual auditorium concepts for exhibition halls

    NASA Astrophysics Data System (ADS)

    Evans, Jack; Himmel, Chad; Knight, Sarah

    2002-11-01

    Many communities lack good performance facilities for symphonic music, opera, dramatic and musical arts, but have basic convention, exhibition or assembly spaces. It should be possible to develop performance space environments within large multipurpose facilities that will accommodate production and presentation of dramatic arts. Concepts for moderate-cost, temporary enhancements that transform boxy spaces into more intimate, acoustically articulated venues will be presented. Acoustical criteria and design parameters will be discussed in the context of creating a virtual auditorium within the building envelope. Physical, economic, and logistical limitations affect implementation. Sound reinforcement system augmentation can supplement the room conversion. Acceptable control of reflection patterns, reverberation, and to some extent, ambient noise, may be achieved with an array of nonpermanent reflector and absorber elements. These elements can sculpture an enclosure to approach the shape and acoustic characteristics of an auditorium. Plan and section illustrations will be included.
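The reverberation budget behind such temporary enhancements can be sized with the Sabine equation; the hall volume and absorption figures below are illustrative, not drawn from the paper:

```python
def sabine_rt60(volume_m3: float, absorption_m2_sabins: float) -> float:
    """Sabine reverberation time RT60 = 0.161 * V / A (SI units), where A is
    the sum of surface areas times their absorption coefficients."""
    return 0.161 * volume_m3 / absorption_m2_sabins

# A bare 20,000 m^3 exhibition hall with A = 1,600 m^2 sabins:
bare = sabine_rt60(20_000.0, 1_600.0)             # ~2.0 s, too live for drama
# Adding 1,600 m^2 of drapery at alpha = 0.6 (960 m^2 sabins more):
treated = sabine_rt60(20_000.0, 1_600.0 + 960.0)  # ~1.26 s, closer to theatre use
```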

  15. Determinant quantum Monte Carlo study of the two-dimensional single-band Hubbard-Holstein model

    DOE PAGES

    Johnston, S.; Nowadnick, E. A.; Kung, Y. F.; ...

    2013-06-24

    Here, we performed numerical studies of the Hubbard-Holstein model in two dimensions using determinant quantum Monte Carlo (DQMC). We also present details of the method, emphasizing the treatment of the lattice degrees of freedom, and then study the filling and behavior of the fermion sign as a function of model parameters. We find a region of parameter space with large Holstein coupling where the fermion sign recovers despite large values of the Hubbard interaction. This indicates that studies of correlated polarons at finite carrier concentrations are likely accessible to DQMC simulations. We then restrict ourselves to the half-filled model and examine the evolution of the antiferromagnetic structure factor, other metrics for antiferromagnetic and charge-density-wave order, and the energetics of the electronic and lattice degrees of freedom as a function of electron-phonon coupling. From this we find further evidence for a competition between charge-density-wave and antiferromagnetic order at half-filling.

  16. SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dormody, M.; Johnson, R. P.; Atwood, W. B.

    2011-12-01

    We quantitatively establish the sensitivity to the detection of young to middle-aged, isolated, gamma-ray pulsars through blind searches of Fermi Large Area Telescope (LAT) data using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect of positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.
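The time-differencing idea can be sketched in toy form: rather than transforming the full photon time series, FFT the histogram of pairwise arrival-time differences shorter than a fixed window, which keeps the transform small while preserving the pulsation peak. This is illustrative only; the actual LAT search also handles frequency derivatives, exposure and position corrections:

```python
import numpy as np

def time_diff_search(toas, f_max=64.0, window=64.0):
    """Toy time-differencing periodicity search on photon arrival times."""
    dt = 0.5 / f_max                     # Nyquist sampling for f_max
    nbins = int(window / dt)
    d = (toas[None, :] - toas[:, None]).ravel()
    d = d[(d > 0.0) & (d < window)]      # keep differences inside the window
    hist, _ = np.histogram(d, bins=nbins, range=(0.0, window))
    power = np.abs(np.fft.rfft(hist)) ** 2
    freqs = np.fft.rfftfreq(nbins, d=dt)
    return freqs, power

# Simulated 2 Hz pulsar: photons near integer multiples of 0.5 s.
rng = np.random.default_rng(1)
toas = np.sort(0.5 * rng.integers(0, 2000, 3000) + 0.05 * rng.random(3000))
freqs, power = time_diff_search(toas)
mask = freqs > 1.0                       # skip the low-frequency envelope
peak_freq = freqs[mask][np.argmax(power[mask])]
```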

  17. Electrostatics of lipid bilayer bending.

    PubMed Central

    Chou, T; Jarić, M V; Siggia, E D

    1997-01-01

    The electrostatic contribution to spontaneous membrane curvature is calculated within Poisson-Boltzmann theory under a variety of assumptions and emphasizing parameters in the physiological range. Asymmetrical surface charges can be fixed with respect to bilayer midplane area or with respect to the lipid-water area, but induce curvatures of opposite signs. Unequal screening layers on the two sides of a vesicle (e.g., multivalent cationic proteins on one side and monovalent salt on the other) also induce bending. For reasonable parameters, tubules formed by electrostatically induced bending can have radii in the 50-100-nm range, often seen in many intracellular organelles. Thus membrane associated proteins may induce curvature and subsequent budding, without themselves being intrinsically curved. Furthermore, we derive the previously unexplored effects of respecting the strict conservation of charge within the interior of a vesicle. The electrostatic component of the bending modulus is small under most of our conditions and is left as an experimental parameter. The large parameter space of conditions is surveyed in an array of graphs. PMID: 9129807
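The screening lengths that set the range of these electrostatic bending effects follow from the Poisson-Boltzmann linearization; a sketch for a 1:1 electrolyte at physiological-like conditions (the specific concentrations are illustrative):

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
KB = 1.380649e-23           # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C
NA = 6.02214076e23          # Avogadro constant, 1/mol

def debye_length_nm(c_molar: float, eps_r: float = 78.5,
                    T: float = 298.15) -> float:
    """Debye screening length (nm) in a 1:1 electrolyte at c mol/L:
    lambda_D = sqrt(eps0*eps_r*kB*T / (2*n*e^2)), n = per-species density."""
    n_total = 2.0 * NA * c_molar * 1000.0  # total ion number density, 1/m^3
    return math.sqrt(EPS0 * eps_r * KB * T / (E_CHARGE**2 * n_total)) * 1e9

# ~0.96 nm at 100 mM monovalent salt: screening is short-ranged compared
# with the 50-100 nm tubule radii quoted above, so curvature is set by the
# interplay of surface charge with this thin double layer.
lam = debye_length_nm(0.1)
```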

  18. Aeroelastic Flutter Behavior of a Cantilever and Elastically Mounted Plate within a Nozzle-Diffuser Geometry

    NASA Astrophysics Data System (ADS)

    Tosi, Luis Phillipe; Colonius, Tim; Lee, Hyeong Jae; Sherrit, Stewart; Jet Propulsion Laboratory Collaboration; California Institute of Technology Collaboration

    2016-11-01

    Aeroelastic flutter arises when the motion of a structure and its surrounding flowing fluid are coupled in a constructive manner, causing large amplitudes of vibration in the immersed solid. A cantilevered beam in axial flow within a nozzle-diffuser geometry exhibits interesting resonance behavior that presents good prospects for internal flow energy harvesting. Different modes can be excited as a function of throat velocity, nozzle geometry, fluid and cantilever material parameters. Similar behavior has been also observed in elastically mounted rigid plates, enabling new designs for such devices. This work explores the relationship between the aeroelastic flutter instability boundaries and relevant non-dimensional parameters via experiments, numerical, and stability analyses. Parameters explored consist of a non-dimensional stiffness, a non-dimensional mass, non-dimensional throat size, and Reynolds number. A map of the system response in this parameter space may serve as a guide to future work concerning possible electrical output and failure prediction in harvesting devices.
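The non-dimensional groups named above can be formed, for a thin rectangular plate in axial flow, roughly as follows; these are one conventional choice of definitions, and the paper's exact normalizations may differ:

```python
def flutter_groups(rho_f, U, L, rho_s, h, E, nu=0.3, mu_f=1.0e-3):
    """Common non-dimensional groups for a thin plate of length L and
    thickness h in axial flow of speed U (illustrative definitions)."""
    mass_ratio = rho_f * L / (rho_s * h)   # fluid-to-solid mass ratio
    D = E * h**3 / (12.0 * (1.0 - nu**2))  # plate bending stiffness
    stiffness = D / (rho_f * U**2 * L**3)  # non-dimensional rigidity
    reynolds = rho_f * U * L / mu_f        # Reynolds number
    return mass_ratio, stiffness, reynolds

# Water over a 40 mm steel strip, 0.2 mm thick, at 5 m/s (made-up numbers):
m_star, k_star, re = flutter_groups(1000.0, 5.0, 0.04, 7800.0, 2e-4, 200e9)
```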

  19. Effects of spatial constraints on channel network topology: Implications for geomorphological inference

    NASA Astrophysics Data System (ADS)

    Cabral, Mariza Castanheira De Moura Da Costa

    In the fifty-two years since Robert Horton's 1945 pioneering quantitative description of channel network planform (or plan view morphology), no conclusive findings have been presented that permit inference of geomorphological processes from any measures of network planform. All measures of network planform studied exhibit limited geographic variability across different environments. Horton (1945), Langbein et al. (1947), Schumm (1956), Hack (1957), Melton (1958), and Gray (1961) established various "laws" of network planform, that is, statistical relationships between different variables which have limited variability. A wide variety of models which have been proposed to simulate the growth of channel networks in time over a landsurface are generally also in agreement with the above planform laws. An explanation is proposed for the generality of the channel network planform laws. Channel networks must be space filling, that is, they must extend over the landscape to drain every hillslope, leaving no large undrained areas, and with no crossing of channels, often achieving a roughly uniform drainage density in a given environment. It is shown that the space-filling constraint can reduce the sensitivity of planform variables to different network growth models, and it is proposed that this constraint may determine the planform laws. The "Q model" of network growth of Van Pelt and Verwer (1985) is used to generate samples of networks. Sensitivity to the model parameter Q is markedly reduced when the networks generated are required to be space filling. For a wide variety of Q values, the space-filling networks are in approximate agreement with the various channel network planform laws. Additional constraints, including energy efficiency, were not studied but may further reduce the variability of planform laws. Inference of model parameter Q from network topology is successful only in networks not subject to spatial constraints. 
In space-filling networks, for a wide range of Q values, the maximum-likelihood Q parameter value is generally in the vicinity of 1/2, which yields topological randomness. It is proposed that space filling gives rise to the appearance of randomness in channel network topology, and may cause difficulties for geomorphological inference from network planform.

  20. Ab Initio Crystal Field for Lanthanides.

    PubMed

    Ungur, Liviu; Chibotaru, Liviu F

    2017-03-13

    An ab initio methodology for the first-principles derivation of crystal-field (CF) parameters for lanthanides is described. The methodology is applied to the analysis of CF parameters in [Tb(Pc)2]- (Pc = phthalocyanine) and Dy4K2 ([Dy4K2O(OtBu)12]) complexes, and compared with often-used approximate and model descriptions. It is found that the application of geometry symmetrization, and the use of electrostatic point-charge and phenomenological CF models, lead to unacceptably large deviations from predictions based on ab initio calculations for the experimental geometry. It is shown how the predictions of standard CASSCF (Complete Active Space Self-Consistent Field) calculations (with 4f orbitals in the active space) can be systematically improved by including effects of dynamical electronic correlation (CASPT2 step) and by admixing electronic configurations of the 5d shell. This is exemplified for the well-studied Er-trensal complex (H3trensal = 2,2',2''-tris(salicylideneimido)trimethylamine). The electrostatic contributions to CF parameters in this complex, calculated with true charge distributions in the ligands, yield less than half of the total CF splitting, thus pointing to the dominant role of covalent effects. This analysis allows the conclusion that the ab initio crystal field is an essential tool for a decent description of lanthanides.
