Fletcher, Patrick; Bertram, Richard; Tabak, Joel
2016-06-01
Models of electrical activity in excitable cells involve nonlinear interactions between many ionic currents. Changing parameters in these models can produce a variety of activity patterns with sometimes unexpected effects. Furthermore, introducing new currents will have different effects depending on the initial parameter set. In this study we combined global sampling of parameter space and local analysis of representative parameter sets in a pituitary cell model to understand the effects of adding K(+) conductances, which mediate some effects of hormone action on these cells. Global sampling ensured that the effects of introducing K(+) conductances were captured across a wide variety of contexts of model parameters. For each type of K(+) conductance we determined the types of behavioral transition that it evoked. Some transitions were counterintuitive, and may have been missed without the use of global sampling. In general, the wide range of transitions that occurred when the same current was applied to the model cell at different locations in parameter space highlights the challenge of making accurate model predictions in light of cell-to-cell heterogeneity. Finally, we used bifurcation analysis and fast/slow analysis to investigate why specific transitions occur in representative individual models. This approach relies on the use of a graphics processing unit (GPU) to quickly map parameter space to model behavior and to identify parameter sets for further analysis. Acceleration with modern low-cost GPUs is particularly well suited to exploring the moderate-sized (5-20 parameter) spaces of excitable cell and signaling models.
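The vectorized mapping of parameter space to behavior that this abstract describes can be sketched as follows. The snippet is a hypothetical stand-in: a toy two-parameter classification rule replaces the authors' pituitary cell ODE model, and NumPy vectorization stands in for the GPU kernel.

```python
import numpy as np

# Global sampling sketch: classify model behavior over randomly sampled
# parameter sets, vectorized over the whole sample at once. The "model"
# here is a toy stand-in, not the authors' pituitary cell model.

rng = np.random.default_rng(0)
n_sets = 10_000

# Sample two illustrative conductance parameters uniformly (arbitrary units).
g_k = rng.uniform(0.0, 10.0, n_sets)
g_ca = rng.uniform(0.0, 10.0, n_sets)

# Toy classification rule standing in for simulating each parameter set;
# the real study integrates the ODEs and inspects the voltage trace.
ratio = g_k / (g_ca + 1e-9)
behavior = np.where(ratio > 2.0, "silent",
                    np.where(ratio > 0.5, "spiking", "bursting"))

# Summarize how the behaviors tile the sampled parameter space.
counts = {b: int((behavior == b).sum()) for b in ("silent", "spiking", "bursting")}
print(counts)
```

Parameter sets landing in an unexpected class would then be pulled out for the local bifurcation analysis described above.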
Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates
NASA Astrophysics Data System (ADS)
Ashton, G.; Prix, R.
2018-05-01
Leveraging Markov chain Monte Carlo optimization of the F-statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide parameter-space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, which is critical to the success of the method. Monte Carlo tests with simulated signals in noise demonstrate that this method comes close to the theoretically optimal performance.
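The Metropolis-style follow-up of a single candidate can be illustrated with a toy one-dimensional sketch. Everything below is invented for illustration: the Gaussian "detection statistic" is a stand-in for the F-statistic, and the injected frequency and step sizes are arbitrary.

```python
import math
import random

# Toy MCMC follow-up over one signal parameter (frequency), starting from
# a coarse candidate and homing in on the statistic's peak.
random.seed(1)
f_true = 100.0  # hypothetical injected signal frequency

def stat(f):
    # Toy unimodal "detection statistic" peaking at the injected frequency;
    # the real method uses the F-statistic computed from detector data.
    return math.exp(-0.5 * ((f - f_true) / 0.01) ** 2)

f = 100.05        # starting point from a coarse semicoherent search
step = 0.005
chain = []
for _ in range(5000):
    f_new = f + random.gauss(0.0, step)
    # Metropolis rule: always accept improvements, otherwise accept with
    # probability given by the ratio of the statistics.
    if stat(f_new) >= stat(f) or random.random() < stat(f_new) / max(stat(f), 1e-300):
        f = f_new
    chain.append(f)

# The late chain concentrates near the true frequency.
estimate = sum(chain[2500:]) / len(chain[2500:])
print(estimate)
```

The chain samples then double as the parameter-estimation posterior mentioned in the abstract.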
Bursting endemic bubbles in an adaptive network
NASA Astrophysics Data System (ADS)
Sherborne, N.; Blyuss, K. B.; Kiss, I. Z.
2018-04-01
The spread of an infectious disease is known to change people's behavior, which in turn affects the spread of disease. Adaptive network models that account for both epidemic and behavioral change have found oscillations, but in an extremely narrow region of the parameter space, which contrasts with intuition and available data. In this paper we propose a simple susceptible-infected-susceptible epidemic model on an adaptive network with time-delayed rewiring, and show that oscillatory solutions are now present in a wide region of the parameter space. Altering the transmission or rewiring rates reveals the presence of an endemic bubble—an enclosed region of the parameter space where oscillations are observed.
Parameter-space metric of semicoherent searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Pletsch, Holger J.
2010-08-01
Continuous gravitational-wave (CW) signals, such as those emitted by spinning neutron stars, are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for previously unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical "semicoherent" search strategies divide the data into segments much shorter than one year, which are analyzed coherently; detection statistics from different segments are then combined incoherently. To perform the incoherent combination optimally, an understanding of the underlying parameter-space structure is required. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. Of the search parameters (sky position, frequency, and frequency derivatives), only the metric resolution in the frequency derivatives is found to increase significantly with the number of segments.
Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E
2017-12-12
A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space, as opposed to local values or gradients, leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be identified trivially. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane, as well as the density and heat of vaporization of the liquid at atmospheric pressure, for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
Sensitivity study of Space Station Freedom operations cost and selected user resources
NASA Technical Reports Server (NTRS)
Accola, Anne; Fincannon, H. J.; Williams, Gregory J.; Meier, R. Timothy
1990-01-01
The results of sensitivity studies performed to estimate probable ranges for four key Space Station parameters using the Space Station Freedom's Model for Estimating Space Station Operations Cost (MESSOC) are discussed. The variables examined are grouped into five main categories: logistics, crew, design, space transportation system, and training. The modification of these variables implies programmatic decisions in areas such as orbital replacement unit (ORU) design, investment in repair capabilities, and crew operations policies. The model utilizes a wide range of algorithms and an extensive trial logistics data base to represent Space Station operations. The trial logistics data base consists largely of a collection of the ORUs that comprise the mature station, and their characteristics based on current engineering understanding of the Space Station. A nondimensional approach is used to examine the relative importance of variables on parameters.
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required in model selection, remains time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space degrades learning results. We therefore mathematically investigate the condition under which a feature map yields an asymptotically equivalent convergence point of the estimated parameters; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive the length necessary for parameter learning in hidden Markov models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weigand, Steven J.; Keane, Denis T.
The DuPont-Northwestern-Dow Collaborative Access Team (DND-CAT) built and currently manages sector 5 at the Advanced Photon Source (APS), Argonne National Laboratory. One of the principal techniques supported by DND-CAT is Small and Wide-Angle X-ray Scattering (SAXS/WAXS), with an emphasis on simultaneous data collection over a wide azimuthal and reciprocal space range using a custom SAXS/WAXS detector system. A new triple detector system is now in development, and we describe the key parameters and characteristics of the new instrument, which will be faster, more flexible, more robust, and will improve q-space resolution in a critical reciprocal space regime between the traditional WAXS and SAXS ranges.
NASA Technical Reports Server (NTRS)
Weisskopf, M. C.; Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.
2010-01-01
We present a progress report on the various endeavors we are undertaking at MSFC in support of the Wide Field X-Ray Telescope development. In particular we discuss assembly and alignment techniques, in-situ polishing corrections, and the results of our efforts to optimize mirror prescriptions including polynomial coefficients, relative shell displacements, detector placements and tilts. This optimization does not require a blind search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough that second order expansions are valid, we show that the performance at the detector can be expressed as a quadratic function with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The optimal values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero.
Library of Giant Planet Reflection Spectra for WFirst and Future Space Telescopes
NASA Astrophysics Data System (ADS)
Smith, Adam J. R. W.; Fortney, Jonathan; Morley, Caroline; Batalha, Natasha E.; Lewis, Nikole K.
2018-01-01
Future large space telescopes will be able to directly image exoplanets in optical light. The optical light of a resolved planet is stellar flux reflected by Rayleigh scattering or cloud scattering, with absorption features imprinted by molecular bands in the planetary atmosphere. To aid in the design of such missions, and to better understand a wide range of giant planet atmospheres, we have built a library of model giant planet reflection spectra, for the purpose of determining effective methods of spectral analysis as well as for comparison with actual imaged objects. This library covers a wide range of parameters: objects are modeled at ten orbital distances between 0.5 AU and 5.0 AU, ranging from planets too warm for water clouds out to true Jupiter analogs. The calculations include six metallicities between solar and 100x solar, a variety of cloud thickness parameters, and all possible phase angles.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
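The conventional SA loop just described can be sketched on a toy two-dimensional objective. All constants (cooling rate, shrink rate, bounds) are illustrative choices; this is not the RBSA implementation.

```python
import math
import random

# Conventional simulated annealing on a toy objective with minimum at (1, -2).
random.seed(2)

def objective(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

x, y = random.uniform(-5, 5), random.uniform(-5, 5)   # random starting configuration
cur = objective(x, y)
temp, radius = 1.0, 2.0          # annealing temperature and search-region size
for _ in range(4000):
    temp *= 0.999                # lower the temperature ...
    radius *= 0.999              # ... and shrink the selection region
    xn = x + random.uniform(-radius, radius)
    yn = y + random.uniform(-radius, radius)
    cand = objective(xn, yn)
    # Always accept improvements; accept worse configurations with a
    # temperature-dependent probability (the annealing step).
    if cand < cur or random.random() < math.exp(-(cand - cur) / max(temp, 1e-12)):
        x, y, cur = xn, yn, cand

print(x, y, cur)
```

The RBSA variant described below replaces this single trajectory with many branching trajectories explored in parallel.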
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
Aggarwal, Ankush
2017-08-01
Motivated by the well-known result that the stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space, which significantly improve the convergence of parameter estimation and its robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of becoming trapped in a local minimum. Based on this new insight, we also propose a transformed parameter space that allows for rational parameter comparison and avoids misleading conclusions regarding soft tissue mechanics.
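The covariance between exponential parameters can be illustrated numerically. The stress-strain law sigma = a*(exp(b*eps) - 1) and all numbers below are illustrative choices, not taken from the paper: a deliberately wrong stiffness exponent b, with the prefactor a refitted, reproduces the reference curve almost exactly.

```python
import math

# Illustration of high parameter covariance in an exponential
# stress-strain law sigma = a * (exp(b * eps) - 1).

eps = [i * 0.005 for i in range(21)]          # strains 0 .. 0.10

def stress(a, b):
    return [a * (math.exp(b * e) - 1.0) for e in eps]

ref = stress(0.2, 10.0)                       # "true" parameters

# Take a stiffness exponent 20% off, then find the prefactor a that best
# reproduces the reference curve (closed-form 1-D least squares over a).
b_alt = 12.0
num = sum(r * (math.exp(b_alt * e) - 1.0) for r, e in zip(ref, eps))
den = sum((math.exp(b_alt * e) - 1.0) ** 2 for e in eps)
a_alt = num / den

alt = stress(a_alt, b_alt)
rms = math.sqrt(sum((r - s) ** 2 for r, s in zip(ref, alt)) / len(eps))
rel = rms / max(ref)                          # relative RMS mismatch
print(a_alt, rel)
```

Despite the 20% change in b and a substantial compensating change in a, the relative mismatch between the two curves stays at the percent level, which is the flat valley in parameter space that the abstract describes.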
Methods of Optimizing X-Ray Optical Prescriptions for Wide-Field Applications
NASA Technical Reports Server (NTRS)
Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C.
2010-01-01
We are working on the development of a method for optimizing wide-field x-ray telescope mirror prescriptions, including polynomial coefficients, mirror shell relative displacements, and (assuming 4 focal plane detectors) detector placement and tilt, that does not require a search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough that second order expansions are valid, we show that the performance at the detector surface can be expressed as a quadratic function of the parameters with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The best values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero. We describe the present status of this development effort.
Weak Lensing from Space I: Instrumentation and Survey Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhodes, Jason; Refregier, Alexandre; Massey, Richard
A wide field space-based imaging telescope is necessary to fully exploit the technique of observing dark matter via weak gravitational lensing. This first paper in a three part series outlines the survey strategies and relevant instrumental parameters for such a mission. As a concrete example of hardware design, we consider the proposed Supernova/Acceleration Probe (SNAP). Using SNAP engineering models, we quantify the major contributions to this telescope's Point Spread Function (PSF). These PSF contributions are relevant to any similar wide field space telescope. We further show that the PSF of SNAP or a similar telescope will be smaller than current ground-based PSFs, and more isotropic and stable over time than the PSF of the Hubble Space Telescope. We outline survey strategies for two different regimes: a "wide" 300 square degree survey and a "deep" 15 square degree survey that will accomplish various weak lensing goals, including statistical studies and dark matter mapping.
2013-08-15
InAsSb, compositionally graded buffer, MBE, infrared, minority carrier lifetime, reciprocal space mapping
Ding Wang, Dmitry Donetsky, Youxi Lin, Gela...
GaSb-based III-V materials are widely used in the development of mid-infrared... reciprocal space mapping (RSM) at the symmetric (004) and asymmetric (335) Bragg reflections. Figure 3 presents a set of RSM measurements for a structure...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaut, Arkadiusz
We present the results of the estimation of parameters with LISA for nearly monochromatic gravitational waves in the low and high frequency regimes for the time-delay interferometry response. The angular resolution of the detector and the estimation errors of the signal's parameters in the high frequency regime are calculated as functions of the position in the sky and of the frequency. For the long-wavelength domain we give compact formulas for the estimation errors, valid over a wide range of the parameter space.
NASA Astrophysics Data System (ADS)
Lazar, M.; Shaaban, S. M.; Fichtner, H.; Poedts, S.
2018-02-01
Electron velocity distributions measured in space plasmas reveal two central components: a thermal bi-Maxwellian core and a bi-Kappa suprathermal halo. A new kinetic approach is proposed to characterize the temperature anisotropy instabilities driven by the interplay of core and halo electrons. Suggested by the observations in the solar wind, direct correlations of these two populations are introduced as co-variations of the key parameters, e.g., densities, temperature anisotropies, and (parallel) plasma betas. The approach involving correlations enables the instability characterization in terms of either the core or halo parameters and a comparative analysis to depict mutual effects. In the present paper, the instability conditions are described for an extended range of plasma beta parameters, making the new dual approach relevant for a wide variety of space plasmas, including the solar wind and planetary magnetospheres.
NASA Astrophysics Data System (ADS)
Cabral, Mariza Castanheira De Moura Da Costa
In the fifty-two years since Robert Horton's 1945 pioneering quantitative description of channel network planform (or plan view morphology), no conclusive findings have been presented that permit inference of geomorphological processes from any measures of network planform. All measures of network planform studied exhibit limited geographic variability across different environments. Horton (1945), Langbein et al. (1947), Schumm (1956), Hack (1957), Melton (1958), and Gray (1961) established various "laws" of network planform, that is, statistical relationships between different variables which have limited variability. A wide variety of models which have been proposed to simulate the growth of channel networks in time over a land surface are generally also in agreement with the above planform laws. An explanation is proposed for the generality of the channel network planform laws. Channel networks must be space filling, that is, they must extend over the landscape to drain every hillslope, leaving no large undrained areas, and with no crossing of channels, often achieving a roughly uniform drainage density in a given environment. It is shown that the space-filling constraint can reduce the sensitivity of planform variables to different network growth models, and it is proposed that this constraint may determine the planform laws. The "Q model" of network growth of Van Pelt and Verwer (1985) is used to generate samples of networks. Sensitivity to the model parameter Q is markedly reduced when the networks generated are required to be space filling. For a wide variety of Q values, the space-filling networks are in approximate agreement with the various channel network planform laws. Additional constraints, such as energy efficiency, were not studied but may further reduce the variability of planform laws. Inference of the model parameter Q from network topology is successful only in networks not subject to spatial constraints.
In space-filling networks, for a wide range of Q values, the maximum-likelihood Q parameter value is generally in the vicinity of 1/2, which yields topological randomness. It is proposed that space filling gives rise to the appearance of randomness in channel network topology, and may cause difficulties for geomorphological inference from network planform.
NASA Astrophysics Data System (ADS)
Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley
2017-10-01
Current and upcoming radio interferometric experiments are aiming to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds fast enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on σ8. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
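The emulator-inside-MCMC idea can be sketched with a toy one-parameter problem. Everything below is invented for illustration: the "expensive" model is a cheap analytic function, and the surrogate is piecewise-linear interpolation rather than the machine-learning emulators (e.g. Gaussian processes or neural networks) used in practice.

```python
import math
import random

random.seed(3)

def expensive_model(theta):
    # Stand-in for a slow simulation that maps a parameter to a summary
    # statistic; monotonic on [0, 2] so the toy posterior is unimodal.
    return math.sin(theta) + 0.5 * theta

# Train the surrogate on a handful of expensive runs over a coarse grid ...
train_t = [i * 0.25 for i in range(9)]        # theta in [0, 2]
train_y = [expensive_model(t) for t in train_t]

def emulator(theta):
    # ... then interpolate between them (a minimal emulator).
    theta = min(max(theta, train_t[0]), train_t[-1])
    i = min(int(theta / 0.25), len(train_t) - 2)
    w = (theta - train_t[i]) / 0.25
    return (1 - w) * train_y[i] + w * train_y[i + 1]

# Metropolis sampling of the parameter that reproduces an "observed"
# statistic, calling only the cheap emulator inside the loop.
y_obs, sigma = expensive_model(0.8), 0.05

def loglike(theta):
    return -0.5 * ((emulator(theta) - y_obs) / sigma) ** 2

t, chain = 1.5, []
for _ in range(4000):
    tn = t + random.gauss(0.0, 0.1)
    if 0.0 <= tn <= 2.0 and random.random() < math.exp(min(0.0, loglike(tn) - loglike(t))):
        t = tn
    chain.append(t)

post_mean = sum(chain[2000:]) / 2000
print(post_mean)
```

The design point is that the expensive model is evaluated only nine times up front, while the MCMC loop runs thousands of cheap surrogate evaluations.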
NASA Astrophysics Data System (ADS)
Howe, Alex R.; Burrows, Adam; Deming, Drake
2017-01-01
As an example analysis, we explore how to optimize observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) for atmospheric characterization, based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations, and evaluate their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.
Characterizing the Space Debris Environment with a Variety of SSA Sensors
NASA Technical Reports Server (NTRS)
Stansbery, Eugene G.
2010-01-01
Damaging space debris spans a wide range of sizes and altitudes, so no single method or sensor can fully characterize the space debris environment. Space debris researchers use a variety of radars and optical telescopes to characterize the space debris environment in terms of number, altitude, and inclination distributions. Some sensors, such as phased array radars, are designed to search a large volume of the sky and can be instrumental in detecting new breakups and in cataloging and precisely tracking relatively large debris. For smaller debris sizes, more sensitivity is needed, which can be provided, in part, by large antenna gains. Larger antenna gains, however, produce smaller fields of view; the result is statistical measurements of the debris environment with less precise orbital parameters. At higher altitudes, optical telescopes become the more sensitive instruments and present their own measurement difficulties. Space Situational Awareness, or SSA, is concerned with more than the number and orbits of satellites. SSA also seeks to understand such parameters as the function, shape, and composition of operational satellites. Similarly, debris researchers are seeking to characterize similar parameters for space debris, to improve our knowledge of the risks debris poses to operational satellites and to determine sources of debris for future mitigation. This paper will discuss different sensors and sensor types and the role that each plays in fully characterizing the space debris environment.
Radiative Cooling of Warm Molecular Gas
NASA Technical Reports Server (NTRS)
Neufeld, David A.; Kaufman, Michael J.
1993-01-01
We consider the radiative cooling of warm (T >= 100 K), fully molecular astrophysical gas by rotational and vibrational transitions of the molecules H2O, CO, and H2. Using an escape probability method to solve for the molecular level populations, we have obtained the cooling rate for each molecule as a function of temperature, density, and an optical depth parameter. A four-parameter expression proves useful in fitting the run of cooling rate with density for any fixed values of the temperature and optical depth parameter. We identify the various cooling mechanisms which are dominant in different regions of the astrophysically relevant parameter space. Given the assumption that water is very abundant in warm regions of the interstellar medium, H2O rotational transitions are found to dominate the cooling of warm interstellar gas over a wide portion of the parameter space considered. While chemical models for the interstellar medium make the strong prediction that water will be produced copiously at temperatures above a few hundred degrees, our assumption of a high water abundance has yet to be tested observationally. The Infrared Space Observatory and the Submillimeter Wave Astronomy Satellite will prove ideal instruments for testing whether water is indeed an important coolant of interstellar and circumstellar gas.
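The fit of cooling rate against density can be sketched with a four-parameter interpolation of the kind commonly used for such work. The functional form below is a plausible reconstruction, and the parameter values are purely illustrative; the actual fitted expressions and coefficients for H2O, CO, and H2 are tabulated in the paper itself.

```python
import math

# Hedged sketch of a four-parameter fit of the cooling rate coefficient
# L(n) versus collider density n, interpolating between the low-density
# limit (L -> L0) and the LTE limit (L -> L_lte / n).
def cooling_coefficient(n, L0, L_lte, n_half, alpha):
    inv = (1.0 / L0
           + n / L_lte
           + (1.0 / L0) * (n / n_half) ** alpha * (1.0 - n_half * L0 / L_lte))
    return 1.0 / inv

# Illustrative parameter values (not from the paper's tables):
L0, L_lte, n_half, alpha = 1e-24, 1e-18, 1e5, 0.45

low = cooling_coefficient(1.0, L0, L_lte, n_half, alpha)    # ~L0
high = cooling_coefficient(1e12, L0, L_lte, n_half, alpha)  # ~L_lte / n
print(low, high)
```

The middle term, controlled by n_half and alpha, shapes the transition between the two limiting regimes, which is where the fitted parameters carry the physics.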
Impact of Ice Morphology on Design Space of Pharmaceutical Freeze-Drying.
Goshima, Hiroshika; Do, Gabsoo; Nakagawa, Kyuya
2016-06-01
It is known that the sublimation kinetics of a freeze-drying product are affected by its internal ice crystal microstructures. This article demonstrates the impact of the ice morphologies of a frozen formulation in a vial on the design space for the primary drying of a pharmaceutical freeze-drying process. Cross-sectional images of frozen sucrose-bovine serum albumin aqueous solutions were observed optically and digital pictures were acquired. Binary images were obtained from the optical data to extract the geometrical parameters (i.e., ice crystal size and tortuosity) that relate to the mass-transfer resistance of water vapor during the primary drying step. A mathematical model was used to simulate the primary drying kinetics and provided the design space for the process. The simulation results predicted that the geometrical parameters of frozen solutions significantly affect the design space, with large and less tortuous ice morphologies resulting in wide design spaces and vice versa. The optimal applicable drying conditions are influenced by the ice morphologies. Therefore, owing to the spatial distributions of the geometrical parameters of a product, the boundary curves of the design space are variable and could be tuned by controlling the ice morphologies.
Updated MDRIZTAB Parameters for ACS/WFC
NASA Astrophysics Data System (ADS)
Hoffman, S. L.; Avila, R. J.
2017-03-01
The Mikulski Archive for Space Telescopes (MAST) pipeline performs geometric distortion corrections, associated image combinations, and cosmic ray rejections with AstroDrizzle. The MDRIZTAB reference table contains a list of relevant parameters that control this program. This document details our photometric analysis of Advanced Camera for Surveys Wide Field Channel (ACS/WFC) data processed by AstroDrizzle. Based on this analysis, we update the MDRIZTAB table to improve the quality of the drizzled products delivered by MAST.
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates, to a level on par with the best solutions obtained from the population-based methods, while maintaining high computational speed. This suggests that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms leaves the parameter search space vastly large.
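The hybrid strategy, a fast global stage refined by local search, can be sketched on a toy one-parameter "gene circuit". The decay model and all numbers below are illustrative, not from the study.

```python
import math
import random

random.seed(4)

def model(k, t):
    # Toy one-gene circuit: mRNA decaying at rate k.
    return math.exp(-k * t)

times = [0.0, 0.5, 1.0, 1.5, 2.0]
data = [model(0.7, t) for t in times]   # synthetic "measured" time series

def cost(k):
    # Least-squares mismatch between model and data.
    return sum((model(k, t) - d) ** 2 for t, d in zip(times, data))

# Stage 1: cheap global search, a small random population over a wide
# search space (standing in for an evolutionary/population method).
pop = [random.uniform(0.01, 5.0) for _ in range(50)]
k = min(pop, key=cost)

# Stage 2: local refinement by shrinking-step hill climbing.
step = 0.1
while step > 1e-6:
    moved = False
    for kn in (k - step, k + step):
        if kn > 0 and cost(kn) < cost(k):
            k, moved = kn, True
    if not moved:
        step *= 0.5

print(k, cost(k))
```

Stage 1 alone typically lands within ~0.1 of the true rate; the local stage then polishes the estimate to near machine precision at a negligible extra cost, which is the quality gain the abstract attributes to hybrid methods.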
Phase transitions in tumor growth: V. What can be expected from cancer glycolytic oscillations?
NASA Astrophysics Data System (ADS)
Martin, R. R.; Montero, S.; Silva, E.; Bizzarri, M.; Cocho, G.; Mansilla, R.; Nieto-Villar, J. M.
2017-11-01
Experimental evidence confirms the existence of glycolytic oscillations in cancer, which allow it to self-organize in time and space far from thermodynamic equilibrium and provide it with high robustness, complexity and adaptability. A kinetic model is proposed for HeLa tumor cells grown in hypoxia conditions. It shows oscillations over a wide range of parameters. Two control parameters (glucose and inorganic phosphate concentration) were varied to explore the phase space, revealing the presence of limit cycles and bifurcations. The complexity of the system was evaluated by focusing on stationary state stability and Lempel-Ziv complexity. Moreover, the calculated entropy production rate was shown to behave as a Lyapunov function.
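How varying a control parameter carries a kinetic model across a bifurcation into sustained oscillations can be illustrated with a generic toy oscillator; the Brusselator below is a stand-in, not the HeLa glycolysis model of the paper:

```python
import numpy as np

def brusselator(state, a, b):
    # Toy two-species oscillator: the fixed point (a, b/a) loses stability
    # via a Hopf bifurcation at b = 1 + a**2, producing a limit cycle
    x, y = state
    return np.array([a + x * x * y - (b + 1.0) * x, b * x - x * x * y])

def integrate(f, state, dt, n, *args):
    # Classical fourth-order Runge-Kutta integration
    traj = np.empty((n, state.size))
    for i in range(n):
        k1 = f(state, *args)
        k2 = f(state + 0.5 * dt * k1, *args)
        k3 = f(state + 0.5 * dt * k2, *args)
        k4 = f(state + dt * k3, *args)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = state
    return traj

start = np.array([1.0, 1.0])
osc = integrate(brusselator, start, 0.01, 10000, 1.0, 3.0)     # b > 2: limit cycle
damped = integrate(brusselator, start, 0.01, 10000, 1.0, 1.5)  # b < 2: stable focus
osc_amp = float(np.ptp(osc[5000:, 0]))       # late-time oscillation amplitude
damped_amp = float(np.ptp(damped[5000:, 0]))
```

Sweeping the control parameter b across the bifurcation value switches the late-time behavior from a decaying transient to a sustained oscillation, the qualitative transition the abstract describes for glucose and phosphate.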
Oktem, Figen S; Ozaktas, Haldun M
2010-08-01
Linear canonical transforms (LCTs) form a three-parameter family of integral transforms with wide application in optics. We show that LCT domains correspond to scaled fractional Fourier domains and thus to scaled oblique axes in the space-frequency plane. This allows LCT domains to be labeled and ordered by the corresponding fractional order parameter and provides insight into the evolution of light through an optical system modeled by LCTs. If a set of signals is highly confined to finite intervals in two arbitrary LCT domains, the space-frequency (phase space) support is a parallelogram. The number of degrees of freedom of this set of signals is given by the area of this parallelogram, which is equal to the bicanonical width product but usually smaller than the conventional space-bandwidth product. The bicanonical width product, which is a generalization of the space-bandwidth product, can provide a tighter measure of the actual number of degrees of freedom, and allows us to represent and process signals with fewer samples.
Su, Luning; Li, Wei; Wu, Mingxuan; Su, Yun; Guo, Chongling; Ruan, Ningjuan; Yang, Bingxin; Yan, Feng
2017-08-01
Lobster-eye optics is widely applied in space x-ray detection missions and x-ray security checks for its wide field of view and low weight. This paper presents a theoretical model to obtain the spatial distribution of focusing efficiency of lobster-eye optics at soft x-ray wavelengths. The calculations reveal the competition between the contributions to the focusing efficiency of the geometrical parameters of the lobster-eye optics and the reflectivity of the iridium film. In addition, the focusing efficiency image as a function of x-ray wavelength further explains the influence of different geometrical parameters of lobster-eye optics and different soft x-ray wavelengths on focusing efficiency. These results could help optimize the parameters of lobster-eye optics in order to realize maximum focusing efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howe, Alex R.; Burrows, Adam; Deming, Drake, E-mail: arhowe@umich.edu, E-mail: burrows@astro.princeton.edu, E-mail: ddeming@astro.umd.edu
We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.
A review of pharmaceutical extrusion: critical process parameters and scaling-up.
Thiry, J; Krier, F; Evrard, B
2015-02-01
Hot melt extrusion has been a widely used process in the pharmaceutical area for three decades. In this field, it is important to optimize the formulation in order to meet specific requirements. However, the process parameters of the extruder should be investigated as thoroughly as the formulation, since they have a major impact on the final product characteristics. Moreover, a design space should be defined in order to obtain the expected product within the defined limits. This gives some freedom to operate as long as the processing parameters stay within the limits of the design space. Those limits can be investigated by randomly varying the process parameters, but it is recommended to use design of experiments. An examination of the literature is reported in this review to summarize the impact of the variation of the process parameters on the final product properties. Indeed, the homogeneity of the mixing, the state of the drug (crystalline or amorphous), the dissolution rate, and the residence time can be influenced by variations in the process parameters. In particular, the impact of the following process parameters on the final product has been reviewed: temperature, screw design, screw speed and feeding. Copyright © 2014 Elsevier B.V. All rights reserved.
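A design of experiments over process parameters such as those named above can be enumerated as a simple full-factorial grid; the factor names and levels below are hypothetical:

```python
from itertools import product

def full_factorial(levels):
    # levels: dict mapping factor name -> list of settings;
    # returns one run (dict) per combination of factor levels
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]

# Hypothetical two-level screening design for three extrusion process parameters
design = full_factorial({
    "barrel_temp_C": [140, 180],
    "screw_speed_rpm": [100, 300],
    "feed_rate_kg_h": [0.5, 1.5],
})
```

Each run in `design` is one experimental setting; responses measured at these corners of the design space bound the region within which the process parameters may vary.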
Improving Image Drizzling in the HST Archive: Advanced Camera for Surveys
NASA Astrophysics Data System (ADS)
Hoffmann, Samantha L.; Avila, Roberto J.
2017-06-01
The Mikulski Archive for Space Telescopes (MAST) pipeline performs geometric distortion corrections, associated image combinations, and cosmic ray rejections with AstroDrizzle on Hubble Space Telescope (HST) data. The MDRIZTAB reference table contains a list of relevant parameters that controls this program. This document details our photometric analysis of Advanced Camera for Surveys Wide Field Channel (ACS/WFC) data processed by AstroDrizzle. Based on this analysis, we update the MDRIZTAB table to improve the quality of the drizzled products delivered by MAST.
SPICE: A Geometry Information System Supporting Planetary Mapping, Remote Sensing and Data Mining
NASA Technical Reports Server (NTRS)
Acton, C.; Bachman, N.; Semenov, B.; Wright, E.
2013-01-01
SPICE is an information system providing space scientists ready access to a wide assortment of space geometry useful in planning science observations and analyzing the instrument data returned therefrom. The system includes software used to compute many derived parameters, such as altitude, latitude/longitude, and lighting angles, and software able to determine when user-specified geometric conditions are met. While not a formal standard, it has achieved widespread use in the worldwide planetary science community.
Space charge induced surface stresses: implications in ceria and other ionic solids.
Sheldon, Brian W; Shenoy, Vivek B
2011-05-27
Volume changes associated with point defects in space charge layers can produce strains that substantially alter thermodynamic equilibrium near surfaces in ionic solids. For example, near-surface compressive stresses exceeding -10 GPa are predicted for ceria. The magnitude of this effect is consistent with anomalous lattice parameter increases that occur in ceria nanoparticles. These stresses should significantly alter defect concentrations and key transport properties in a wide range of materials (e.g., ceria electrolytes in fuel cells). © 2011 American Physical Society
The evolution of computer monitoring of real time data during the Atlas Centaur launch countdown
NASA Technical Reports Server (NTRS)
Thomas, W. F.
1981-01-01
In the last decade, improvements in computer technology have provided new 'tools' for controlling and monitoring critical missile systems. In this connection, computers have gradually taken on a larger role in monitoring all flight and ground systems on the Atlas Centaur. The wide-body Centaur, which will be launched in the Space Shuttle cargo bay, will use computers to an even greater extent. It is planned to use the wide-body Centaur to boost the Galileo spacecraft toward Jupiter in 1985. The critical systems which must be monitored prior to liftoff are examined. Computers have now been programmed to monitor all critical parameters continuously. At this time, there are two separate computer systems used to monitor these parameters.
Optimal directed searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Ming, Jing; Krishnan, Badri; Papa, Maria Alessandra; Aulbert, Carsten; Fehrmann, Henning
2016-03-01
Wide parameter space searches for long-lived continuous gravitational wave signals are computationally limited. It is therefore critically important that the available computational resources are used rationally. In this paper we consider directed searches, i.e., targets for which the sky position is known accurately but the frequency and spin-down parameters are completely unknown. Given a list of such potential astrophysical targets, we therefore need to prioritize. On which target(s) should we spend scarce computing resources? What parameter space region in frequency and spin-down should we search through? Finally, what is the optimal search setup that we should use? In this paper we present a general framework that allows us to solve all three of these problems. This framework is based on maximizing the probability of making a detection subject to a constraint on the maximum available computational cost. We illustrate the method for a simplified problem.
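The core idea above, maximizing detection probability under a fixed computing budget, can be sketched with a toy saturating cost-to-probability model; the functional form, target names, and numbers are assumptions, not the paper's sensitivity model:

```python
import math

def detection_probability(budget, p_max, cost_scale):
    # Toy saturating model of detection probability vs. computational cost;
    # the real framework derives this from the search sensitivity (assumption)
    return p_max * (1.0 - math.exp(-budget / cost_scale))

def best_target(targets, budget):
    # targets: name -> (p_max, cost_scale); pick the target that maximizes
    # detection probability for the given computational budget
    return max(targets, key=lambda n: detection_probability(budget, *targets[n]))

# Hypothetical directed-search targets: a promising but expensive one and a
# shallower but cheap one
targets = {"pulsar_A": (0.9, 100.0), "pulsar_B": (0.5, 10.0)}
```

Note that the optimal choice depends on the budget: a small allocation favors the cheap target, while a large one favors the intrinsically more detectable target.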
NASA Astrophysics Data System (ADS)
Cecchini, Micael A.; Machado, Luiz A. T.; Wendisch, Manfred; Costa, Anja; Krämer, Martina; Andreae, Meinrat O.; Afchine, Armin; Albrecht, Rachel I.; Artaxo, Paulo; Borrmann, Stephan; Fütterer, Daniel; Klimach, Thomas; Mahnke, Christoph; Martin, Scot T.; Minikin, Andreas; Molleker, Sergej; Pardo, Lianet H.; Pöhlker, Christopher; Pöhlker, Mira L.; Pöschl, Ulrich; Rosenfeld, Daniel; Weinzierl, Bernadett
2017-12-01
The behavior of tropical clouds remains a major open scientific question, resulting in poor representation by models. One challenge is to realistically reproduce cloud droplet size distributions (DSDs) and their evolution over time and space. Many applications, not limited to models, use the gamma function to represent DSDs. However, even though the statistical characteristics of the gamma parameters have been widely studied, there is almost no study dedicated to understanding the phase space of this function and the associated physics. This phase space can be defined by the three parameters that define the DSD intercept, shape, and curvature. Gamma phase space may provide a common framework for parameterizations and intercomparisons. Here, we introduce the phase space approach and its characteristics, focusing on warm-phase microphysical cloud properties and the transition to the mixed-phase layer. We show that trajectories in this phase space can represent DSD evolution and can be related to growth processes. Condensational and collisional growth may be interpreted as pseudo-forces that induce displacements in opposite directions within the phase space. The actually observed movements in the phase space are a result of the combination of such pseudo-forces. Additionally, aerosol effects can be evaluated given their significant impact on DSDs. The DSDs associated with liquid droplets that favor cloud glaciation can be delimited in the phase space, which can help models to adequately predict the transition to the mixed phase. We also consider possible ways to constrain the DSD in two-moment bulk microphysics schemes, in which the relative dispersion parameter of the DSD can play a significant role. Overall, the gamma phase space approach can be an invaluable tool for studying cloud microphysical evolution and can be readily applied in many scenarios that rely on gamma DSDs.
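A common way to place an observed DSD in the gamma phase space is a method-of-moments fit of the three gamma parameters; the sketch below assumes the standard form N(D) = N0 · D^mu · exp(-lam · D) and synthetic, noise-free data:

```python
import math
import numpy as np

def trapezoid_int(y, x):
    # Trapezoidal rule (avoids NumPy version differences around trapz/trapezoid)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def gamma_dsd(D, N0, mu, lam):
    # Gamma drop size distribution N(D) = N0 * D**mu * exp(-lam * D)
    return N0 * D**mu * np.exp(-lam * D)

def fit_gamma_moments(D, N):
    # Method-of-moments inversion using the identities
    #   M1/M0 = (mu + 1)/lam   and   M2/M1 = (mu + 2)/lam
    M0 = trapezoid_int(N, D)
    M1 = trapezoid_int(N * D, D)
    M2 = trapezoid_int(N * D**2, D)
    lam = 1.0 / (M2 / M1 - M1 / M0)
    mu = lam * M1 / M0 - 1.0
    N0 = M0 * lam ** (mu + 1.0) / math.gamma(mu + 1.0)
    return N0, mu, lam

D = np.linspace(1e-3, 40.0, 4000)            # drop diameters (arbitrary units)
N = gamma_dsd(D, N0=100.0, mu=2.0, lam=1.5)  # synthetic observed DSD
N0_hat, mu_hat, lam_hat = fit_gamma_moments(D, N)
```

Tracking (N0, mu, lam) over time for successive DSDs traces exactly the kind of phase-space trajectory the abstract associates with condensational and collisional growth.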
1982-10-01
thermal noise and radioastronomy is probably the application Shirman had in mind for that work. Kuriksha considers a wide class of two-dimensional... this point has been discussed in terms of EM wave propagation, signal detection, and parameter estimation in such fields as radar and radioastronomy
Switching LPV Control for High Performance Tactical Aircraft
NASA Technical Reports Server (NTRS)
Lu, Bei; Wu, Fen; Kim, SungWan
2004-01-01
This paper examines a switching Linear Parameter-Varying (LPV) control approach to determine if it is practical to use for flight control designs within a wide angle of attack region. The approach is based on multiple parameter-dependent Lyapunov functions. The full parameter space is partitioned into overlapping subspaces and a family of LPV controllers are designed, each suitable for a specific parameter subspace. The hysteresis switching logic is used to accomplish the transition among different parameter subspaces. The proposed switching LPV control scheme is applied to an F-16 aircraft model with different actuator dynamics in low and high angle of attack regions. The nonlinear simulation results show that the aircraft performs well when switching among different angle of attack regions.
NASA Astrophysics Data System (ADS)
Reynerson, Charles Martin
This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.
Direct reconstruction of dark energy.
Clarkson, Chris; Zunckel, Caroline
2010-05-28
An important issue in cosmology is reconstructing the effective dark energy equation of state directly from observations. With so few physically motivated models, future dark energy studies cannot be based solely on constraining a dark energy parameter space. We present a new nonparametric method which can accurately reconstruct a wide variety of dark energy behavior with no prior assumptions about it. It is simple, quick and relatively accurate, and involves no expensive explorations of parameter space. The technique uses principal component analysis and a combination of information criteria to identify real features in the data, and tailors the fitting functions to pick up trends and smooth over noise. We find that we can constrain a large variety of w(z) models to within 10%-20% at redshifts z≲1 using just SNAP-quality data.
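The PCA step can be illustrated on mock data: decompose noisy realizations of w(z) with an SVD and keep only the leading components. The toy w(z), noise level, and the fixed number of retained modes below are assumptions; the paper selects that number with information criteria:

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(0.0, 1.0, 40)
w_true = -1.0 + 0.3 * z                                   # toy evolving equation of state
data = w_true + rng.normal(0.0, 0.2, size=(100, z.size))  # 100 noisy mock realizations

# PCA via SVD of the mean-subtracted data matrix
mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
n_keep = 2                       # fixed here; information criteria would choose this
w_rec = mean + (U[:, :n_keep] * S[:n_keep]) @ Vt[:n_keep]

# Discarding the trailing components smooths over noise while keeping real trends
rms_noise = float(np.sqrt(np.mean((data - w_true) ** 2)))
rms_rec = float(np.sqrt(np.mean((w_rec - w_true) ** 2)))
```

The truncated reconstruction tracks the underlying w(z) far more closely than any individual noisy realization does, which is the essence of the feature-identification step.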
Planetary and Space Simulation Facilities PSI at DLR for Astrobiology
NASA Astrophysics Data System (ADS)
Rabbow, E.; Rettberg, P.; Panitz, C.; Reitz, G.
2008-09-01
Ground based experiments, conducted in the controlled planetary and space environment simulation facilities PSI at DLR, are used to investigate astrobiological questions and to complement the corresponding experiments in LEO, for example on free flying satellites or on space exposure platforms on the ISS. In-orbit exposure facilities can only accommodate a limited number of experiments for exposure to space parameters like high vacuum, intense radiation of galactic and solar origin and microgravity, sometimes also technically adapted to simulate extraterrestrial planetary conditions like those on Mars. Ground based experiments in carefully equipped and monitored simulation facilities allow the investigation of the effects of simulated single environmental parameters and selected combinations on a much wider variety of samples. In PSI at DLR, international science consortia performed astrobiological investigations and space experiment preparations, exposing organic compounds and a wide range of microorganisms, from bacterial spores to complex microbial communities, lichens and even animals like tardigrades, to simulated planetary or space environment parameters in pursuit of exobiological questions on the resistance to extreme environments and the origin and distribution of life. The Planetary and Space Simulation Facilities PSI of the Institute of Aerospace Medicine at DLR in Köln, Germany, provide high vacuum of controlled residual composition, ionizing radiation from an X-ray tube, polychromatic UV radiation in the range of 170-400 nm, VIS and IR, or individual monochromatic UV wavelengths, and temperature regulation from -20°C to +80°C at the sample site, individually or in selected combinations, in 9 modular facilities of varying sizes. These facilities are presented here together with selected experiments performed within them.
Perspective Space as a Model for Distance and Size Perception.
Erkelens, Casper J
2017-01-01
In the literature, perspective space has been introduced as a model of visual space. Perspective space is grounded on the perspective nature of visual space during both binocular and monocular vision. A single parameter, that is, the distance of the vanishing point, transforms the geometry of physical space into that of perspective space. The perspective-space model predicts perceived angles, distances, and sizes. The model is compared with other models for distance and size perception. Perspective space predicts that perceived distance and size as a function of physical distance are described by hyperbolic functions. Alternatively, power functions have been widely used to describe perceived distance and size. Comparison of power and hyperbolic functions shows that both functions are equivalent within the range of distances that have been judged in experiments. Two models describing perceived distance on the ground plane appear to be equivalent with the perspective-space model too. The conclusion is that perspective space unifies a number of models of distance and size perception.
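A hyperbolic mapping with a single vanishing-point parameter can be sketched as follows; the exact functional form used here is an illustrative assumption:

```python
def perceived_distance(d, d_v):
    # Hyperbolic compression of physical distance d; d_v is the distance of
    # the vanishing point, the single parameter of the perspective-space model.
    # (The specific form d / (1 + d/d_v) is an illustrative assumption.)
    return d / (1.0 + d / d_v)
```

Near distances are nearly veridical (d ≪ d_v gives perceived ≈ physical), while far distances saturate toward the vanishing-point distance d_v, which is the hyperbolic behavior the model predicts.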
Free-decay time-domain modal identification for large space structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Vanhorn, David A.; Doiron, Harold H.
1992-01-01
Concept definition studies for the Modal Identification Experiment (MIE), a proposed space flight experiment for the Space Station Freedom (SSF), have demonstrated advantages and compatibility of free-decay time-domain modal identification techniques with the on-orbit operational constraints of large space structures. Since practical experience with modal identification using actual free-decay responses of large space structures is very limited, several numerical and test data reduction studies were conducted. Major issues and solutions were addressed, including closely-spaced modes, wide frequency range of interest, data acquisition errors, sampling delay, excitation limitations, nonlinearities, and unknown disturbances during free-decay data acquisition. The data processing strategies developed in these studies were applied to numerical simulations of the MIE, test data from a deployable truss, and launch vehicle flight data. Results of these studies indicate free-decay time-domain modal identification methods can provide accurate modal parameters necessary to characterize the structural dynamics of large space structures.
Gene flow analysis method, the D-statistic, is robust in a wide parameter space.
Zheng, Yichen; Janke, Axel
2018-01-08
We evaluated the sensitivity of the D-statistic, a parsimony-like method widely used to detect gene flow between closely related species. This method has been applied to a variety of taxa with a wide range of divergence times. However, its parameter space, and thus its applicability to a wide taxonomic range, has not been systematically studied. Divergence time, population size, time of gene flow, distance of outgroup and number of loci were examined in a sensitivity analysis. The sensitivity study shows that the primary determinant of the D-statistic is the relative population size, i.e. the population size scaled by the number of generations since divergence. This is consistent with the fact that the main confounding factor in gene flow detection is incomplete lineage sorting, which dilutes the signal. The sensitivity of the D-statistic is also affected by the direction of gene flow, and by the size and number of loci. In addition, we examined the ability of the f-statistics to estimate the fraction of a genome affected by gene flow; while these statistics are difficult to apply to practical questions in biology without knowledge of when the gene flow happened, they can be used to compare datasets with identical or similar demographic background. The D-statistic, as a method to detect gene flow, is robust against a wide range of genetic distances (divergence times) but it is sensitive to population size. The D-statistic should only be applied with critical reservation to taxa where population sizes are large relative to branch lengths in generations.
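The D-statistic itself reduces to counting discordant four-taxon site patterns; a minimal sketch, with the allele coding (ancestral "A", derived "B") as the only assumption:

```python
def d_statistic(site_patterns):
    # site_patterns: 4-taxon patterns (P1, P2, P3, outgroup) coded as strings
    # over ancestral "A" / derived "B" alleles, e.g. "ABBA"
    abba = sum(p == "ABBA" for p in site_patterns)
    baba = sum(p == "BABA" for p in site_patterns)
    if abba + baba == 0:
        return 0.0
    return (abba - baba) / (abba + baba)
```

Under incomplete lineage sorting alone, ABBA and BABA sites are expected in equal numbers (D ≈ 0); a significant excess of one pattern indicates gene flow, which is why ILS dilutes rather than biases the signal.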
Deep Space Network-Wide Portal Development: Planning Service Pilot Project
NASA Technical Reports Server (NTRS)
Doneva, Silviya
2011-01-01
The Deep Space Network (DSN) is an international network of antennas that supports interplanetary spacecraft missions and radio and radar astronomy observations for the exploration of the solar system and the universe. DSN provides the vital two-way communications link that guides and controls planetary explorers, and brings back the images and new scientific information they collect. In an attempt to streamline operations and improve overall services provided by the Deep Space Network a DSN-wide portal is under development. The project is one step in a larger effort to centralize the data collected from current missions including user input parameters for spacecraft to be tracked. This information will be placed into a principal repository where all operations related to the DSN are stored. Furthermore, providing statistical characterization of data volumes will help identify technically feasible tracking opportunities and more precise mission planning by providing upfront scheduling proposals. Business intelligence tools are to be incorporated in the output to deliver data visualization.
Galí, A; García-Montoya, E; Ascaso, M; Pérez-Lozano, P; Ticó, J R; Miñarro, M; Suñé-Negre, J M
2016-09-01
Although tablet coating processes are widely used in the pharmaceutical industry, they often lack adequate robustness. Up-scaling can be challenging, as minor changes in parameters can lead to varying quality results. The aim of this study was to select critical process parameters (CPP) using retrospective data for a commercial product and to establish a design of experiments (DoE) that would improve the robustness of the coating process. We performed a retrospective analysis of data from 36 commercial batches, selected on the basis of the quality results generated during batch release, some of which revealed quality deviations concerning the appearance of the coated tablets. The product is already marketed and belongs to the portfolio of a multinational pharmaceutical company. The Statgraphics 5.1 software was used for data processing to determine critical process parameters and to propose new working ranges. This study confirms that it is possible to determine the critical process parameters and create design spaces based on retrospective data from commercial batches. This type of analysis thus becomes a tool for optimizing the robustness of existing processes. Our results show that a design space can be established with minimum investment in experiments, since current commercial batch data are processed statistically.
SP_Ace: a new code to derive stellar parameters and elemental abundances
NASA Astrophysics Data System (ADS)
Boeche, C.; Grebel, E. K.
2016-03-01
Context. Ongoing and future massive spectroscopic surveys will collect large numbers (106-107) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims: We developed a new method of estimating the stellar parameters Teff, log g, [M/H], and elemental abundances. This method was implemented in a new code, SP_Ace (Stellar Parameters And Chemical abundances Estimator). This is a highly automated code suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). Methods: After the astrophysical calibration of the oscillator strengths of 4643 absorption lines covering the wavelength ranges 5212-6860 Å and 8400-8924 Å, we constructed a library that contains the equivalent widths (EW) of these lines for a grid of stellar parameters. The EWs of each line are fit by a polynomial function that describes the EW of the line as a function of the stellar parameters. The coefficients of these polynomial functions are stored in a library called the "GCOG library". SP_Ace, a code written in FORTRAN95, uses the GCOG library to compute the EWs of the lines, constructs models of spectra as a function of the stellar parameters and abundances, and searches for the model that minimizes the χ2 deviation when compared to the observed spectrum. The code has been tested on synthetic and real spectra for a wide range of signal-to-noise and spectral resolutions. Results: SP_Ace derives stellar parameters such as Teff, log g, [M/H], and chemical abundances of up to ten elements for low to medium resolution spectra of FGK-type stars with precision comparable to the one usually obtained with spectra of higher resolution. Systematic errors in stellar parameters and chemical abundances are presented and identified with tests on synthetic and real spectra. Stochastic errors are automatically estimated by the code for all the parameters. 
A simple Web front end of SP_Ace can be found at http://dc.g-vo.org/SP_ACE while the source code will be published soon. Full Tables D.1-D.3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/587/A2
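The grid-search core of such a code, picking the model spectrum that minimizes the chi-squared deviation from an observation, can be sketched with a toy one-line spectrum library; the wavelengths, line shape, and parameter labels below are hypothetical, not SP_Ace's GCOG library:

```python
import numpy as np

wave = np.linspace(5000.0, 5010.0, 200)   # wavelength grid (Angstrom, assumed)

def toy_spectrum(depth):
    # Hypothetical normalized spectrum with one Gaussian absorption line whose
    # strength stands in for the effect of the stellar parameters
    return 1.0 - depth * np.exp(-0.5 * ((wave - 5005.0) / 0.5) ** 2)

def chi2_search(obs, grid_models, sigma):
    # Pick the grid model minimizing the chi-squared deviation from `obs`
    best, best_chi2 = None, np.inf
    for params, model in grid_models.items():
        chi2 = float(np.sum(((obs - model) / sigma) ** 2))
        if chi2 < best_chi2:
            best, best_chi2 = params, chi2
    return best, best_chi2

# Toy model library keyed by a single (hypothetical) parameter label
grid = {(label,): toy_spectrum(0.1 * label) for label in range(1, 6)}
obs = toy_spectrum(0.3)
best_params, best_chi2 = chi2_search(obs, grid, sigma=0.01)
```

SP_Ace replaces this brute-force grid with polynomial equivalent-width functions and a continuous minimization, but the figure of merit, chi-squared against the observed spectrum, is the same.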
Chasing a Comet with a Solar Sail
NASA Technical Reports Server (NTRS)
Stough, Robert W.; Heaton, Andrew F.; Whorton, Mark S.
2008-01-01
Solar sail propulsion systems enable a wide range of missions that require constant thrust or high delta-V over long mission times. One particularly challenging mission type is a comet rendezvous mission. This paper presents optimal low-thrust trajectory designs for a range of sailcraft performance metrics and mission transit times that enables a comet rendezvous mission. These optimal trajectory results provide a trade space which can be parameterized in terms of mission duration and sailcraft performance parameters such that a design space for a small satellite comet chaser mission is identified. These results show that a feasible space exists for a small satellite to perform a comet chaser mission in a reasonable mission time.
A PARMELA model of the CEBAF injector valid over a wide range of beam parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuhong Zhang; Kevin Beard; Jay Benesch
An earlier PARMELA model of the Jefferson Lab CEBAF photoinjector was recently revised. The initial phase space distribution of an electron bunch was determined by measuring the spot size and pulse length of the driver laser and by beam emittance measurements. The improved model has been used for simulations of the simultaneous delivery of the Hall A beam required for a hypernuclear experiment and the Hall C beam required for the G0 parity violation experiment.
Liu, Huijie; Li, Nianqiang; Zhao, Qingchun
2015-05-10
Optical chaos generated by chaotic lasers has been widely used in several important applications, such as chaos-based communications and high-speed random-number generators. However, these applications are susceptible to degradation by the presence of time-delay (TD) signature identified from the chaotic output. Here we propose to achieve the concealment of TD signature, along with the enhancement of chaos bandwidth, in three-cascaded vertical-cavity surface-emitting lasers (VCSELs). The cascaded system is composed of an external-cavity master VCSEL, a solitary intermediate VCSEL, and a solitary slave VCSEL. Through mapping the evolutions of TD signature and chaos bandwidth in the parameter space of the injection strength and frequency detuning, photonic generation of polarization-resolved wideband chaos with TD concealment is numerically demonstrated for wide regions of the injection parameters.
Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor
Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki
2015-01-01
This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur from a general imaging sensor. After estimating the parameters of the space-variant point-spread function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented as a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in terms of both objective and subjective performance measures. The proposed algorithm can be employed in a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760
Locating and defining underground goaf caused by coal mining from space-borne SAR interferometry
NASA Astrophysics Data System (ADS)
Yang, Zefa; Li, Zhiwei; Zhu, Jianjun; Yi, Huiwei; Feng, Guangcai; Hu, Jun; Wu, Lixin; Preusse, Alex; Wang, Yunjia; Papst, Markus
2018-01-01
It is crucial to locate underground goafs (i.e., mined-out areas) resulting from coal mining and to define their spatial dimensions in order to effectively control the induced damage and geohazards. Traditional geophysical techniques for locating and defining underground goafs, however, are ground-based, labour-intensive and costly. This paper presents a novel space-based method for locating and defining goafs caused by coal extraction using Interferometric Synthetic Aperture Radar (InSAR) techniques. Since a mining-induced goaf is often a cuboid-shaped void, eight geometric parameters (length, width, height, inclined angle, azimuth angle, mining depth, and two central geodetic coordinates) suffice to locate and define this underground space, so the proposed method reduces to determining these eight parameters from InSAR observations. It first applies the Probability Integral Method (PIM), a widely used model for predicting mining-induced deformation, to construct a functional relationship between the eight geometric parameters and the InSAR-derived surface deformation. Next, it estimates these parameters from the InSAR-derived deformation observations using a hybrid simulated annealing and genetic algorithm. Finally, the proposed method was tested with both simulated data and two real data sets. The results demonstrate that the estimated goaf geometries are accurate overall, with average relative errors of approximately 2.1% and 8.1% for the simulated and real data experiments, respectively. Owing to the advantages of InSAR observations, the proposed method provides a non-contact, convenient and economical way to locate and define underground goafs over large areas from space.
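The inversion step described above (fitting a small set of geometric parameters to surface-deformation observations with a global stochastic search) can be sketched in miniature. The code below is only an illustration: a hypothetical one-dimensional Gaussian subsidence trough stands in for the Probability Integral Method, and SciPy's differential evolution stands in for the paper's hybrid simulated-annealing/genetic estimator; all parameter values are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical 1-D stand-in for the PIM forward model: a Gaussian-shaped
# subsidence trough with maximum subsidence w_max [m], centre x0 [m] and
# half-width r [m].  (The real PIM relates eight goaf parameters to 2-D
# deformation; this toy keeps only the inversion structure.)
def forward_model(params, x):
    w_max, x0, r = params
    return -w_max * np.exp(-np.pi * ((x - x0) / r) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-500.0, 500.0, 200)          # profile coordinate [m]
truth = (0.8, 50.0, 150.0)
observed = forward_model(truth, x) + rng.normal(0.0, 0.005, x.size)

def misfit(params):
    return np.sum((forward_model(params, x) - observed) ** 2)

# Global stochastic search over a bounded parameter space, analogous in
# spirit to the hybrid simulated-annealing / genetic-algorithm step.
result = differential_evolution(
    misfit, bounds=[(0.1, 2.0), (-200.0, 200.0), (50.0, 400.0)], seed=1)
w_max_est, x0_est, r_est = result.x
```

The same structure carries over to the real problem: only the forward model (PIM instead of the Gaussian) and the dimensionality of the search change.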
NASA Astrophysics Data System (ADS)
Olurotimi, E. O.; Sokoya, O.; Ojo, J. S.; Owolawi, P. A.
2018-03-01
Rain height is one of the significant parameters for predicting rain attenuation on Earth-space telecommunication links, especially those operating at frequencies above 10 GHz. This study examines the three-parameter Dagum distribution of rain height over Durban, South Africa. Five years of data were used to study the monthly, seasonal, and annual variations, with the distribution parameters estimated by maximum likelihood. The performance of the distribution was assessed using statistical goodness-of-fit measures. The three-parameter Dagum distribution proves appropriate for modeling rain height over Durban, with a root mean square error of 0.26, although its shape and scale parameters show wide variation. The rain height exceeded for 0.01% of the time indicates a high probability of rain attenuation at higher frequencies.
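For readers unfamiliar with the distribution, the three-parameter Dagum CDF is F(x) = (1 + (x/b)^(-a))^(-p) with shape parameters a, p and scale b, and its parameters can be estimated by maximum likelihood as in the study. The sketch below uses invented parameter values, not the Durban data.

```python
import numpy as np
from scipy.optimize import minimize

# Three-parameter Dagum distribution: F(x) = (1 + (x/b)**(-a))**(-p).
def dagum_logpdf(x, a, b, p):
    z = x / b
    return np.log(a * p / x) + a * p * np.log(z) - (p + 1) * np.log(z**a + 1)

def dagum_ppf(u, a, b, p):                   # inverse CDF, for sampling
    return b * (u ** (-1.0 / p) - 1.0) ** (-1.0 / a)

rng = np.random.default_rng(42)
a_true, b_true, p_true = 3.0, 4.0, 0.8       # illustrative values only
samples = dagum_ppf(rng.uniform(size=5000), a_true, b_true, p_true)

# Maximum-likelihood fit over log-parameters (keeps a, b, p positive).
def nll(log_params):
    a, b, p = np.exp(log_params)
    return -np.sum(dagum_logpdf(samples, a, b, p))

start = np.log([2.0, np.median(samples), 1.0])   # crude starting guess
fit = minimize(nll, start, method="Nelder-Mead",
               options={"maxiter": 4000, "xatol": 1e-8, "fatol": 1e-8})
a_hat, b_hat, p_hat = np.exp(fit.x)
```

Goodness of fit can then be checked, as the study does, by comparing the fitted CDF against the empirical exceedance curve.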
Effect of parameters on picosecond laser ablation of Cr12MoV cold work mold steel
NASA Astrophysics Data System (ADS)
Wu, Baoye; Liu, Peng; Zhang, Fei; Duan, Jun; Wang, Xizhao; Zeng, Xiaoyan
2018-01-01
Cr12MoV cold work mold steel, a difficult-to-machine material, is widely used in the mold and die industry. A picosecond pulsed Nd:YVO4 laser at 1064 nm was used in this study. The effects of operating parameters (laser fluence, scanning speed, hatched space and number of scans) on the ablation depth and quality of Cr12MoV were studied at a repetition rate of 20 MHz. The experimental results reveal that all four parameters significantly affect the ablation depth, while the surface roughness depends mainly on laser fluence and scanning speed and only secondarily on hatched space and number of scans. For laser fluence and scanning speed, three distinct surface morphologies were observed, transitioning from flat (Ra < 1.40 μm) to bumpy (Ra = 1.40-2.40 μm) and eventually to rough (Ra > 2.40 μm). For hatched space and number of scans, however, the bumpy and rough zones are small or even absent. Mechanisms involving heat accumulation, plasma shielding and combustion reaction effects are proposed based on the ablation depth and processing morphology. With appropriate management of the laser fluence and scanning speed, high ablation depth with low surface roughness can be obtained at small hatched space and a high number of scans.
Representation of the Auroral and Polar Ionosphere in the International Reference Ionosphere (IRI)
NASA Technical Reports Server (NTRS)
Bilitza, Dieter; Reinisch, Bodo
2013-01-01
This issue of Advances in Space Research presents a selection of papers that document the progress in developing and improving the International Reference Ionosphere (IRI), a widely used standard for the parameters that describe the Earth's ionosphere. The core set of papers was presented during the 2010 General Assembly of the Committee on Space Research in Bremen, Germany, in a session that focused on the representation of the auroral and polar ionosphere in the IRI model. In addition, papers were solicited and submitted from the scientific community in a general call for appropriate papers.
Application of Artificial Neural Networks to the Design of Turbomachinery Airfoils
NASA Technical Reports Server (NTRS)
Rai, Man Mohan; Madavan, Nateri
1997-01-01
Artificial neural networks are widely used in engineering applications such as control, pattern recognition, plant modeling and condition monitoring, to name just a few. In this seminar we will explore the possibility of applying neural networks to aerodynamic design, in particular the design of turbomachinery airfoils. The principal idea behind this effort is to represent the design space using a neural network (within some parameter limits), and then to employ an optimization procedure to search this space for a solution that exhibits optimal performance characteristics. Results obtained for design problems in two spatial dimensions will be presented.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin
Data assimilation techniques have been widely used to improve the predictability of hydrologic modeling. Among these, sequential Monte Carlo (SMC) filters, known as "particle filters", can handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both the state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for robust estimation of uncertain model parameters in the DUS. The applicability of the scheme is illustrated by implementing the storage function model on a middle-sized Japanese catchment. We also compare the performance of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
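The dual state-parameter idea can be sketched on a toy problem: each particle carries both a state and a parameter value, and resampling concentrates the parameter particles on values that explain the observations. This is a generic stand-in, not the storage function model, and the small parameter jitter below is only a crude substitute for the paper's kernel-smoothing step; all numerical values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy AR(1) "storage" dynamics with unknown coefficient theta:
#   x_t = theta * x_{t-1} + inflow + process noise,  y_t = x_t + obs noise
theta_true, q, r, T = 0.8, 0.1, 0.2, 200
x, ys = 0.0, []
for _ in range(T):
    x = theta_true * x + 1.0 + rng.normal(0.0, q)
    ys.append(x + rng.normal(0.0, r))

# Dual state-parameter SIR particle filter: each particle is (x, theta).
N = 2000
xp = rng.normal(0.0, 1.0, N)
thetap = rng.uniform(0.0, 1.0, N)
for y in ys:
    # Small jitter on the parameter particles: a crude stand-in for the
    # kernel-smoothing step of the DUS.
    thetap = thetap + rng.normal(0.0, 0.005, N)
    xp = thetap * xp + 1.0 + rng.normal(0.0, q, N)    # propagate states
    w = np.exp(-0.5 * ((y - xp) / r) ** 2) + 1e-300   # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                  # SIR resampling
    xp, thetap = xp[idx], thetap[idx]

theta_est = thetap.mean()
```

After assimilating the observation series, the parameter particles concentrate near the true coefficient while the state particles track the latest observation.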
Design and performance of the KSC Biomass Production Chamber
NASA Technical Reports Server (NTRS)
Prince, Ralph P.; Knott, William M.; Sager, John C.; Hilding, Suzanne E.
1987-01-01
NASA's Controlled Ecological Life Support System program has instituted the Kennedy Space Center 'breadboard' project of which the Biomass Production Chamber (BPC) presently discussed is a part. The BPC is based on a modified hypobaric test vessel; its design parameters and operational parameters have been chosen in order to meet a wide range of plant-growing objectives aboard future spacecraft on long-duration missions. A control and data acquisition subsystem is used to maintain a common link between the heating, ventilation, and air conditioning system, the illumination system, the gas-circulation system, and the nutrient delivery and monitoring subsystems.
Calvin A. Farris; Christopher H. Baisan; Donald A. Falk; Stephen R. Yool; Thomas W. Swetnam
2010-01-01
Fire scars are used widely to reconstruct historical fire regime parameters in forests around the world. Because fire scars provide incomplete records of past fire occurrence at discrete points in space, inferences must be made to reconstruct fire frequency and extent across landscapes using spatial networks of fire-scar samples. Assessing the relative accuracy of fire...
Cryogenic Evaluation of an Advanced DC/DC Converter Module for Deep Space Applications
NASA Technical Reports Server (NTRS)
Elbuluk, Malik E.; Hammoud, Ahmad; Gerber, Scott S.; Patterson, Richard
2003-01-01
DC/DC converters are widely used in power management, conditioning, and control of space power systems. Deep space applications require electronics that withstand cryogenic temperature and meet a stringent radiation tolerance. In this work, the performance of an advanced, radiation-hardened (rad-hard) commercial DC/DC converter module was investigated at cryogenic temperatures. The converter was investigated in terms of its steady state and dynamic operations. The output voltage regulation, efficiency, terminal current ripple characteristics, and output voltage response to load changes were determined in the temperature range of 20 to -140 C. These parameters were obtained at various load levels and at different input voltages. The experimental procedures along with the results obtained on the investigated converter are presented and discussed.
Development and evaluation of a predictive algorithm for telerobotic task complexity
NASA Technical Reports Server (NTRS)
Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.
1993-01-01
There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.
Dynamics of large-scale brain activity in normal arousal states and epileptic seizures
NASA Astrophysics Data System (ADS)
Robinson, P. A.; Rennie, C. J.; Rowe, D. L.
2002-04-01
Links between electroencephalograms (EEGs) and underlying aspects of neurophysiology and anatomy are poorly understood. Here a nonlinear continuum model of large-scale brain electrical activity is used to analyze arousal states and their stability and nonlinear dynamics for physiologically realistic parameters. A simple ordered arousal sequence in a reduced parameter space is inferred and found to be consistent with experimentally determined parameters of waking states. Instabilities arise at spectral peaks of the major clinically observed EEG rhythms (mainly slow wave, delta, theta, alpha, and sleep spindle), with each instability zone lying near its most common experimental precursor arousal states in the reduced space. Theta, alpha, and spindle instabilities evolve toward low-dimensional nonlinear limit cycles that correspond closely to EEGs of petit mal seizures for theta instability, and grand mal seizures for the other types. Nonlinear stimulus-induced entrainment and seizures are also seen, EEG spectra and potentials evoked by stimuli are reproduced, and numerous other points of experimental agreement are found. Inverse modeling enables physiological parameters underlying observed EEGs to be determined by a new, noninvasive route. This model thus provides a single, powerful framework for quantitative understanding of a wide variety of brain phenomena.
Learning dependence from samples.
Seth, Sohan; Príncipe, José C
2014-01-01
Mutual information, conditional mutual information and interaction information have been widely used in the scientific literature as measures of dependence, conditional dependence and mutual dependence. However, these concepts suffer from several computational issues: they are difficult to estimate in continuous domains, the existing regularised estimators are almost always defined only for real- or vector-valued random variables, and these measures address what dependence, conditional dependence and mutual dependence imply in terms of the random variables, not in terms of finite realisations. In this paper, we address the question of what characteristic makes a given set of realisations in an arbitrary metric space dependent, conditionally dependent or mutually dependent. With this understanding, we develop new estimators of association, conditional association and interaction association. Attractive properties of these estimators are that they do not require choosing free parameters, they are computationally simple, and they can be applied to arbitrary metric spaces.
Amplification of a high-frequency electromagnetic wave by a relativistic plasma
NASA Technical Reports Server (NTRS)
Yoon, Peter H.
1990-01-01
The amplification of a high-frequency transverse electromagnetic wave by a relativistic plasma component, via the synchrotron maser process, is studied. The background plasma that supports the transverse wave is considered to be cold, and the energetic component whose density is much smaller than that of the background component has a loss-cone feature in the perpendicular momentum space and a finite field-aligned drift speed. The ratio of the background plasma frequency squared to the electron gyrofrequency squared is taken to be sufficiently larger than unity. Such a parameter regime is relevant to many space and astrophysical situations. A detailed study of the amplification process is carried out over a wide range of physical parameters including the loss-cone index, the ratio of the electron mass energy to the temperature of the energetic component, the field-aligned drift speed, the normalized density, and the wave propagation angle.
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. Large-scale, high-resolution hydrological simulation has refined spatial descriptions of hydrological behavior. At the same time, this trend brings greater model complexity and more parameters, posing new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which couples Monte Carlo sampling with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
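The GLUE workflow itself (sample the prior, run the model, weight by an informal likelihood, keep "behavioral" sets) is compact enough to sketch. The toy below uses a one-parameter recession model and plain uniform sampling; everything, including the likelihood choice and threshold, is an invented illustration rather than the study's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a hydrological simulator: a linear-reservoir recession
# y(t) = exp(-k * t) with a single parameter k (values invented).
t = np.linspace(0.0, 5.0, 50)
k_true = 0.6
obs = np.exp(-k_true * t) + rng.normal(0.0, 0.02, t.size)

# GLUE: Monte Carlo sampling of the prior, a likelihood measure,
# a behavioral threshold, and likelihood-weighted inference.
k_samples = rng.uniform(0.1, 2.0, 5000)        # stochastic prior sampling
sims = np.exp(-np.outer(k_samples, t))         # one simulation per sample
sse = np.sum((sims - obs) ** 2, axis=1)
like = np.exp(-sse / (2 * 0.02**2))            # one possible likelihood

behavioral = like > 1e-6 * like.max()          # drop non-behavioral sets
w = like[behavioral] / like[behavioral].sum()
k_mean = np.sum(w * k_samples[behavioral])     # weighted posterior mean
```

The adaptive variant proposed in the abstract would replace the uniform draw of `k_samples` with populations iteratively refined by a GA, differential evolution, or SCE, concentrating samples in the high-likelihood region; the weighting and threshold steps are unchanged.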
Electric dipole moments in natural supersymmetry
NASA Astrophysics Data System (ADS)
Nakai, Yuichiro; Reece, Matthew
2017-08-01
We discuss electric dipole moments (EDMs) in the framework of CP-violating natural supersymmetry (SUSY). Recent experimental results have significantly tightened constraints on the EDMs of electrons and of mercury, and substantial further progress is expected in the near future. We assess how these results constrain the parameter space of natural SUSY. In addition to our discussion of SUSY, we provide a set of general formulas for two-loop fermion EDMs, which can be applied to a wide range of models of new physics. In the SUSY context, the two-loop effects of stops and charginos respectively constrain the phases of A_t μ and M_2 μ to be small in the natural part of parameter space. If the Higgs mass is lifted to 125 GeV by a new tree-level superpotential interaction and soft term with CP-violating phases, significant EDMs can arise from the two-loop effects of W bosons and tops. We compare the bounds arising from EDMs to those from other probes of new physics, including colliders, b → sγ, and dark matter searches. Importantly, improvements in reach not only constrain higher masses, but require the phases to be significantly smaller in the natural parameter space at low mass. The required smallness of phases sharpens the CP problem of natural SUSY model building.
Focusing Intense Charged Particle Beams with Achromatic Effects for Heavy Ion Fusion
NASA Astrophysics Data System (ADS)
Mitrani, James; Kaganovich, Igor
2012-10-01
Final focusing systems designed to minimize the effects of chromatic aberrations in the Neutralized Drift Compression Experiment (NDCX-II) are described. NDCX-II is a linear induction accelerator designed to accelerate short bunches at high current. Previous experiments showed that neutralized drift compression significantly compresses the beam longitudinally (~60x) in the z-direction, resulting in a narrow distribution in z-space but a wide distribution in p_z-space. Using simple lenses (e.g., solenoids, quadrupoles) to focus beam bunches with wide distributions in p_z-space results in chromatic aberrations, leading to lower beam intensities (J/cm^2). Therefore, the final focusing system must be designed to compensate for chromatic aberrations. The paraxial ray equations and beam envelope equations are numerically solved for parameters appropriate to NDCX-II. Based on these results, conceptual designs for final focusing systems using a combination of solenoids and/or quadrupoles are optimized to compensate for chromatic aberrations. Lens aberrations and emittance growth will be investigated, and analytical results will be compared with results from numerical particle-in-cell (PIC) simulation codes.
Motion robust remote photoplethysmography in CIELab color space
NASA Astrophysics Data System (ADS)
Yang, Yuting; Liu, Chenbin; Yu, Hui; Shao, Dangdang; Tsow, Francis; Tao, Nongjian
2016-11-01
Remote photoplethysmography (rPPG) is attractive for tracking a subject's physiological parameters without wearing a device. However, rPPG is known to be prone to body movement-induced artifacts, making it unreliable in realistic situations. Here we report a method to minimize the movement-induced artifacts. The method selects an optimal region of interest (ROI) automatically, prunes frames in which the ROI is not clearly captured (e.g., subject moves out of the view), and analyzes rPPG using an algorithm in CIELab color space, rather than the widely used RGB color space. We show that body movement primarily affects image intensity, rather than chromaticity, and separating chromaticity from intensity in CIELab color space thus helps achieve effective reduction of the movement-induced artifacts. We validate the method by performing a pilot study including 17 people with diverse skin tones.
A bifurcation study to guide the design of a landing gear with a combined uplock/downlock mechanism.
Knowles, James A C; Lowenberg, Mark H; Neild, Simon A; Krauskopf, Bernd
2014-12-08
This paper discusses the insights that a bifurcation analysis can provide when designing mechanisms. A model, in the form of a set of coupled steady-state equations, can be derived to describe the mechanism. Solutions to this model can be traced through the mechanism's state versus parameter space via numerical continuation, under the simultaneous variation of one or more parameters. With this approach, crucial features in the response surface, such as bifurcation points, can be identified. By numerically continuing these points in the appropriate parameter space, the resulting bifurcation diagram can be used to guide parameter selection and optimization. In this paper, we demonstrate the potential of this technique by considering an aircraft nose landing gear, with a novel locking strategy that uses a combined uplock/downlock mechanism. The landing gear is locked when in the retracted or deployed states. Transitions between these locked states and the unlocked state (where the landing gear is a mechanism) are shown to depend upon the positions of two fold point bifurcations. By performing a two-parameter continuation, the critical points are traced to identify operational boundaries. Following the variation of the fold points through parameter space, a minimum spring stiffness is identified that enables the landing gear to be locked in the retracted state. The bifurcation analysis also shows that the unlocking of a retracted landing gear should use an unlock force measure, rather than a position indicator, to de-couple the effects of the retraction and locking actuators. Overall, the study demonstrates that bifurcation analysis can enhance the understanding of the influence of design choices over a wide operating range where nonlinearity is significant.
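The core objects of such an analysis, fold points traced along a solution branch, can be illustrated on a textbook example. The sketch below is not the landing-gear model: it traces the equilibria of the scalar system x' = p + x - x^3, whose branch p = x^3 - x has two fold (saddle-node) bifurcations, located here by a crude one-parameter continuation.

```python
import numpy as np

# Equilibria of the toy system x' = p + x - x**3 satisfy p = x**3 - x.
# Trace the branch in x and detect folds where dp/dx changes sign; these
# are the saddle-node points a continuation code would then continue in
# a second parameter, as done for the landing gear.
x = np.linspace(-2.0, 2.0, 4001)
p = x**3 - x
dpdx = 3.0 * x**2 - 1.0

folds = []
for i in range(len(x) - 1):
    if dpdx[i] * dpdx[i + 1] < 0:                 # sign change brackets a fold
        t = dpdx[i] / (dpdx[i] - dpdx[i + 1])     # linear interpolation
        folds.append(x[i] + t * (x[i + 1] - x[i]))

fold_params = [xf**3 - xf for xf in folds]        # parameter value at each fold
```

The exact fold locations are x = ±1/√3 with p = ∓2/(3√3); dedicated continuation packages compute and track such points directly rather than by branch sampling.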
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arreola, Rodrigo; Vega-Miranda, Anita; Gómez-Puyou, Armando
The gene-regulation factor PyrR from B. halodurans has been crystallized in two crystal forms. Preliminary crystallographic analysis showed that the protein forms tetramers in both space groups. The PyrR transcriptional regulator is widely distributed in bacteria. This RNA-binding protein is involved in the control of genes of pyrimidine biosynthesis, in which uridyl and guanyl nucleotides function as effectors. Here, the crystallization and preliminary X-ray diffraction analysis of two crystal forms of Bacillus halodurans PyrR are reported. One form belongs to the monoclinic space group P2_1 with unit-cell parameters a = 59.7, b = 87.4, c = 72.1 Å, β = 104.4°, while the other belongs to the orthorhombic space group P22_12_1 with unit-cell parameters a = 72.7, b = 95.9, c = 177.1 Å. Preliminary X-ray diffraction data analysis and a molecular-replacement solution revealed four and six monomers per asymmetric unit, respectively; a crystallographic tetramer is formed in both forms.
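As a small arithmetic companion to the reported cells, the unit-cell volumes follow directly from the lattice parameters (V = abc·sin β for a monoclinic cell, V = abc for an orthorhombic one). The calculation below just reproduces that arithmetic for the two forms.

```python
import math

# Monoclinic P2_1 cell: V = a * b * c * sin(beta)
a1, b1, c1, beta1 = 59.7, 87.4, 72.1, math.radians(104.4)   # angstroms, degrees
v_mono = a1 * b1 * c1 * math.sin(beta1)                      # cubic angstroms

# Orthorhombic P22_12_1 cell: V = a * b * c
a2, b2, c2 = 72.7, 95.9, 177.1                               # angstroms
v_ortho = a2 * b2 * c2                                       # cubic angstroms
```

The roughly 3.4x larger orthorhombic cell is consistent with its asymmetric unit holding six monomers rather than four.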
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1984-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMTs) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence, which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model is presented for the number of these spurious photons that strike the photocathode of the multiplier phototube and in turn produce the unwanted photon noise. The model is applicable to both galactic cosmic rays and charged particles trapped in the Earth's radiation belts. The model, which was programmed, allows easy adaptation to a wide range of particles and different multiplier phototube parameters. The probability density functions for photon noise caused by protons, alpha particles, and carbon nuclei were generated using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1986-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMTs) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence, which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model is presented for the number of these spurious photons that strike the photocathodes of the multiplier phototube and in turn produce the unwanted photon noise. The model is applicable to both galactic cosmic rays and charged particles trapped in the Earth's radiation belts. The model, which was programmed, allows easy adaptation to a wide range of particles and different multiplier phototube parameters. The probability density functions for photon noise caused by protons, alpha particles, and carbon nuclei were generated using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
NASA Technical Reports Server (NTRS)
Phillips, K.
1976-01-01
A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.
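The linear-programming side of such a scheduler can be illustrated with a tiny assignment problem. This is a generic sketch, not the Deep Space Communications Complex formulation: two jobs are assigned to two time slots at minimum total cost, and because the LP relaxation of an assignment problem is integral, the LP solution is directly a valid schedule.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny job-to-slot assignment LP (illustrative costs only).
cost = np.array([[4.0, 2.0],
                 [3.0, 7.0]])        # cost[j, s]: job j scheduled in slot s
c = cost.ravel()                     # variables x_js in row-major order

A_eq = np.zeros((4, 4))
A_eq[0, [0, 1]] = 1                  # job 0 takes exactly one slot
A_eq[1, [2, 3]] = 1                  # job 1 takes exactly one slot
A_eq[2, [0, 2]] = 1                  # slot 0 holds exactly one job
A_eq[3, [1, 3]] = 1                  # slot 1 holds exactly one job
res = linprog(c, A_eq=A_eq, b_eq=np.ones(4), bounds=[(0, 1)] * 4)
schedule = res.x.reshape(2, 2).round().astype(int)
```

Here the optimum puts job 0 in slot 1 and job 1 in slot 0 for a total cost of 5; a realistic facility schedule would add many more variables and the combinatorial partitioning the abstract mentions.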
NASA Technical Reports Server (NTRS)
Olinto, Angela V.; Haensel, Pawel; Frieman, Joshua A.
1991-01-01
The effects of H-dibaryons on the structure of neutron stars are studied. It was found that H particles could be present in neutron stars for a wide range of dibaryon masses. The appearance of dibaryons softens the equation of state, lowers the maximum neutron star mass, and affects the transport properties of dense matter. The parameter space for dibaryons is constrained by requiring that a 1.44 solar mass neutron star be gravitationally stable.
Parameter Optimization for Turbulent Reacting Flows Using Adjoints
NASA Astrophysics Data System (ADS)
Lapointe, Caelan; Hamlington, Peter E.
2017-11-01
The formulation of a new adjoint solver for topology optimization of turbulent reacting flows is presented. This solver provides novel configurations (e.g., geometries and operating conditions) based on desired system outcomes (i.e., objective functions) for complex reacting flow problems of practical interest. For many such problems, it would be desirable to know optimal values of design parameters (e.g., physical dimensions, fuel-oxidizer ratios, and inflow-outflow conditions) prior to real-world manufacture and testing, which can be expensive, time-consuming, and dangerous. However, computational optimization of these problems is made difficult by the complexity of most reacting flows, necessitating the use of gradient-based optimization techniques in order to explore a wide design space at manageable computational cost. The adjoint method is an attractive way to obtain the required gradients, because the cost of the method is determined by the dimension of the objective function rather than the size of the design space. Here, the formulation of a novel solver is outlined that enables gradient-based parameter optimization of turbulent reacting flows using the discrete adjoint method. Initial results and an outlook for future research directions are provided.
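The key property the abstract relies on, that the adjoint method prices a gradient at a cost set by the number of objectives rather than the number of design parameters, can be demonstrated on a toy constrained problem. The sketch below is generic (a small linear "state" system, not a reacting-flow solver): one adjoint solve yields dJ/dp, verified against finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design problem: the state u solves the linear "governing equations"
# A u = p * f with scalar design parameter p, and the objective is
# J(p) = 0.5 * ||u - u_target||**2.
A = rng.normal(size=(5, 5)) + 5.0 * np.eye(5)    # well-conditioned system
f = rng.normal(size=5)
u_target = rng.normal(size=5)

def J(p):
    u = np.linalg.solve(A, p * f)
    return 0.5 * np.sum((u - u_target) ** 2)

# Discrete adjoint: one extra linear solve gives the exact gradient, at a
# cost independent of how many design parameters enter the right-hand side.
p0 = 1.3
u = np.linalg.solve(A, p0 * f)
lam = np.linalg.solve(A.T, u - u_target)          # adjoint solve
grad_adjoint = lam @ f                            # dJ/dp = lam . d(p f)/dp

# Finite-difference check of the adjoint gradient
eps = 1e-6
grad_fd = (J(p0 + eps) - J(p0 - eps)) / (2 * eps)
```

With many design parameters, the finite-difference route needs one solve per parameter while the adjoint route still needs only the one extra solve, which is why the method scales to topology optimization.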
Complementarity of dark matter searches in the phenomenological MSSM
Cahill-Rowley, Matthew; Cotta, Randy; Drlica-Wagner, Alex; ...
2015-03-11
As is well known, the search for and eventual identification of dark matter in supersymmetry requires a simultaneous, multipronged approach with important roles played by the LHC as well as both direct and indirect dark matter detection experiments. We examine the capabilities of these approaches in the 19-parameter phenomenological MSSM, which provides a general framework for complementarity studies of neutralino dark matter. We summarize the sensitivity of dark matter searches at the 7 and 8 (and eventually 14) TeV LHC, combined with those by Fermi, CTA, IceCube/DeepCore, COUPP, LZ and XENON. The strengths and weaknesses of each of these techniques are examined and contrasted, and their interdependent roles in covering the model parameter space are discussed in detail. We find that these approaches explore orthogonal territory and that advances in each are necessary to cover the supersymmetric weakly interacting massive particle parameter space. We also find that different experiments have widely varying sensitivities to the various dark matter annihilation mechanisms, some of which would be completely excluded by null results from these experiments.
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
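The Monte-Carlo SURE idea above treats the reconstruction algorithm as a black box and estimates the divergence term with a random probe vector. Below is a minimal real-valued Python sketch of that trick; the function name, the ±1 probe, and the plain-MSE form are illustrative assumptions (the paper works with complex-valued images and a weighted k-space error measure):

```python
import numpy as np

def mc_sure(f, y, sigma, eps=1e-4, rng=None):
    """Monte-Carlo SURE estimate of the per-sample MSE risk of a denoiser.

    f     -- black-box reconstruction/denoising algorithm (callable)
    y     -- noisy data, assumed y = x + noise with variance sigma**2
    The divergence of f is estimated from one random +/-1 probe b:
        div f(y) ~ b . (f(y + eps*b) - f(y)) / eps
    so no knowledge of f's internal workings is required.
    """
    rng = np.random.default_rng(rng)
    n = y.size
    fy = f(y)
    b = rng.choice([-1.0, 1.0], size=y.shape)          # random probe vector
    div = b.ravel() @ (f(y + eps * b) - fy).ravel() / eps
    return np.sum((fy - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n
```

Sweeping a regularization parameter and keeping the value that minimizes `mc_sure` gives the kind of data-driven parameter selection described above, using only the output of the reconstruction algorithm.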
Mapping the Chevallier-Polarski-Linder parametrization onto physical dark energy Models
NASA Astrophysics Data System (ADS)
Scherrer, Robert J.
2015-08-01
We examine the Chevallier-Polarski-Linder (CPL) parametrization, in the context of quintessence and barotropic dark energy models, to determine the subset of such models to which it can provide a good fit. The CPL parametrization gives the equation of state parameter w for the dark energy as a linear function of the scale factor a, namely w = w0 + wa(1 - a). In the case of quintessence models, we find that over most of the (w0, wa) parameter space the CPL parametrization maps onto a fairly narrow form of behavior for the potential V(ϕ), while a one-dimensional subset of parameter space, for which wa = κ(1 + w0) with κ constant, corresponds to a wide range of functional forms for V(ϕ). For barotropic models, we show that the functional dependence of the pressure on the density, up to a multiplicative constant, depends only on wi = wa + w0 and not on w0 and wa separately. Our results suggest that the CPL parametrization may not be optimal for testing either type of model.
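The linear equation of state above is simple enough to state directly in code; a small sketch (function names are illustrative) of the parametrization and the degenerate one-dimensional subset:

```python
def w_cpl(a, w0, wa):
    """CPL dark energy equation-of-state parameter: w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

def on_quintessence_subset(w0, wa, kappa, tol=1e-12):
    """True if (w0, wa) lies on the one-dimensional subset wa = kappa * (1 + w0)
    that maps onto a wide range of potentials V(phi)."""
    return abs(wa - kappa * (1.0 + w0)) < tol
```

At a = 1 (today) w reduces to w0, and as a → 0 it approaches w0 + wa, which is why (w0, wa) parametrizes both the present value and the early-time limit of the equation of state.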
How Complex, Probable, and Predictable is Genetically Driven Red Queen Chaos?
Duarte, Jorge; Rodrigues, Carla; Januário, Cristina; Martins, Nuno; Sardanyés, Josep
2015-12-01
Coevolution between two antagonistic species has been widely studied theoretically for both ecologically and genetically driven Red Queen dynamics. A typical outcome of these systems is an oscillatory behavior causing an endless series of one species' adaptation and the other's counter-adaptation. More recently, a mathematical model combining a three-species food chain system with an adaptive dynamics approach revealed genetically driven chaotic Red Queen coevolution. In the present article, we analyze this mathematical model, focusing mainly on the impact of the species' rates of evolution (mutation rates) on the dynamics. First, we analytically prove the boundedness of the trajectories of the chaotic attractor. The complexity of the coupling between the dynamical variables is quantified using observability indices. Using symbolic dynamics theory, we quantify the complexity of genetically driven Red Queen chaos by computing the topological entropy of existing one-dimensional iterated maps using Markov partitions. Codimension-two bifurcation diagrams are also built from the period ordering of the orbits of the maps. We then study the predictability of the Red Queen chaos, found in narrow regions of mutation rates. To extend the previous analyses, we also computed the likelihood of finding chaos in a given region of the parameter space while varying other model parameters simultaneously. These analyses allowed us to compute a mean predictability measure for the system in the explored region of the parameter space. We found that genetically driven Red Queen chaos, although restricted to small regions of the analyzed parameter space, might be highly unpredictable.
Six-vertex model and Schramm-Loewner evolution.
Kenyon, Richard; Miller, Jason; Sheffield, Scott; Wilson, David B
2017-05-01
Square ice is a statistical mechanics model for two-dimensional ice, widely believed to have a conformally invariant scaling limit. We associate a Peano (space-filling) curve to a square ice configuration, and more generally to a so-called six-vertex model configuration, and argue that its scaling limit is a space-filling version of the random fractal curve SLE_κ, Schramm-Loewner evolution with parameter κ, where 4 < κ ≤ 12 + 8√2. For square ice, κ = 12. At the "free-fermion point" of the six-vertex model, κ = 8 + 4√3. These unusual values lie outside the classical interval 2 ≤ κ ≤ 8.
Tunable resonances due to vacancies in graphene nanoribbons
NASA Astrophysics Data System (ADS)
Bahamon, D. A.; Pereira, A. L. C.; Schulz, P. A.
2010-10-01
The coherent electron transport along zigzag and metallic armchair graphene nanoribbons in the presence of one or two vacancies is investigated. With atomic-scale tunability of the conductance fingerprints in mind, the primary focus is on the effect of the distance to the edges and the intervacancy spacing. An involved interplay of vacancy sublattice location and nanoribbon edge termination, together with the spacing parameters, leads to a wide modification of the conductance resonance line shape. Turning on a magnetic field introduces a new length scale that unveils counterintuitive aspects of the interplay between purely geometric aspects of the system and the underlying atomic-scale nature of graphene.
Theory and simulation of backbombardment in single-cell thermionic-cathode electron guns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Biedron, S. G.; Harris, J. R.; ...
2015-04-01
This paper presents a comparison between simulation results and a first principles analytical model of electron back-bombardment developed at Colorado State University for single-cell, thermionic-cathode rf guns. While most previous work on back-bombardment has been specific to particular accelerator systems, this work is generalized to a wide variety of guns within the applicable parameter space. The merits and limits of the analytic model will be discussed. This paper identifies the three fundamental parameters that drive the back-bombardment process, and demonstrates relative accuracy in calculating the predicted back-bombardment power of a single-cell thermionic gun.
Gravitational collapse in Husain space-time for Brans-Dicke gravity theory with power-law potential
NASA Astrophysics Data System (ADS)
Rudra, Prabir; Biswas, Ritabrata; Debnath, Ujjal
2014-12-01
The motive of this work is to study gravitational collapse in Husain space-time in Brans-Dicke gravity theory. Among the many scalar-tensor theories of gravity, Brans-Dicke is the simplest, and its impact can be regulated by two associated parameters: the Brans-Dicke parameter ω and the potential-scalar field dependency parameter n. V. Husain's 1996 work on an exact solution for null fluid collapse has influenced many authors to follow his approach in finding the end state of homogeneous/inhomogeneous dust clouds. Vaidya's metric is used throughout to follow the nature of future-directed outgoing radial null geodesics. The basic objective is to detect whether the central singularity is naked or wrapped by an event horizon, via the existence of future-directed radial null geodesics emitted in the past from the singularity. To establish the existence of positive trajectory-tangent solutions, both particular parametric cases (through tabular forms) and a wide-range contouring process have been applied. The perfect fluid's EoS covers a wide range of phenomena, from dust to exotic fluids such as dark energy. We have used the EoS parameter k to determine the end state of collapse in different cosmological eras. Our main target is to check the low-ω (more deviation from Einstein gravity, i.e., stronger Brans-Dicke effect) and negative-k zones. This particularly throws light on the nature of the end state of collapse during accelerated expansion in Brans-Dicke gravity. It is seen that for positive values of the EoS parameter k the collapse results in a black hole, whereas for negative values of k a naked singularity is the only outcome. It is also to be noted that low ω leads to the possibility of more naked singularities even for a non-accelerating universe.
Gravitational Collapse in Husain space-time for Brans-Dicke Gravity Theory with Power-law Potential.
NASA Astrophysics Data System (ADS)
Rudra, Prabir
2016-07-01
The motive of this work is to study gravitational collapse in Husain space-time in Brans-Dicke gravity theory. Among the many scalar-tensor theories of gravity, Brans-Dicke is the simplest, and its impact can be regulated by two associated parameters: the Brans-Dicke parameter ω and the potential-scalar field dependency parameter n. V. Husain's 1996 work on an exact solution for null fluid collapse has influenced many authors to follow his approach in finding the end state of homogeneous/inhomogeneous dust clouds. Vaidya's metric is used throughout to follow the nature of future-directed outgoing radial null geodesics. The basic objective is to detect whether the central singularity is naked or wrapped by an event horizon, via the existence of future-directed radial null geodesics emitted in the past from the singularity. To establish the existence of positive trajectory-tangent solutions, both particular parametric cases (through tabular forms) and a wide-range contouring process have been applied. The perfect fluid's equation of state covers a wide range of phenomena, from dust to exotic fluids such as dark energy. We have used the equation of state parameter k to determine the end state of collapse in different cosmological eras. Our main target is to check the low-ω (more deviation from Einstein gravity, i.e., stronger Brans-Dicke effect) and negative-k zones. This particularly throws light on the nature of the end state of collapse during accelerated expansion in Brans-Dicke gravity. It is seen that for positive values of the EoS parameter k the collapse results in a black hole, whereas for negative values of k a naked singularity is the only outcome. It is also to be noted that low ω leads to the possibility of more naked singularities even for a non-accelerating universe.
NASA Technical Reports Server (NTRS)
Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.
2016-01-01
The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. 
The FEM study found that intraocular pressure and intracranial pressure had dominant impact on the peak strains in the ONH and retro-laminar optic nerve, respectively; optic nerve and lamina cribrosa stiffness were also important. This investigation illustrates the ability of LHSPRCC to identify the most influential physiological parameters, which must therefore be well-characterized to produce the most accurate numerical results.
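The LHSPRCC strategy above is generic and can be sketched independently of the two physiological models. The following illustrative Python version (not the authors' code; names are assumptions) stratifies samples per dimension, then computes the partial rank correlation of each input with the output by residualizing the rank-transformed data on the other inputs:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Stratified sample: each dimension gets exactly one point per stratum."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    strata = np.tile(np.arange(n_samples), (d, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n_samples, d))) / n_samples
    lo, hi = np.asarray(bounds, dtype=float).T
    return lo + u * (hi - lo)

def _ranks(a):
    # ordinal ranks per column (double argsort)
    return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    R, r_y = _ranks(X), _ranks(y.reshape(-1, 1)).ravel()
    coeffs = []
    for j in range(R.shape[1]):
        # regress out the (rank-transformed) other inputs, then correlate
        Z = np.column_stack([np.delete(R, j, axis=1), np.ones_like(r_y)])
        res_x = R[:, j] - Z @ np.linalg.lstsq(Z, R[:, j], rcond=None)[0]
        res_y = r_y - Z @ np.linalg.lstsq(Z, r_y, rcond=None)[0]
        coeffs.append(float(np.corrcoef(res_x, res_y)[0, 1]))
    return coeffs
```

Running a model over `latin_hypercube` samples and passing the inputs and outputs to `prcc` ranks the parameters by influence, in the spirit of the 42-parameter LPM and 23-parameter FEM analyses described above.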
Present and future free-space quantum key distribution
NASA Astrophysics Data System (ADS)
Nordholt, Jane E.; Hughes, Richard J.; Morgan, George L.; Peterson, C. Glen; Wipf, Christopher C.
2002-04-01
Free-space quantum key distribution (QKD), more popularly known as quantum cryptography, uses single-photon free-space optical communications to distribute the secret keys required for secure communications. At Los Alamos National Laboratory we have demonstrated a fully automated system that is capable of operation at any time of day over a horizontal range of several kilometers. This has proven the technology capable of operation from a spacecraft to the ground, opening up the possibility of QKD between any group of users anywhere on Earth. This system, the prototyping of a new system for use on a spacecraft, and the techniques required for world-wide quantum key distribution will be described. The operational parameters and performance of a system designed to operate between low Earth orbit (LEO) and the ground will also be discussed.
Motion robust remote photoplethysmography in CIELab color space
Yang, Yuting; Liu, Chenbin; Yu, Hui; Shao, Dangdang; Tsow, Francis; Tao, Nongjian
2016-01-01
Remote photoplethysmography (rPPG) is attractive for tracking a subject's physiological parameters without requiring a wearable device. However, rPPG is known to be prone to body-movement-induced artifacts, making it unreliable in realistic situations. Here we report a method to minimize the movement-induced artifacts. The method selects an optimal region of interest (ROI) automatically, prunes frames in which the ROI is not clearly captured (e.g., the subject moves out of view), and analyzes rPPG using an algorithm in CIELab color space, rather than the widely used RGB color space. We show that body movement primarily affects image intensity rather than chromaticity, and separating chromaticity from intensity in CIELab color space thus helps achieve effective reduction of the movement-induced artifacts. We validate the method in a pilot study including 17 people with diverse skin tones. PMID:27812695
O-6 Optical Property Degradation of the Hubble Space Telescope's Wide Field Camera-2 Pick Off Mirror
NASA Technical Reports Server (NTRS)
McNamara, Karen M.; Hughes, D. W.; Lauer, H. V.; Burkett, P. J.; Reed, B. B.
2011-01-01
Degradation in the performance of optical components can be greatly affected by exposure to the space environment. Many factors can contribute to such degradation, including surface contaminants; outgassing; vacuum, UV, and atomic oxygen exposure; temperature cycling; or combinations of these parameters. In-situ observations give important clues to degradation processes, but there are relatively few opportunities to correlate those observations with post-flight ground analyses. The return of instruments from the Hubble Space Telescope (HST) after its final servicing mission in May 2009 provided such an opportunity. Among the instruments returned from HST was the Wide-Field Planetary Camera-2 (WFPC-2), which had been exposed to the space environment for 16 years. This work focuses on identifying the sources of degradation in the performance of the pick-off mirror (POM) from WFPC-2. Techniques including surface reflectivity measurements, spectroscopic ellipsometry, FTIR (and ATR-FTIR) analyses, SEM/EDS, X-ray photoelectron spectroscopy (XPS) with and without ion milling, and wet and dry physical surface sampling were performed. Destructive and contact analyses took place only after completion of the non-destructive measurements. Spectroscopic ellipsometry was then repeated to determine the extent of contaminant removal by the destructive techniques, providing insight into the nature and extent of polymerization of the contaminant layer.
Microwave Plasma Propulsion Systems for Defensive Counter-Space
2007-09-01
microwave/ECR-based propulsion system. No electron cathode or neutralizer is needed. There are no electrodes to erode, sputter or damage. Measurement of...without the need for a cathode neutralizer, a wide range of performance parameters can be achieved by selecting the size and length of the resonance...
Advanced electrical power, distribution and control for the Space Transportation System
NASA Astrophysics Data System (ADS)
Hansen, Irving G.; Brandhorst, Henry W., Jr.
1990-08-01
High-frequency power distribution and management is a technology at a ready state of development. Such a system employs the fewest power conversion steps and uses zero-current switching for those steps. It yields the highest efficiency and the lowest total system parts count when equivalent systems are compared. The operating voltage and frequency are application-specific trade-off parameters; however, a 20 kHz system is suitable for a wide range of systems.
Advanced electrical power, distribution and control for the Space Transportation System
NASA Technical Reports Server (NTRS)
Hansen, Irving G.; Brandhorst, Henry W., Jr.
1990-01-01
High-frequency power distribution and management is a technology at a ready state of development. Such a system employs the fewest power conversion steps and uses zero-current switching for those steps. It yields the highest efficiency and the lowest total system parts count when equivalent systems are compared. The operating voltage and frequency are application-specific trade-off parameters; however, a 20 kHz system is suitable for a wide range of systems.
NASA Astrophysics Data System (ADS)
Wolff, Schuyler G.
2018-01-01
The study of circumstellar disks at a variety of evolutionary stages is essential to understand the physical processes leading to planet formation. The recent development of high contrast instruments designed to directly image the structures surrounding nearby stars, such as the Gemini Planet Imager (GPI), and coronagraphic data from the Hubble Space Telescope (HST) have made detailed studies of circumstellar systems possible. In my thesis work I detail the observation and characterization of three systems. GPI polarization data for the transition disk PDS 66 show a double ring and gap structure with a temporally variable azimuthal asymmetry. This evolved morphology could indicate shadowing from some feature in the innermost regions of the disk, a gap-clearing planet, or a localized change in the dust properties of the disk. Millimeter continuum data of the DH Tau system place limits on the dust mass that is contributing to the strong accretion signature on the wide-separation planetary-mass companion DH Tau b. The lower than expected dust mass constrains the possible formation mechanism, with core accretion followed by dynamical scattering being the most likely. Finally, I present HST scattered light observations of the flared, edge-on protoplanetary disk ESO Hα 569. I combine these data with a spectral energy distribution to model the key structural parameters such as the geometry (disk outer radius, vertical scale height, radial flaring profile), total mass, and dust grain properties in the disk using the radiative transfer code MCFOST. In order to conduct this work, I developed a new tool set to optimize the fitting of disk parameters using the MCMC code emcee to efficiently explore the high-dimensional parameter space. 
This approach allows us to self-consistently and simultaneously fit a wide variety of observables in order to place constraints on the physical properties of a given disk, while also rigorously assessing the uncertainties in those derived properties.
Wind-tunnel based definition of the AFE aerothermodynamic environment. [Aeroassist Flight Experiment
NASA Technical Reports Server (NTRS)
Miller, Charles G.; Wells, W. L.
1992-01-01
The Aeroassist Flight Experiment (AFE), scheduled to be performed in 1994, will serve as a precursor for aeroassisted space transfer vehicles (ASTVs) and is representative of entry concepts being considered for missions to Mars. The rationale for the AFE is reviewed briefly, as are the various experiments carried aboard the vehicle. The approach used to determine hypersonic aerodynamic and aerothermodynamic characteristics over a wide range of simulation parameters in ground-based facilities is presented. Facilities, instrumentation, and test procedures employed in the establishment of the database are discussed. Measurements illustrating the effects of hypersonic simulation parameters, particularly normal-shock density ratio (an important parameter for hypersonic blunt bodies), and attitude on aerodynamic and aerothermodynamic characteristics are presented, and predictions from computational fluid dynamics (CFD) computer codes are compared with the measurements.
A case study in nonconformance and performance trend analysis
NASA Technical Reports Server (NTRS)
Maloy, Joseph E.; Newton, Coy P.
1990-01-01
As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.
The HelCat dual-source plasma device.
Lynn, Alan G; Gilmore, Mark; Watts, Christopher; Herrea, Janis; Kelly, Ralph; Will, Steve; Xie, Shuangwei; Yan, Lincan; Zhang, Yue
2009-10-01
The HelCat (Helicon-Cathode) device has been constructed to support a broad range of basic plasma science experiments relevant to the areas of solar physics, laboratory astrophysics, plasma nonlinear dynamics, and turbulence. These research topics require a relatively large plasma source capable of operating over a broad region of parameter space with a plasma duration up to at least several milliseconds. To achieve these parameters a novel dual-source system was developed utilizing both helicon and thermionic cathode sources. Plasma parameters of n_e ≈ 0.5-50 × 10^18 m^-3 and T_e ≈ 3-12 eV allow access to a wide range of collisionalities important to the research. The HelCat device and initial characterization of plasma behavior during dual-source operation are described.
NASA Astrophysics Data System (ADS)
Qin, B.; Li, L.; Li, S.
2018-04-01
Tiangong-2 is the first space laboratory in China, launched on September 15, 2016. The Wide-band Imaging Spectrometer is a medium-resolution multispectral imager on Tiangong-2. In this paper, the authors introduce the indexes and parameters of the Wide-band Imaging Spectrometer, make an objective evaluation of its data quality in terms of radiation quality, image sharpness, and information content, and compare the data quality evaluation results with those of Landsat-8. Although the data quality of the Wide-band Imaging Spectrometer falls short of Landsat-8 OLI data in terms of signal-to-noise ratio, clarity, and entropy, it has more bands, narrower bandwidths, and a wider swath than OLI, which make it a useful remote sensing data source for the classification and identification of large- and medium-scale ground objects. In the future, Wide-band Imaging Spectrometer data will be widely applied in land cover classification, ecological environment assessment, marine and coastal zone monitoring, crop identification and classification, and other related areas.
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
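The list-based cooling schedule described above can be sketched compactly. This is an illustrative Python version, not the authors' code: the seeding rule (initial acceptance probability `p0`), the 2-opt neighbourhood, and the parameter names are assumptions. The list is seeded from initial candidate moves, its maximum drives Metropolis acceptance, and each accepted uphill move replaces that maximum with the temperature -Δ/ln r implied by the move, so the schedule adapts to the solution-space topology:

```python
import math
import random

def lbsa_tsp(dist, n_iters=20000, list_len=120, p0=0.1, seed=None):
    """List-based simulated annealing for the TSP (illustrative sketch)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def neighbour(t):                      # random 2-opt style segment reversal
        i, j = sorted(rng.sample(range(n), 2))
        return t[:i] + t[i:j + 1][::-1] + t[j + 1:]

    cur = length(tour)
    # Seed the temperature list so early uphill moves are accepted w.p. ~ p0.
    temps = [abs(length(neighbour(tour)) - cur) / -math.log(p0) + 1e-12
             for _ in range(list_len)]

    best, best_len = tour[:], cur
    for _ in range(n_iters):
        t_max = max(temps)                 # list maximum drives acceptance
        cand = neighbour(tour)
        cand_len = length(cand)
        d = cand_len - cur
        r = rng.random()
        if d < 0 or (r > 0 and r < math.exp(-d / t_max)):
            if d > 0:                      # adapt list: replace max with -d/ln(r)
                temps.remove(t_max)
                temps.append(-d / math.log(r))
            tour, cur = cand, cand_len
            if cur < best_len:
                best, best_len = tour[:], cur
    return best, best_len
```

Note that no hand-tuned cooling rate appears: `list_len` and `p0` stand in for the list parameters whose robustness the paper reports.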
Tool Support for Parametric Analysis of Large Software Simulation Systems
NASA Technical Reports Server (NTRS)
Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony
2008-01-01
The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
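The n-factor combinatorial idea above can be illustrated generically. The following brute-force Python sketch is an assumption-laden illustration, not the Trick-based tool's API: it covers every value combination for every group of n parameters while holding the remaining parameters at defaults (here, the first listed level):

```python
import itertools

def n_factor_cases(levels, n=2):
    """Enumerate test cases covering all value combinations of every group
    of n parameters; other parameters sit at their default (first) level.

    levels -- dict mapping parameter name to its list of candidate values.
    """
    names = sorted(levels)
    cases = []
    for group in itertools.combinations(names, n):
        for values in itertools.product(*(levels[g] for g in group)):
            case = {k: levels[k][0] for k in names}       # defaults
            case.update(dict(zip(group, values)))
            cases.append(case)
    return cases
```

The payoff appears as the parameter count grows: for 10 binary parameters at n = 2, this yields 45 × 4 = 180 cases versus 1024 for the full factorial, while still exercising every pairwise interaction.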
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the satellites Spitzer and Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that computes the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python codes.
Cope, Davis; Blakeslee, Barbara; McCourt, Mark E
2013-05-01
The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given as an appendix.
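The closed forms behind this reformulation follow directly from the standard 2-D DOG definition. A sketch under that assumption (function names are illustrative), using the radial frequency response R(ω) = exp(-σc²ω²/2) − b·exp(-σs²ω²/2) of a unit-mass center Gaussian minus a balanced surround:

```python
import math

def dog_response(omega, sc, ss, b):
    """Radial frequency response of a 2-D DOG filter (center sigma sc,
    surround sigma ss > sc, balance b)."""
    return math.exp(-(sc * omega) ** 2 / 2) - b * math.exp(-(ss * omega) ** 2 / 2)

def is_bandpass(sc, ss, b):
    """Bandpass iff the response initially rises away from omega = 0,
    which reduces to b * ss**2 > sc**2; otherwise low-pass."""
    return b * ss ** 2 > sc ** 2

def optimal_frequency(sc, ss, b):
    """Peak-response frequency for a bandpass DOG (set dR/d(omega^2) = 0)."""
    return math.sqrt(2 * math.log(b * ss ** 2 / sc ** 2) / (ss ** 2 - sc ** 2))

def zero_crossing_radius(sc, ss, b):
    """Directly observable radius where the spatial DOG profile changes sign."""
    return math.sqrt(2 * sc ** 2 * ss ** 2
                     * math.log(ss ** 2 / (b * sc ** 2)) / (ss ** 2 - sc ** 2))
```

For example, sc = 1, ss = 2, b = 0.8 is bandpass (0.8 · 4 > 1) while b = 0.2 is low-pass, and the optimal gain discussed above is `dog_response(optimal_frequency(sc, ss, b), sc, ss, b) / dog_response(0, sc, ss, b)`.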
Optimization Strategies for Single-Stage, Multi-Stage and Continuous ADRs
NASA Technical Reports Server (NTRS)
Shirron, Peter J.
2014-01-01
Adiabatic Demagnetization Refrigerators (ADR) have many advantages that are prompting a resurgence in their use in spaceflight and laboratory applications. They are solid-state coolers capable of very high efficiency and very wide operating range. However, their low energy storage density translates to larger mass for a given cooling capacity than is possible with other refrigeration techniques. The interplay between refrigerant mass and other parameters such as magnetic field and heat transfer points in multi-stage ADRs gives rise to a wide parameter space for optimization. This paper first presents optimization strategies for single ADR stages, focusing primarily on obtaining the largest cooling capacity per stage mass, then discusses the optimization of multi-stage and continuous ADRs in the context of the coordinated heat transfer that must occur between stages. The goal for the latter is usually to obtain the largest cooling power per mass or volume, but there can also be many secondary objectives, such as limiting instantaneous heat rejection rates and producing intermediate temperatures for cooling of other instrument components.
Digit replacement: A generic map for nonlinear dynamical systems.
García-Morales, Vladimir
2016-09-01
A simple discontinuous map is proposed as a generic model for nonlinear dynamical systems. The orbit of the map admits exact solutions for wide regions in parameter space and the method employed (digit manipulation) allows the mathematical design of useful signals, such as regular or aperiodic oscillations with specific waveforms, the construction of complex attractors with nontrivial properties as well as the coexistence of different basins of attraction in phase space with different qualitative properties. A detailed analysis of the dynamical behavior of the map suggests how the latter can be used in the modeling of complex nonlinear dynamics including, e.g., aperiodic nonchaotic attractors and the hierarchical deposition of grains of different sizes on a surface.
Space-Based Information Infrastructure Architecture for Broadband Services
NASA Technical Reports Server (NTRS)
Price, Kent M.; Inukai, Tom; Razdan, Rajendev; Lazeav, Yvonne M.
1996-01-01
This study addressed four tasks: (1) identify satellite-addressable information infrastructure markets; (2) perform network analysis for space-based information infrastructure; (3) develop conceptual architectures; and (4) economically assess the architectures. The report concludes that satellites will have a major role in the national and global information infrastructure, requiring seamless integration between terrestrial and satellite networks. The proposed LEO, MEO, and GEO satellite systems have widely varying characteristics, including delay, delay variation, poorer link quality, and beam/satellite handover. The barriers to seamless interoperability between satellite and terrestrial networks are discussed. These barriers are the lack of compatible parameters, standards, and protocols, which are presently being evaluated and reduced.
Program manual for ASTOP, an Arbitrary space trajectory optimization program
NASA Technical Reports Server (NTRS)
Horsewood, J. L.
1974-01-01
The ASTOP program (an Arbitrary Space Trajectory Optimization Program), designed to generate optimum low-thrust trajectories in an N-body field while satisfying selected hardware and operational constraints, is presented. The trajectory is divided into a number of segments or arcs over which the control is held constant. This constant control over each arc is optimized using a parameter optimization scheme based on gradient techniques. A modified Encke formulation of the equations of motion is employed. The program provides a wide range of constraint, end-condition, and performance-index options. The basic approach is conducive to future expansion of features, such as the incorporation of new constraints and the addition of new end conditions.
NASA Astrophysics Data System (ADS)
Miloi, Mădălina Mihaela; Goryunov, Semyon; Kulin, German
2018-04-01
A wide range of problems in neutron optics is well described by a theory based on the effective potential model. It has been hypothesized that the concept of the effective potential in neutron optics has a limited region of validity and ceases to be correct in the case of giant acceleration of matter. To test this hypothesis, a new ultracold neutron experiment was proposed for observing neutron interaction with a potential structure oscillating in space. The report focuses on model calculations of the topography of a sample surface oscillating in space. These calculations are necessary to find the optimal parameters and geometry of the planned experiment.
NASA Astrophysics Data System (ADS)
Su, Yun-Ting; Hu, Shuowen; Bethel, James S.
2017-05-01
Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.
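The angular-distance formulation itself is not given in the abstract, so the sketch below uses a common PCA baseline for the same task instead, assuming the cylinder is densely sampled along a length much greater than its radius. It is an illustration of cylinder parameter estimation in general, not the proposed algorithm or the Hough/inlier methods it is compared against.

```python
import numpy as np

def fit_cylinder_pca(points):
    """Baseline cylinder fit: principal axis plus mean point-to-axis distance.

    Assumes `points` (N x 3) densely cover a cylinder whose length is much
    larger than its radius, so the dominant principal component of the
    centered cloud approximates the axis direction.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Right singular vector with the largest singular value = dominant axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Radius: mean distance from the points to the fitted axis line.
    proj = centered @ axis
    radial = centered - np.outer(proj, axis)
    radius = np.linalg.norm(radial, axis=1).mean()
    return centroid, axis, radius
```

On a synthetic unit-radius cylinder along the z-axis, this recovers the axis direction (up to sign) and radius closely; the paper's angular-distance method targets the harder cases of noise, outliers, and short or sparsely sampled cylinders where such a baseline degrades.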
An Analytic Model for the Success Rate of a Robotic Actuator System in Hitting Random Targets.
Bradley, Stuart
2015-11-20
Autonomous robotic systems are increasingly being used in a wide range of applications such as precision agriculture, medicine, and the military. These systems share common features, which often include an action by an "actuator" interacting with a target. While simulations and measurements exist for the success rate of hitting targets by some systems, there is a dearth of analytic models which can give insight into, and guidance on optimization of, new robotic systems. The present paper develops a simple model for estimating the success rate for hitting random targets from a moving platform. The model has two main dimensionless parameters: the ratio of actuator spacing to target diameter, and the ratio of platform distance moved (between actuator "firings") to the target diameter. It is found that regions of parameter space having a specified high success rate are described by simple equations, providing guidance on design. The role of a "cost function" is introduced which, when minimized, optimizes design, operating, and risk-mitigation costs.
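The paper's closed-form model is not reproduced in the abstract, but the two dimensionless parameters invite a quick Monte Carlo check under a toy geometry of my own (not the author's): firing points form a grid with across-track spacing equal to the actuator spacing and along-track spacing equal to the platform movement between firings, and a target is hit if some firing point lands within half a target diameter of its center.

```python
import numpy as np

def hit_rate(spacing_over_d, move_over_d, n=200_000, seed=1):
    """Monte Carlo success rate for random targets of unit diameter.

    spacing_over_d: actuator spacing / target diameter
    move_over_d:    platform movement between firings / target diameter
    """
    rng = np.random.default_rng(seed)
    s, d = spacing_over_d, move_over_d   # lengths in units of target diameter
    # A random target center lands uniformly in one cell of the firing grid.
    u = rng.uniform(0.0, s, n)
    v = rng.uniform(0.0, d, n)
    # Offset to the nearest firing point (nearest grid corner).
    dx = np.minimum(u, s - u)
    dy = np.minimum(v, d - v)
    return np.mean(dx**2 + dy**2 <= 0.25)   # hit iff within radius D/2 = 0.5
```

Two limits sanity-check the setup: a dense grid (both ratios 0.5) hits every target, while a sparse grid approaches the area ratio pi/(4 s d), the kind of simple relation the analytic model captures.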
NASA Technical Reports Server (NTRS)
Schatte, C.; Grindeland, R.; Callahan, P.; Berry, W.; Funk, G.; Lencki, W.
1987-01-01
The flight of two squirrel monkeys and 24 rats on Spacelab-3 was the first mission to provide hands-on maintenance of animals in a laboratory environment. With few exceptions, the animals grew and behaved normally, were free of chronic stress, and differed from ground controls only for gravity-dependent parameters. One of the monkeys exhibited symptoms of space sickness similar to those observed in humans, which suggests squirrel monkeys may be good models for studying the space adaptation syndrome. Among the wide variety of parameters measured in the rats, most notable was the dramatic loss of muscle mass and increased fragility of long bones. Other interesting rat findings were those of suppressed interferon production by spleen cells, defective release of growth hormone by somatotrophs, possible dissociation of circadian pacemakers, changes in hepatic lipid and carbohydrate metabolism, and hypersensitivity of marrow cells to erythropoietin. These results portend a strong role for animals in identifying and elucidating the physiological and anatomical responses of mammals to microgravity.
Data Analysis of the Floating Potential Measurement Unit aboard the International Space Station
NASA Technical Reports Server (NTRS)
Barjatya, Aroh; Swenson, Charles M.; Thompson, Donald C.; Wright, Kenneth H., Jr.
2009-01-01
We present data from the Floating Potential Measurement Unit (FPMU), which is deployed on the starboard (S1) truss of the International Space Station. The FPMU is a suite of instruments capable of redundant measurements of various plasma parameters. The instrument suite consists of a Floating Potential Probe, a Wide-Sweeping Spherical Langmuir Probe, a Narrow-Sweeping Cylindrical Langmuir Probe, and a Plasma Impedance Probe. This paper gives a brief overview of the instrumentation and the received data quality, and then presents the algorithm used to reduce I-V curves to plasma parameters. Several hours of data are presented from August 5th, 2006 and March 3rd, 2007. The FPMU-derived plasma density and temperatures are compared with the International Reference Ionosphere (IRI) and USU-Global Assimilation of Ionospheric Measurements (USU-GAIM) models. Our results show that the derived in-situ density matches the USU-GAIM model better than the IRI, and the derived in-situ temperatures are comparable to the average temperatures given by the IRI.
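The FPMU reduction algorithm itself is not given in the abstract, but the core of any Langmuir-probe I-V reduction is extracting the electron temperature from the exponential electron-retardation region of the curve. A textbook sketch with synthetic data (not the FPMU code; the sweep range, temperature, and noise level are made up):

```python
import numpy as np

def electron_temperature(volts, current):
    """Fit Te (in eV) from the electron-retardation region of an I-V curve.

    In the retardation region I_e = I_e0 * exp((V - V_p) / Te), so the
    slope of ln(I_e) versus V equals 1/Te.
    """
    slope, _ = np.polyfit(volts, np.log(current), 1)
    return 1.0 / slope

# Synthetic retardation-region sweep: Te = 0.15 eV, arbitrary current scale.
true_te = 0.15
v = np.linspace(-1.0, -0.2, 50)               # volts below plasma potential
i = 1e-6 * np.exp(v / true_te)                # amps
rng = np.random.default_rng(0)
i_noisy = i * np.exp(rng.normal(0.0, 0.02, v.size))   # 2% multiplicative noise
te = electron_temperature(v, i_noisy)
```

The density then follows from the saturation current given Te, which is where the redundancy between the FPMU's spherical and cylindrical probes pays off.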
2008-01-01
[Fragmentary record; only list-of-figures entries and sentence fragments survive.] Figure 11: Screenshot of OrthoPro seam lines (pink), tiles (blue), and photos (green). Figure 12: Calibration craters. Aerial targets were used for the orthophotography data collection, one per data collection tile (1 sq km). Orthophotography data were collected concurrently with the LiDAR data collection, with the orthophoto images based on the LiDAR flight-line spacing parameters.
NASA Astrophysics Data System (ADS)
Cenek, Martin; Dahl, Spencer K.
2016-11-01
Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.
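The statistical framework itself is not specified in the abstract, but its bookkeeping core, recording how often agents move from one behavior pattern to another, amounts to estimating a transition-probability matrix over pattern labels. A small illustration of that idea (my own minimal version, not the authors' implementation):

```python
import numpy as np

def transition_matrix(label_sequences, n_patterns):
    """Estimate P[i, j] = probability that an agent showing behavior pattern i
    shows pattern j at the next time step, from per-agent label sequences."""
    counts = np.zeros((n_patterns, n_patterns))
    for seq in label_sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows for patterns never observed as a source state stay all-zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Two agents, three behavior patterns (0, 1, 2):
P = transition_matrix([[0, 0, 1, 2, 2], [0, 1, 1, 2, 0]], n_patterns=3)
```

Comparing such matrices across parameter settings is one concrete way to make ABM executions computationally comparable, as the paper proposes.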
Accuracy limit of rigid 3-point water models
NASA Astrophysics Data System (ADS)
Izadi, Saeed; Onufriev, Alexey V.
2016-08-01
Classical 3-point rigid water models are the most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in the liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of the same class (TIP3P and SPC/E) in reproducing a comprehensive set of liquid bulk properties, over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water — a characteristic dependence of hydration free energy on the sign of the solute charge — in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3P-FB and H2ODC, each developed with its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and the accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.
The Mira-Titan Universe. II. Matter Power Spectrum Emulation
NASA Astrophysics Data System (ADS)
Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana; Upadhye, Amol; Bingham, Derek; Habib, Salman; Higdon, David; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas
2017-09-01
We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc^-1 and redshift z ≤ 2. In addition to covering the standard set of ΛCDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with 16 medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage over a wide k-range; the data set generated as part of this project is more than 1.2 Pbytes. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-up results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches. The new emulator code is publicly available.
NASA Astrophysics Data System (ADS)
Stromqvist Vetelino, Frida; Borbath, Michael R.; Andrews, Larry C.; Phillips, Ronald L.; Burdge, Geoffrey L.; Chin, Peter G.; Galus, Darren J.; Wayne, David; Pescatore, Robert; Cowan, Doris; Thomas, Frederick
2005-08-01
The Shuttle Landing Facility runway at the Kennedy Space Center in Cape Canaveral, Florida is almost 5 km long and 100 m wide. Its homogeneous environment makes it a unique and ideal place for testing and evaluating EO systems. An experiment, with the goal of characterizing atmospheric parameters on the runway, was conducted in June 2005. Weather data was collected and the refractive index structure parameter was measured with a commercial scintillometer. The inner scale of turbulence was inferred from wind speed measurements and surface roughness. Values of the crosswind speed obtained from the scintillometer were compared with wind measurements taken by a weather station.
A Flexible and High-Precision Calibration Method for Binocular Structured Light Scanning System
Yuan, Jianying; Wang, Qiong; Li, Bailin
2014-01-01
3D (three-dimensional) structured light scanning systems are widely used in the fields of reverse engineering, quality inspection, and so forth. Camera calibration is key to scanning precision. Currently, a finely machined 2D (two-dimensional) or 3D calibration reference object is usually used to achieve high calibration precision, but such objects are difficult to handle and expensive. In this paper, a novel calibration method is proposed, using only a scale bar and some artificial coded targets placed randomly in the measuring volume. The principle of the proposed method is based on hierarchical self-calibration and bundle adjustment. Initial intrinsic parameters are obtained from the images. Initial extrinsic parameters in projective space are estimated with the method of factorization and then upgraded to Euclidean space with the orthogonality of the rotation matrix and the rank-3 constraint on the absolute quadric. Finally, all camera parameters are refined through bundle adjustment. Real experiments show that the proposed method is robust and reaches the same precision level as results obtained with a delicate artificial reference object, while the hardware cost is very low compared with current calibration methods used in 3D structured light scanning systems. PMID:25202736
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
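The pruning-and-merging details are the paper's; a stripped-down sketch of the central idea, maximum-likelihood weights over a fixed, dense, logarithmically spaced grid of time constants followed by pruning of negligible components, could look like this (an illustration under my own simplifications, not the authors' code):

```python
import numpy as np

def fit_exp_mixture(t, n_grid=40, n_iter=1000, prune=0.01):
    """EM for mixture weights over fixed, log-spaced exponential components.

    With the time constants held fixed, weight estimation is a convex ML
    problem, so no starting guesses for the exponentials are needed.
    Weights below `prune` are discarded and contiguous surviving grid
    points are merged into single components.
    """
    taus = np.logspace(-1, 3, n_grid)
    pdf = np.exp(-t[:, None] / taus) / taus          # (N, K) component densities
    w = np.full(n_grid, 1.0 / n_grid)
    for _ in range(n_iter):
        resp = w * pdf
        resp /= resp.sum(axis=1, keepdims=True)      # E-step: responsibilities
        w = resp.mean(axis=0)                        # M-step: updated weights
    idx = np.flatnonzero(w > prune)
    # Merge runs of adjacent surviving grid points into single components.
    components = []
    for group in np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1):
        ws = w[group]
        components.append((np.sum(ws * taus[group]) / ws.sum(), ws.sum()))
    return components   # list of (time constant, area)
```

On simulated dwell times drawn from two well-separated exponentials, the surviving weight concentrates near the true time constants with roughly the true areas, mirroring the automated detection the paper demonstrates.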
(abstract) Space Science with Commercial Funding
NASA Technical Reports Server (NTRS)
1994-01-01
The world-wide recession, and other factors, have led to reduced or flat budgets in real terms for space agencies around the world. Consequently, space science projects and proposals have been under pressure and seemingly will continue to be for some years into the future. A new concept for space science funding is underway at JPL. A partnership has been arranged with a commercial, for-profit company that proposes to implement a bandwidth-on-demand information and telephone system through a network of low earth orbiting (LEO) satellites. This network will consist of almost 1000 satellites operating in polar orbit at Ka-band. JPL has negotiated an agreement with this company that each satellite will also carry one or more science instruments for astrophysics, astronomy, and earth observations. This paper discusses the details of the partnership and its financial arrangements. It describes the technical parameters on which the science is based, such as the 60 GHz wideband inter-satellite links and the frequency, time, and position control, and it also discusses the complementarity of this commercially funded space science with conventional space science.
NASA Astrophysics Data System (ADS)
Shahzad, Munir; Sengupta, Pinaki
2017-08-01
We study the Shastry-Sutherland Kondo lattice model with additional Dzyaloshinskii-Moriya (DM) interactions, exploring the possible magnetic phases in its multi-dimensional parameter space. Treating the local moments as classical spins and using a variational ansatz, we identify the parameter ranges over which various common magnetic orderings are potentially stabilized. Our results reveal that the competing interactions result in a heightened susceptibility towards a wide range of spin configurations including longitudinal ferromagnetic and antiferromagnetic order, coplanar flux configurations and most interestingly, multiple non-coplanar configurations including a novel canted-flux state as the different Hamiltonian parameters like electron density, interaction strengths and degree of frustration are varied. The non-coplanar and non-collinear magnetic ordering of localized spins behave like emergent electromagnetic fields and drive unusual transport and electronic phenomena.
A potential hyperspectral remote sensing imager for water quality measurements
NASA Astrophysics Data System (ADS)
Zur, Yoav; Braun, Ofer; Stavitsky, David; Blasberger, Avigdor
2003-04-01
Utilization of panchromatic and multispectral remote sensing imagery is widespread and has become an established business for commercial suppliers of such imagery, like ISI and others. Some emerging technologies are being used to generate hyperspectral imagery (HSI) from aircraft as well as other platforms. The commercialization of this technology for remote sensing from space is still questionable and depends upon several parameters, including maturity, cost, market reception, and many others. HSI can be used in a variety of applications in agriculture, urban mapping, geology, and other fields. One outstanding potential use of HSI, studied in this paper, is water quality monitoring. Water quality monitoring is becoming a major area of interest in HSI due to the increase in water demand around the globe. The ability to monitor water quality in real time with both spatial and temporal resolution is one of the advantages of remote sensing. This ability is not limited to measurements of oceans and inland water, but can be applied to drinking and irrigation water reservoirs as well. HSI in the UV-VNIR has the ability to measure a wide range of constituents that define water quality. Among the constituents that can be measured are the pigment concentrations of various algae (chlorophyll a and c, carotenoids, and phycocyanin), making it possible to identify the algal phyla. Other parameters that can be measured are TSS (total suspended solids), turbidity, BOD (biological oxygen demand), and hydrocarbons. The study specifies the properties of such a space-borne device that result from the spectral signatures and the absorption bands of the constituents in question. Other parameters considered are the measurement repeat cycle, the spatial characteristics of the sensor, and its SNR.
Deng, Zhi-De; Lisanby, Sarah H; Peterchev, Angel V
2013-12-01
Understanding the relationship between the stimulus parameters of electroconvulsive therapy (ECT) and the electric field characteristics could guide studies on improving risk/benefit ratio. We aimed to determine the effect of current amplitude and electrode size and spacing on the ECT electric field characteristics, compare ECT focality with magnetic seizure therapy (MST), and evaluate stimulus individualization by current amplitude adjustment. Electroconvulsive therapy and double-cone-coil MST electric field was simulated in a 5-shell spherical human head model. A range of ECT electrode diameters (2-5 cm), spacing (1-25 cm), and current amplitudes (0-900 mA) was explored. The head model parameters were varied to examine the stimulus current adjustment required to compensate for interindividual anatomical differences. By reducing the electrode size, spacing, and current, the ECT electric field can be more focal and superficial without increasing scalp current density. By appropriately adjusting the electrode configuration and current, the ECT electric field characteristics can be made to approximate those of MST within 15%. Most electric field characteristics in ECT are more sensitive to head anatomy variation than in MST, especially for close electrode spacing. Nevertheless, ECT current amplitude adjustment of less than 70% can compensate for interindividual anatomical variability. The strength and focality of ECT can be varied over a wide range by adjusting the electrode size, spacing, and current. If desirable, ECT can be made as focal as MST while using simpler stimulation equipment. Current amplitude individualization can compensate for interindividual anatomical variability.
On the formulation of a minimal uncertainty model for robust control with structured uncertainty
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1991-01-01
In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or unstructured uncertainty from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop.
The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix, delta, and constructing the state-space representation of P(s). Three examples are presented to illustrate the procedure.
NASA Astrophysics Data System (ADS)
Espinoza, Néstor; Jordán, Andrés
2016-04-01
Very precise measurements of exoplanet transit light curves from both ground- and space-based observatories now make it possible to fit the limb-darkening coefficients in the transit-fitting procedure rather than fix them to theoretical values. This strategy has been shown to give better results, as fixing the coefficients to theoretical values can give rise to important systematic errors that directly impact the physical properties of the system derived from such light curves, such as the planetary radius. However, studies of the effect of limb-darkening assumptions on the retrieved parameters have mostly focused on the widely used quadratic limb-darkening law, leaving out other proposed laws that are either simpler or better descriptions of model intensity profiles. In this work, we show that laws such as the logarithmic, square-root and three-parameter laws do a better job than the quadratic and linear laws when deriving parameters from transit light curves, both in terms of bias and precision, for a wide range of situations. We therefore recommend studying which law to use on a case-by-case basis. We provide code to guide the decision of when to use each of these laws and to select the optimal one in a mean-square-error sense, which we note depends on both stellar and transit parameters. Finally, we demonstrate that the so-called exponential law is non-physical, as it typically produces negative intensities close to the limb, and should therefore not be used.
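The laws compared above are simple functions of μ = cos θ measured from the disk center. A minimal numerical sketch of these intensity profiles, illustrating why the exponential law misbehaves at the limb (the coefficient values below are illustrative, not fitted values from the paper):

```python
import numpy as np

def intensity(mu, law, coeffs):
    """Normalized intensity profile I(mu)/I(1) for common limb-darkening
    laws, with mu = cos(theta) measured from disk center."""
    if law == "linear":
        u, = coeffs
        return 1 - u * (1 - mu)
    if law == "quadratic":
        u1, u2 = coeffs
        return 1 - u1 * (1 - mu) - u2 * (1 - mu) ** 2
    if law == "square-root":
        s1, s2 = coeffs
        return 1 - s1 * (1 - mu) - s2 * (1 - np.sqrt(mu))
    if law == "logarithmic":
        l1, l2 = coeffs
        return 1 - l1 * (1 - mu) - l2 * mu * np.log(mu)
    if law == "exponential":
        e1, e2 = coeffs
        return 1 - e1 * (1 - mu) - e2 / (1 - np.exp(mu))
    raise ValueError(f"unknown law: {law}")

# The 1/(1 - e^mu) term diverges as mu -> 0, so the exponential law
# cannot stay physical at the limb (illustrative coefficients).
print(intensity(1e-3, "exponential", (0.1, -0.05)))  # large negative value
print(intensity(1e-3, "quadratic", (0.3, 0.2)))      # stays near 1 - u1 - u2
```

The divergent term is the reason the exponential law cannot be rescued by any choice of coefficients near the limb.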
Multiscale Modeling of Stiffness, Friction and Adhesion in Mechanical Contacts
2012-02-29
over a lateral length l scales as a power law: h ∝ l^H, where H is called the Hurst exponent. For typical experimental surfaces, H ranges from 0.5 to 0.8 ... surfaces with a wide range of Hurst exponents using fully atomistic calculations and the Green's function method. A simple relation like Eq. (2 ... described above to explore a full range of parameter space with different rms roughness h0, rms slope h'0, Hurst exponent H, adhesion energy
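The self-affine scaling h ∝ l^H quoted in the fragment above can be reproduced numerically by Fourier-filtering white noise so that the 1-D power spectrum falls off as q^-(1+2H). A minimal sketch with a generic synthetic profile (not the report's atomistic surfaces):

```python
import numpy as np

def self_affine_profile(n, hurst, seed=0):
    """1-D self-affine rough profile via Fourier filtering: a power
    spectrum C(q) ~ q^-(1+2H) gives rms height growing as l^H."""
    rng = np.random.default_rng(seed)
    q = np.fft.rfftfreq(n)
    q[0] = q[1]                               # avoid division by zero at q = 0
    amp = q ** (-(1 + 2 * hurst) / 2)         # amplitude = sqrt(power spectrum)
    phase = np.exp(2j * np.pi * rng.random(len(q)))
    h = np.fft.irfft(amp * phase, n)
    return h - h.mean()

h = self_affine_profile(2 ** 16, hurst=0.7)

# rms height difference over lag l should scale as l^H
lags = np.array([16, 64, 256, 1024])
rms = np.array([np.sqrt(np.mean((h[l:] - h[:-l]) ** 2)) for l in lags])
slopes = np.diff(np.log(rms)) / np.diff(np.log(lags))
print(slopes.round(2))  # each slope should sit near the input H = 0.7
```

The recovered slopes fluctuate with the realization, so in practice one averages over many profiles or window positions.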
Zheng, Ming-Yang; Shentu, Guo-Liang; Ma, Fei; Zhou, Fei; Zhang, Hai-Ting; Dai, Yun-Qi; Xie, Xiuping; Zhang, Qiang; Pan, Jian-Wei
2016-09-01
The up-conversion single-photon detector (UCSPD) has been widely used in many research fields, including quantum key distribution, lidar, optical time-domain reflectometry, and deep-space communication. For the first time in the laboratory, we have developed an integrated four-channel all-fiber UCSPD which can work in both free-running and gated modes. This compact module can satisfy different experimental demands with adjustable detection efficiency and dark count rate. We have characterized the key parameters of the UCSPD system.
Charting the parameter space of the global 21-cm signal
NASA Astrophysics Data System (ADS)
Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan; Lotem, Matan
2017-12-01
The early star-forming Universe is still poorly constrained, with the properties of high-redshift stars, the first heating sources and reionization highly uncertain. This leaves observers planning 21-cm experiments with little theoretical guidance. In this work, we explore the possible range of high-redshift parameters including the star formation efficiency and the minimal mass of star-forming haloes; the efficiency, spectral energy distribution and redshift evolution of the first X-ray sources; and the history of reionization. These parameters are only weakly constrained by available observations, mainly the optical depth to the cosmic microwave background. We use realistic semi-numerical simulations to produce the global 21-cm signal over the redshift range z = 6-40 for each of 193 different combinations of the astrophysical parameters spanning the allowed range. We show that the expected signal fills a large parameter space, but with a fixed general shape for the global 21-cm curve. Even with our wide selection of models, we still find clear correlations between the key features of the global 21-cm signal and underlying astrophysical properties of the high-redshift Universe, namely the Ly α intensity, the X-ray heating rate and the production rate of ionizing photons. These correlations can be used to directly link future measurements of the global 21-cm signal to astrophysical quantities in a mostly model-independent way. We identify additional correlations that can be used as consistency checks.
Solis, Kyle Jameson; Martin, James E.
2012-11-01
Isothermal magnetic advection is a recently discovered method of inducing highly organized, non-contact flow lattices in suspensions of magnetic particles, using only uniform ac magnetic fields of modest strength. The initiation of these vigorous flows requires neither a thermal gradient nor a gravitational field and so can be used to transfer heat and mass in circumstances where natural convection does not occur. These advection lattices are comprised of a square lattice of antiparallel flow columns. If the column spacing is sufficiently large compared to the column length, and the flow rate within the columns is sufficiently large, then one would expect efficient transfer of both heat and mass. Otherwise, the flow lattice could act as a countercurrent heat exchanger and only mass will be efficiently transferred. Although this latter case might be useful for feeding a reaction front without extracting heat, it is likely that most interest will be focused on using IMA for heat transfer. In this paper we explore the various experimental parameters of IMA to determine which of these can be used to control the column spacing. These parameters include the field frequency, strength, and phase relation between the two field components, the liquid viscosity and particle volume fraction. We find that the column spacing can easily be tuned over a wide range, to enable the careful control of heat and mass transfer.
NASA Astrophysics Data System (ADS)
Potvin-Trottier, Laurent; Chen, Lingfeng; Horwitz, Alan Rick; Wiseman, Paul W.
2013-08-01
We introduce a new generalized theoretical framework for image correlation spectroscopy (ICS). Using this framework, we extend the ICS method in time-frequency (ν, nu) space to map molecular flow of fluorescently tagged proteins in individual living cells. Even in the presence of a dominant immobile population of fluorescent molecules, nu-space ICS (nICS) provides an unbiased velocity measurement, as well as the diffusion coefficient of the flow, without requiring filtering. We also develop and characterize a tunable frequency-filter for spatio-temporal ICS (STICS) that allows quantification of the density, the diffusion coefficient and the velocity of biased diffusion. We show that the techniques are accurate over a wide range of parameter space in computer simulation. We then characterize the retrograde flow of adhesion proteins (α6- and αLβ2-GFP integrins and mCherry-paxillin) in CHO.B2 cells plated on laminin and intercellular adhesion molecule 1 (ICAM-1) ligands respectively. STICS with a tunable frequency filter, in conjunction with nICS, measures two new transport parameters, the density and transport bias coefficient (a measure of the diffusive character of a flow/biased diffusion), showing that molecular flow in this cell system has a significant diffusive component. Our results suggest that the integrin-ligand interaction, along with the internal myosin-motor generated force, varies for different integrin-ligand pairs, consistent with previous results.
Adaptive matching of the iota ring linear optics for space charge compensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, A.; Bruhwiler, D. L.; Cook, N.
Many present and future accelerators must operate with high-intensity beams for which distortions induced by space-charge forces are among the major limiting factors. Betatron tune depression above approximately 0.1 per cell leads to significant distortions of the linear optics. Many aspects of machine operation depend on proper relations between lattice functions and phase advances, and can be improved with proper treatment of space-charge effects. We implement an adaptive algorithm for linear lattice re-matching with full account of space charge in the linear approximation for the case of Fermilab's IOTA ring. The method is based on a search for initial second moments that give a closed solution and, at the same time, satisfy a predefined set of goals for emittances, beta functions, dispersions and phase advances at and between points of interest. An iterative technique based on singular value decomposition is used to search for the optimum by varying a wide array of model parameters.
Challenges in Physical Characterization of Dim Space Objects: What Can We Learn from NEOs
NASA Astrophysics Data System (ADS)
Reddy, V.; Sanchez, J.; Thirouin, A.; Rivera-Valentin, E.; Ryan, W.; Ryan, E.; Mokovitz, N.; Tegler, S.
2016-09-01
Physical characterization of dim space objects in cis-lunar space can be a challenging task. Of particular interest to both natural and artificial space object behavior scientists are the properties beyond orbital parameters that can uniquely identify these objects. These properties include rotational state, size, shape, density and composition. A wide range of observational and non-observational factors affect our ability to characterize dim objects in cis-lunar space: for example, phase angle (the angle between Sun, target and observer), temperature, rotational variations, and particle size (for natural dim objects). Over the last two decades, space object behavior scientists studying natural dim objects have attempted to quantify and correct for a majority of these factors to enhance our situational awareness. These efforts have been primarily focused on developing laboratory spectral calibrations in a space-like environment. Calibrations developed for correcting spectral observations of natural dim objects could be applied to characterizing artificial objects, as the underlying physics is the same. The paper will summarize our current understanding of these observational and non-observational factors and present a case study showcasing the state of the art in characterization of natural dim objects.
NASA Astrophysics Data System (ADS)
Akeson, Rachel
Measuring the occurrence rate of extrasolar planets is one of the most fundamental constraints on our understanding of planets throughout the Galaxy. By studying planet populations across a wide parameter space in stellar age, type, metallicity, and multiplicity, we can inform planet formation, migration and evolution theories. The ground-based ELTs and the flagship space missions that NASA is planning in the next decades and beyond will be designed to make the first observations of potential biomarkers in the atmospheres of extrasolar planets; understanding how common these planets are and how they are distributed will be crucial for this effort. One of the most important results of the main Kepler mission was a measurement of the frequency of planets orbiting FGK dwarfs. Although that result is crucial for estimating the frequency of planetary systems orbiting middle-aged Sun-like stars, the majority of stars in the galaxy have lower masses. We propose to extend the Kepler occurrence rates to lower stellar masses by using publicly available data from the second-generation K2 mission to estimate the frequency of planets orbiting low-mass stars. The confluence of the lower temperature, smaller size, and relative abundance of M dwarfs makes them attractive and efficient targets for habitable planet detection and characterization. The archived K2 data contain nearly an order of magnitude more M dwarfs than the original Kepler data set (~30,000 compared to ~3,700), allowing us to constrain occurrence rates both more precisely and with more granularity across the M dwarf parameter range. We will also take advantage of the wide variety of stellar environments sampled by the community-driven K2 mission to estimate the frequency of planets orbiting stars with a range of metallicities and ages.
The K2 mission has observed several clusters across a wide range of ages, including the Upper Scorpius OB association (10 Myr old), the Pleiades cluster (115 Myr old), and the Hyades and Praesepe clusters (600 Myr old). One goal of this proposal is to pinpoint when and if the planet occurrence rate converges with that of the Kepler field, whose stars have a median age of 4 Gyr. This will inform the timescales of the dominant formation and migration mechanisms, and improve our ability to discriminate between competing proposed theories. The proposed work encompasses the following tasks: (1) generating and publishing a uniform, repeatable, robust catalogue of planet candidates using the publicly available K2 data comprising the first 33 months of observations; (2) measuring the completeness (false negative rate) and reliability (false positive rate) of the resulting candidate catalogue; (3) systematically and accurately characterizing the properties of the stellar sample (both exoplanet hosts and non-hosts); (4) calculating the distribution of the underlying planet population across a wide range of stellar host parameters. The proposed work is relevant to several of NASA's strategic goals, including "ascertaining the content, origin, and evolution of the solar system and the potential for life elsewhere" and "discovering how the universe works, exploring how it began and evolved, and searching for life on planets around other stars". With respect to the Astrophysics Data Analysis Program call, the proposed work builds on the legacy of Kepler occurrence rate calculations by placing them in the wider context afforded by the publicly available K2 data.
Scale Effects on Magnet Systems of Heliotron-Type Reactors
NASA Astrophysics Data System (ADS)
Imagawa, S.; Sagara, A.
2005-02-01
For power plants, heliotron-type reactors have attractive advantages, such as no current disruptions, no current drive, and wide space between the helical coils for the maintenance of in-vessel components. However, one disadvantage is that the major radius must be large enough to obtain a large Q-value or to provide sufficient space for blankets. Although a larger radius is considered to increase the construction cost, its influence has not yet been clearly understood. Scale effects on superconducting magnet systems have been estimated under the conditions of a constant energy confinement time and similar geometrical parameters. Since the necessary magnetic field becomes lower at a larger radius, the weight of the coil support increases with the major radius at a rate less than the square root. The necessary major radius will be determined mainly by the blanket space. The appropriate major radius will be around 13 m for a reactor similar to the Large Helical Device (LHD).
Long-Lag, Wide-pulse Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Norris, J. P.; Bonnell, J. T.; Kazanas, D.; Scargle, J. D.; Hakkila, J.; Giblin, T. W.
2005-01-01
The best available probe of the early phase of gamma-ray burst (GRB) jet attributes is the prompt gamma-ray emission, in which several intrinsic and extrinsic variables determine observed GRB pulse evolution, including at least: jet opening angle, profiles of Lorentz factor and matter/field density, distance of the emission region from the central source, and viewing angle. Bright, usually complex bursts have many narrow pulses that are difficult to model due to overlap. However, the relatively simple, long-spectral-lag, wide-pulse bursts may have simpler physics and are easier to model. We have analyzed the temporal and spectral behavior of wide pulses in 24 long-lag bursts from the BATSE sample, using a pulse model with two shape parameters (width and asymmetry) and the Band spectral model with three shape parameters. We find that pulses in long-lag bursts are distinguished both temporally and spectrally from those in bright bursts: the pulses in long-spectral-lag bursts are few in number, are approximately 100 times wider (tens of seconds), and have systematically lower peaks in νF(ν), harder low-energy spectra and softer high-energy spectra. These five pulse descriptors are essentially uncorrelated for our long-lag sample, suggesting that at least approximately five parameters are needed to model burst temporal and spectral behavior, roughly commensurate with the theoretical phase space. However, we do find that pulse width is strongly correlated with spectral lag; hence these two parameters may be viewed as mutual surrogates. The prevalence of long-lag bursts near the BATSE trigger threshold, their predominantly low νF(ν) spectral peaks, and relatively steep upper power-law spectral indices indicate that Swift will detect many such bursts.
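A commonly used two-timescale form for such wide pulses can make the rise/decay asymmetry concrete. This is a sketch only: the paper's exact pulse parameterization, with explicit width and asymmetry parameters, may differ; the timescales below are hypothetical.

```python
import numpy as np

def pulse(t, amp, tau1, tau2, t_start=0.0):
    """Two-timescale pulse I(t) = A * exp(-tau1/(t - ts) - (t - ts)/tau2):
    tau1 shapes the rise, tau2 the decay; their ratio sets the asymmetry."""
    dt = np.asarray(t, dtype=float) - t_start
    out = np.zeros_like(dt)
    on = dt > 0
    out[on] = amp * np.exp(-tau1 / dt[on] - dt[on] / tau2)
    return out

t = np.linspace(0.01, 100.0, 10000)
flux = pulse(t, amp=1.0, tau1=4.0, tau2=20.0)
t_peak = t[np.argmax(flux)]
print(round(t_peak, 1))  # peak lies at sqrt(tau1 * tau2) ~ 8.9 s
```

Setting the derivative of the exponent to zero gives the peak time t = sqrt(tau1 * tau2), so the two timescales directly control where the pulse turns over.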
Impact of relativistic effects on cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.
2018-01-01
Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification to number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s(z), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s(z) to the ~5-10% level. On the contrary, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.
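The number-count slope s(z) discussed above is, at a given redshift, just the logarithmic derivative of the cumulative counts at the magnitude limit, so it can be estimated from a catalogue by finite differences. A minimal sketch on a toy catalogue with Euclidean-like counts (all numbers illustrative, not survey values):

```python
import numpy as np

def number_count_slope(mags, m_cut, dm=0.1):
    """Estimate s = d log10 N(<m) / dm at the survey magnitude limit;
    this slope sets the amplitude of the magnification-bias term."""
    n_lo = np.sum(mags < m_cut - dm / 2)
    n_hi = np.sum(mags < m_cut + dm / 2)
    return (np.log10(n_hi) - np.log10(n_lo)) / dm

# toy catalogue with cumulative counts N(<m) ~ 10^(0.6*m)
rng = np.random.default_rng(1)
u = rng.random(200_000)
mags = 25.0 + np.log10(u) / 0.6   # inverse-CDF sampling up to m_max = 25
s_est = number_count_slope(mags, 24.0)
print(round(s_est, 2))  # close to the input slope of 0.6
```

In practice the uncertainty on s(z) from such finite differences must itself be propagated, which is exactly why the paper quotes a required ~5-10% accuracy.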
Robust control with structured perturbations
NASA Technical Reports Server (NTRS)
Keel, Leehyun
1988-01-01
Two important problems in the area of control systems design and analysis are discussed. The first is robust stability using the characteristic polynomial, which is treated first in characteristic polynomial coefficient space with respect to perturbations in the coefficients of the characteristic polynomial, and then for a control system containing perturbed parameters in the transfer function description of the plant. In coefficient space, a simple expression is first given for the l(sup 2) stability margin for both monic and non-monic cases. Following this, the method is extended to reveal a much larger stability region. This result has been extended to parameter space so that one can determine the stability margin, in terms of ranges of parameter variations, of the closed loop system when the nominal stabilizing controller is given. The stability margin can be enlarged by the choice of a better stabilizing controller. The second problem is the lower order stabilization problem, the motivation for which is as follows. Even though a wide range of stabilizing controller design methodologies is available in both the state space and transfer function domains, all of these methods produce unnecessarily high order controllers. In practice, stabilization is only one of many requirements to be satisfied. Therefore, if the order of a stabilizing controller is excessively high, one can normally expect an even higher order controller on the completion of a design that includes dynamic response requirements and the like. Therefore, it is reasonable to find the lowest possible order stabilizing controller first and then adjust the controller to meet additional requirements. An algorithm for designing a lower order stabilizing controller is given. The algorithm does not necessarily produce the minimum order controller; however, it is theoretically logical, and some simulation results show that it works in general.
Hubble Space Telescope: Faint object camera instrument handbook. Version 2.0
NASA Technical Reports Server (NTRS)
Paresce, Francesco (Editor)
1990-01-01
The Faint Object Camera (FOC) is a long focal ratio, photon counting device designed to take high resolution two dimensional images of areas of the sky up to 44 by 44 arcseconds squared in size, with pixel dimensions as small as 0.0007 by 0.0007 arcseconds squared in the 1150 to 6500 A wavelength range. The basic aim of the handbook is to make relevant information about the FOC available to a wide range of astronomers, many of whom may wish to apply for HST observing time. The FOC, as presently configured, is briefly described, and some basic performance parameters are summarized. Also included are detailed performance parameters and instructions on how to derive approximate FOC exposure times for the proposed targets.
Structural Equation Model Trees
Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman
2015-01-01
In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree structures that separate a data set recursively into subsets with significantly different parameter estimates in a SEM. SEM Trees provide means for finding covariates and covariate interactions that predict differences in structural parameters in observed as well as in latent space and facilitate theory-guided exploration of empirical data. We describe the methodology, discuss theoretical and practical implications, and demonstrate applications to a factor model and a linear growth curve model. PMID:22984789
Nonlinear helicons bearing multi-scale structures
NASA Astrophysics Data System (ADS)
Abdelhamid, Hamdi M.; Yoshida, Zensho
2017-02-01
The helicon waves exhibit varying characters depending on plasma parameters, geometry, and wave numbers. Here, we elucidate an intrinsic multi-scale property embodied by the combination of the dispersive effect and nonlinearity. The extended magnetohydrodynamics model (exMHD) is capable of describing a wide range of parameter space. By using the underlying Hamiltonian structure of exMHD, we construct an exact nonlinear solution, which turns out to be a combination of two distinct modes, the helicon and Trivelpiece-Gould (TG) waves. In the regime of relatively low frequency or high density, however, the combination is made of the TG mode and an ion cyclotron wave (slow wave). The energy partition between these modes is determined by the helicities carried by the wave fields.
Nondimensional Representations for Occulter Design and Performance Evaluation
NASA Technical Reports Server (NTRS)
Cady, Eric
2011-01-01
An occulter is a spacecraft with precisely shaped optical edges which flies in formation with a telescope, blocking light from a star while leaving light from nearby planets unaffected. Using linear optimization, occulters can be designed for use with telescopes over a wide range of telescope aperture sizes, science bands, and starlight suppression levels. It can be shown that this optimization depends primarily on a small number of independent nondimensional parameters, which correspond to Fresnel numbers and physical scales and enter the optimization only as constraints. We show how these can be used to span the parameter space of possible optimized occulters; this data set can then be mined to determine occulter sizes for various mission scenarios and sets of engineering constraints.
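The central nondimensional quantity in such occulter designs is the Fresnel number. A minimal sketch of how it is computed (the radius, wavelength, and separation below are hypothetical round numbers, not values from the paper):

```python
def fresnel_number(radius_m, wavelength_m, separation_m):
    """Fresnel number N = a^2 / (lambda * z): the key nondimensional
    parameter governing occulter diffraction performance."""
    return radius_m ** 2 / (wavelength_m * separation_m)

# hypothetical values: 25 m occulter radius, 500 nm light,
# 50,000 km occulter-telescope separation
n_fresnel = fresnel_number(25.0, 500e-9, 5.0e7)
print(n_fresnel)  # 625 / 25 = 25 (up to floating point)
```

Because designs with the same Fresnel number behave alike under scalar diffraction, one optimization can be rescaled across many combinations of aperture, band, and separation.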
NASA Technical Reports Server (NTRS)
Vincent, R. A.
1984-01-01
The Doppler, spaced-antenna and interferometric methods of measuring wind velocities all use the same basic information, the Doppler shifts imposed on backscattered radio waves, but they process it in different ways. The Doppler technique is most commonly used at VHF since narrow radar beams are readily available. However, the spaced antenna (SA) method has been successfully used with the SOUSY and Adelaide radars. At MF/HF the spaced antenna method is widely used, since the large antenna arrays (diameter 1 km) required to generate narrow beams are expensive to construct. Where arrays of this size are available, the Doppler method has been used successfully (e.g., Adelaide and Brisbane). In principle, the factors which influence the choice of beam pointing angle and the optimum antenna spacing will be the same whether operation is at MF or VHF. Many of the parameters which govern the efficient use of wind measuring systems have been discussed at previous MST workshops. Some of the points raised by these workshops are summarized.
Higgs-portal assisted Higgs inflation with a sizeable tensor-to-scalar ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jinsu; Ko, Pyungwon; Park, Wan-Il, E-mail: kimjinsu@kias.re.kr, E-mail: pko@kias.re.kr, E-mail: Wanil.Park@uv.es
We show that the Higgs portal interactions involving an extra dark Higgs field can generically save the original Higgs inflation of the standard model (SM) from the problem of a deep non-SM vacuum in the SM Higgs potential. Specifically, we show that such interactions disconnect the top quark pole mass from inflationary observables and open up a multi-dimensional parameter space that saves Higgs inflation, thanks to the additional parameters (the dark Higgs boson mass m_φ, the mixing angle α between the SM Higgs H and dark Higgs Φ, and the mixed quartic coupling) affecting the RG running of the Higgs quartic coupling. The effect of the Higgs portal interactions may lead to a larger tensor-to-scalar ratio, 0.08 ≲ r ≲ 0.1, by adjusting the relevant parameters over wide ranges of α and m_φ, some region of which can be probed at future colliders. Performing a numerical analysis, we find an allowed region of parameters matching the latest Planck data.
On the sensitivity analysis of porous material models
NASA Astrophysics Data System (ADS)
Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel
2012-11-01
Porous materials are used in many vibroacoustic applications. Different available models describe their behavior according to the materials' intrinsic characteristics. For instance, in the case of a porous material with a rigid frame, the Champoux-Allard model employs five parameters. In this paper, the sensitivity of this model to its parameters is investigated as a function of frequency. Sobol and FAST algorithms are used for the sensitivity analysis. A strong frequency-dependent parametric hierarchy is shown. The sensitivity investigations confirm that resistivity is the most influential parameter when the acoustic absorption and surface impedance of porous materials with rigid frames are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam in order to illustrate the impact of reducing the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters, including the mechanical effects of the frame, and conclusions are drawn through numerical simulations.
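Variance-based (Sobol') first-order indices of the kind used above can be estimated by plain Monte Carlo with a pick-freeze design. A minimal sketch on a toy linear model with known indices (not the Champoux-Allard model; the estimator follows the common Saltelli-style formulation):

```python
import numpy as np

def sobol_first_order(model, n, dim, rng):
    """Pick-freeze Monte Carlo estimate of Sobol' first-order indices
    S_i = V_i / V for a model with independent U(0,1) inputs."""
    a = rng.random((n, dim))
    b = rng.random((n, dim))
    ya, yb = model(a), model(b)
    var = np.var(np.concatenate([ya, yb]))
    s = np.empty(dim)
    for i in range(dim):
        ab = a.copy()
        ab[:, i] = b[:, i]          # freeze all inputs except input i
        s[i] = np.mean(yb * (model(ab) - ya)) / var
    return s

# toy model Y = 2*X1 + X2: analytically S1 = 4/5, S2 = 1/5
toy = lambda x: 2 * x[:, 0] + x[:, 1]
s1 = sobol_first_order(toy, 100_000, 2, np.random.default_rng(0))
print(s1.round(2))  # approximately [0.8, 0.2]
```

For a frequency-dependent model such as surface impedance, the same estimate is repeated at each frequency, which yields the parametric hierarchy described in the abstract.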
Application of differential evolution algorithm on self-potential data.
Li, Xiangtao; Yin, Minghao
2012-01-01
Differential evolution (DE) is a population-based evolutionary algorithm widely used for solving multidimensional global optimization problems over continuous spaces, and it has been successfully applied to several kinds of problems. In this paper, differential evolution is used for the quantitative interpretation of self-potential data in geophysics. Six parameters are estimated, including the electrical dipole moment, the depth of the source, the distance from the origin, the polarization angle and the regional coefficients. This study considers three kinds of data from Turkey: noise-free synthetic data, contaminated synthetic data, and a field example. The differential evolution run and the corresponding model parameters are tracked as a function of the number of generations. Then, we show the variation of the parameters in the vicinity of the low-misfit area. Moreover, we show how the frequency distribution of each parameter is related to the number of DE iterations. Experimental results show that DE can solve the quantitative interpretation of self-potential data efficiently compared with previous methods.
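A minimal sketch of the classic DE/rand/1/bin scheme the paper builds on, demonstrated on a generic convex test function rather than the self-potential misfit (all parameter values are illustrative defaults):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, f_weight=0.8,
                           cr=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin: scaled difference-vector mutation,
    binomial crossover, and greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            r1, r2, r3 = rng.choice(others, 3, replace=False)
            mutant = np.clip(pop[r1] + f_weight * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True   # ensure >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            c = f(trial)
            if c <= cost[i]:                  # greedy selection
                pop[i], cost[i] = trial, c
    best = int(np.argmin(cost))
    return pop[best], cost[best]

# sanity check on a convex test function (not the geophysical misfit)
x_best, c_best = differential_evolution(
    lambda x: float(np.sum((x - 1.0) ** 2)), [(-5.0, 5.0)] * 4)
print(round(c_best, 6))  # near zero
```

In the paper's setting, `f` would be the misfit between observed and modeled self-potential anomalies over the six model parameters.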
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
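A compact sketch of the list-based idea, assuming a 2-opt neighborhood and illustrative list length and iteration counts (the paper's adaptation rule is more refined than this simplified feedback):

```python
import math
import random

def lbsa_tsp(dist, list_len=30, outer_iters=300, seed=11):
    """List-based SA for the TSP: the acceptance temperature is always the
    current maximum of a temperature list, which adapts to the instance."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def two_opt(t):
        i, j = sorted(rng.sample(range(n), 2))
        return t[:i] + t[i:j + 1][::-1] + t[j + 1:]

    cur = tour_len(tour)
    temps = []                          # seed the list from random move sizes
    while len(temps) < list_len:
        delta = tour_len(two_opt(tour)) - cur
        if delta != 0.0:
            temps.append(abs(delta) / -math.log(0.9))
    for _ in range(outer_iters):
        t_max = max(temps)
        accepted_temps = []
        for _ in range(n):
            cand = two_opt(tour)
            delta = tour_len(cand) - cur
            if delta <= 0.0:
                tour, cur = cand, cur + delta
            else:
                r = rng.random()
                if 0.0 < r < math.exp(-delta / t_max):
                    accepted_temps.append(-delta / math.log(r))
                    tour, cur = cand, cur + delta
        if accepted_temps:              # feedback: replace the max temperature
            temps.remove(t_max)
            temps.append(sum(accepted_temps) / len(accepted_temps))
    return tour, cur
```

The only tunable left is the list length; the temperatures themselves are learned from the accepted uphill moves, which is the point of the method.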
Zimmermann, Johannes; Wright, Aidan G C
2017-01-01
The interpersonal circumplex is a well-established structural model that organizes interpersonal functioning within the two-dimensional space marked by dominance and affiliation. The structural summary method (SSM) was developed to evaluate the interpersonal nature of other constructs and measures outside the interpersonal circumplex. To date, this method has been primarily descriptive, providing no way to draw inferences when comparing SSM parameters across constructs or groups. We describe a newly developed resampling-based method for deriving confidence intervals, which allows for SSM parameter comparisons. In a series of five studies, we evaluated the accuracy of the approach across a wide range of possible sample sizes and parameter values, and demonstrated its utility for posing theoretical questions on the interpersonal nature of relevant constructs (e.g., personality disorders) using real-world data. As a result, the SSM is strengthened for its intended purpose of construct evaluation and theory building. © The Author(s) 2015.
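The structural summary parameters have a closed form, and a percentile bootstrap over subject profiles gives resampling-based intervals of the kind described above; a sketch under assumed octant angles (PA at 90°, stepping by 45°), with all names illustrative:

```python
import math
import random

OCTANT_ANGLES = [90, 135, 180, 225, 270, 315, 0, 45]  # PA..NO, in degrees

def ssm_params(profile):
    """Closed-form structural summary of an 8-octant circumplex profile:
    elevation, amplitude, angular displacement (deg), and model fit R^2."""
    k = len(profile)
    elev = sum(profile) / k
    x = (2.0 / k) * sum(r * math.cos(math.radians(a))
                        for r, a in zip(profile, OCTANT_ANGLES))
    y = (2.0 / k) * sum(r * math.sin(math.radians(a))
                        for r, a in zip(profile, OCTANT_ANGLES))
    amp = math.hypot(x, y)
    disp = math.degrees(math.atan2(y, x)) % 360.0
    fitted = [elev + amp * math.cos(math.radians(a - disp))
              for a in OCTANT_ANGLES]
    ss_tot = sum((r - elev) ** 2 for r in profile)
    ss_res = sum((r - f) ** 2 for r, f in zip(profile, fitted))
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0.0 else 1.0
    return elev, amp, disp, r2

def bootstrap_ci(profiles, param_index, n_boot=2000, alpha=0.05, seed=3):
    """Percentile bootstrap CI for one SSM parameter of the mean profile."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(profiles) for _ in profiles]
        mean_profile = [sum(p[i] for p in sample) / len(sample)
                        for i in range(len(OCTANT_ANGLES))]
        stats.append(ssm_params(mean_profile)[param_index])
    stats.sort()
    return (stats[int(alpha / 2 * n_boot)],
            stats[int((1 - alpha / 2) * n_boot) - 1])
```

Non-overlapping intervals for, say, amplitude across two constructs then license the kind of inferential comparison the abstract describes.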
Efficient Schmidt number scaling in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Krafnick, Ryan C.; García, Angel E.
2015-12-01
Dissipative particle dynamics is a widely used mesoscale technique for the simulation of hydrodynamics (as well as immersed particles) utilizing coarse-grained molecular dynamics. While the method is capable of describing any fluid, the typical choice of the friction coefficient γ and dissipative force cutoff rc yields an unacceptably low Schmidt number Sc for the simulation of liquid water at standard temperature and pressure. There are a variety of ways to raise Sc, such as increasing γ and rc, but the relative cost of modifying each parameter (and the concomitant impact on numerical accuracy) has heretofore remained undetermined. We perform a detailed search over the parameter space, identifying the optimal strategy for the efficient and accuracy-preserving scaling of Sc, using both numerical simulations and theoretical predictions. The composite results recommend a parameter choice that leads to a speed improvement of a factor of three versus previously utilized strategies.
Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback
NASA Astrophysics Data System (ADS)
Bruni, Renato; Celani, Fabio
2016-10-01
The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law with four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose here an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: 1) the convergence time cannot be expressed in analytical form as a function of the parameters and initial conditions; 2) the design parameters may range over very wide intervals; 3) the convergence time also depends on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. These algorithms do not require an analytical expression of the objective function: they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize convergence time under the worst initial conditions. Results are very promising.
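The min-max idea can be sketched with a derivative-free random search over a placeholder convergence-time function; in the paper the objective would come from closed-loop attitude simulations, so every name and constant below is illustrative:

```python
import random

def worst_case(convergence_time, params, initial_conditions):
    """Worst convergence time over a fixed sample of initial conditions."""
    return max(convergence_time(params, ic) for ic in initial_conditions)

def min_max_search(convergence_time, bounds, initial_conditions,
                   n_iter=500, seed=0):
    """Derivative-free min-max: minimize the worst-case convergence time,
    shrinking the probing radius so promising regions are explored densely."""
    rng = random.Random(seed)
    best = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_val = worst_case(convergence_time, best, initial_conditions)
    steps = [0.25 * (hi - lo) for lo, hi in bounds]
    for k in range(n_iter):
        cand = [min(max(b + rng.gauss(0.0, s), lo), hi)
                for b, s, (lo, hi) in zip(best, steps, bounds)]
        val = worst_case(convergence_time, cand, initial_conditions)
        if val < best_val:
            best, best_val = cand, val
        elif (k + 1) % 100 == 0:
            steps = [0.5 * s for s in steps]   # probe more densely near best
    return best, best_val
```

Each evaluation of `worst_case` costs one simulation per sampled initial condition, which is why keeping the number of probed points small matters in practice.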
Data Driven Ionospheric Modeling in Relation to Space Weather: Percent Cloud Coverage
NASA Astrophysics Data System (ADS)
Tulunay, Y.; Senalp, E. T.; Tulunay, E.
2009-04-01
Since 1990, a small group at METU has been developing data-driven models in order to forecast some critical system parameters related to near-Earth space processes. The background on the subject supports new achievements, which contributed to the COST 724 activities and will contribute to the new ES0803 activities. This work presents one of the outstanding contributions, namely forecasting of meteorological parameters by considering the probable influence of cosmic rays (CR) and sunspot numbers (SSN). The data-driven method is generic and applicable to many near-Earth space processes, including ionospheric/plasmaspheric interactions. It is believed that the EURIPOS initiative would be useful in supplying wide-ranging, reliable data to the models developed. Quantification of the physical mechanisms that causally link Space Weather to the Earth's weather has been a challenging task. On this basis, the percent cloud coverage (%CC) and cloud top temperatures (CTT) were forecast one month ahead of time between geographic coordinates of 22.5˚N-57.5˚N and 7.5˚W-47.5˚E at 96 grid locations, covering the years 1983 to 2000, using the Middle East Technical University Fuzzy Neural Network Model (METU-FNN-M) [Tulunay, 2008]. The near-Earth space variability at several different time scales arises from a number of separate factors, and the physics of the variations cannot be modeled due to the lack of current information about the parameters of several natural processes. CR are shielded by the magnetosphere to a certain extent, but they can modulate the low-level cloud cover. METU-FNN-M was developed, trained and applied for forecasting the %CC and CTT, by considering the history of those meteorological variables; Cloud Optical Depth (COD); the Ionization (I) value that is formulated and computed by using CR data and CTT; SSN; temporal variables; and defuzzified cloudiness.
The temporal and spatial variables and the cut-off rigidity are used to compute the defuzzified cloudiness. The forecast %CC and CTT values at uniformly spaced grids over the region of interest are used for mapping by Bezier surfaces. The major advantage of the fuzzy model is that it uses its inputs and the expert knowledge in coordination. Long-term cloud analysis was performed on a region having differences in terms of atmospheric activity, in order to show the generalization capability. Global and local parameters of the process were considered. Both CR flux and SSN reflect the influence of Space Weather on the general planetary situation, while other parameters in the inputs of the model reflect the local situation. Error and correlation analyses on the forecast and observed parameters were performed. The correlations between the forecast and observed parameters are very promising. The model results support the dependence of the cloud formation process on CR fluxes. The one-month-ahead forecast values of the model can also be used as inputs to other models, which forecast some other local or global parameters, in order to further test the hypothesis on possible link(s) between Space Weather and the Earth's weather. The model-based, theoretical and numerical works mentioned are promising and have potential for future research and developments. References Tulunay Y., E.T. Şenalp, Ş. Öz, L.I. Dorman, E. Tulunay, S.S. Menteş and M.E. Akcan (2008), A Fuzzy Neural Network Model to Forecast the Percent Cloud Coverage and Cloud Top Temperature Maps, Ann. Geophys., 26(12), 3945-3954, 2008.
Zoom-in Simulations of Protoplanetary Disks Starting from GMC Scales
NASA Astrophysics Data System (ADS)
Kuffmeier, Michael; Haugbølle, Troels; Nordlund, Åke
2017-09-01
We investigate the formation of protoplanetary disks around nine solar-mass stars formed in the context of a (40 pc)3 Giant Molecular Cloud model, using ramses adaptive mesh refinement simulations extending over a scale range of about 4 million, from an outer scale of 40 pc down to cell sizes of 2 au. Our most important result is that the accretion process is heterogeneous in multiple ways: in time, in space, and among protostars of otherwise similar mass. Accretion is heterogeneous in time, in the sense that accretion rates vary during the evolution, with generally decreasing profiles, whose slopes vary over a wide range, and where accretion can increase again if a protostar enters a region with increased density and low speed. Accretion is heterogeneous in space, because of the mass distribution, with mass approaching the accreting star-disk system in filaments and sheets. Finally, accretion is heterogeneous among stars, since the detailed conditions and dynamics in the neighborhood of each star can vary widely. We also investigate the sensitivity of disk formation to physical conditions and test their robustness by varying numerical parameters. We find that disk formation is robust even when choosing the least favorable sink particle parameters, and that turbulence cascading from larger scales is a decisive factor in disk formation. We also investigate the transport of angular momentum, finding that the net inward mechanical transport is compensated for mainly by an outward-directed magnetic transport, with a contribution from gravitational torques usually subordinate to the magnetic transport.
Cronin, Adam L; Loeuille, Nicolas; Monnin, Thibaud
2016-02-05
Offspring investment strategies vary markedly between and within taxa, and much of this variation is thought to stem from the trade-off between offspring size and number. While producing larger offspring can increase their competitive ability, this often comes at a cost to their colonization ability. This competition-colonization trade-off (CCTO) is thought to be an important mechanism supporting coexistence of alternative strategies in a wide range of taxa. However, the relative importance of an alternative and possibly synergistic mechanism, spatial structuring of the environment, remains the topic of some debate. In this study, we explore the influence of these mechanisms on metacommunity structure using an agent-based model built around variable life-history traits. Our model combines explicit resource competition and spatial dynamics, allowing us to tease apart the influence of, and explore the interaction between, the CCTO and the spatial structure of the environment. We test our model using two reproductive strategies which represent extremes of the CCTO and are common in ants. Our simulations show that colonizers outperform competitors in environments subject to higher temporal and spatial heterogeneity and are favoured when agents mature late and invest heavily in reproduction, whereas competitors dominate in low-disturbance, high-resource environments and when maintenance costs are low. Varying life-history parameters has a marked influence on coexistence conditions and yields evolutionarily stable strategies for both modes of reproduction. Nonetheless, we show that these strategies can coexist over a wide range of life-history and environmental parameter values, and that coexistence can in most cases be explained by a CCTO. By explicitly considering space, we are also able to demonstrate the importance of the interaction between dispersal and landscape structure.
The CCTO permits species employing different reproductive strategies to coexist over a wide range of life-history and environmental parameters, and is likely to be an important factor in structuring ant communities. Our consideration of space highlights the importance of dispersal, which can limit the success of low-dispersers through kin competition, and enhance coexistence conditions for different strategies in spatially structured environments.
Elastic wake instabilities in a creeping flow between two obstacles
NASA Astrophysics Data System (ADS)
Varshney, Atul; Steinberg, Victor
2017-05-01
It is shown that a channel flow of a dilute polymer solution between two widely spaced cylinders hindering the flow is an important paradigm of an unbounded flow in the case in which the channel wall is located sufficiently far from the cylinders. The quantitative characterization of instabilities in a creeping viscoelastic channel flow between two widely spaced cylinders reveals two elastically driven transitions, which are associated with the breaking of time-reversal and mirror symmetries: Hopf and forward bifurcations described by two order parameters, v_rms and ω̄, respectively. We suggest that a decrease of the normalized distance between the obstacles leads to a collapse of the two bifurcations into a codimension-2 point, a situation general for many nonequilibrium systems. However, the striking and unexpected result is the discovery of a mechanism of the vorticity growth via an increase of a vortex length at the preserved streamline curvature in a viscoelastic flow, which is in sharp contrast to the well-known suppression of the vorticity in a Newtonian flow by polymer additives.
Absorption spectroscopy at the limb of small transiting exoplanets
NASA Astrophysics Data System (ADS)
Ehrenreich, D.; Lecavelier Des Etangs, A.
2005-12-01
Planetary transits are a tremendous tool to probe into exoplanet atmospheres using the light from their parent stars (from 0.2 μm to ˜1 μm). The detection of atmospheric components in an extra-solar giant planet was performed using the Hubble Space Telescope (HST) with a sensitivity reaching ˜10-4 in relative absorption depth over ˜1 Å-wide features (Charbonneau et al., 2002). The next step is the detection and the characterization of smaller, possibly Earth-like worlds, which will require a sensitivity of ˜10-6. Fortunately, ˜0.1 μm-wide absorption bands of particular interest for small exoplanets do exist in this spectral domain. We developed a model to quantify the detectability of a variety of Earth-size planets harboring different kind of atmospheres. Key parameters are the density of the planet and the thickness of the atmosphere. We also evaluate in consequence the number of potential targets for a future space mission, and also find that K stars are best candidates. See Ehrenreich et al. (2005) for a complete description.
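The sensitivity numbers quoted above can be reproduced with the thin-annulus geometry: the atmosphere adds an annulus of extra absorbing area on top of the opaque planetary disk. A sketch with round values for an Earth analog transiting a Sun-like star (the 50 km effective atmosphere thickness is an assumption for illustration):

```python
R_SUN = 6.957e8     # m
R_EARTH = 6.371e6   # m

def transit_depth(r_planet, r_star):
    """Relative flux drop from the opaque planetary disk."""
    return (r_planet / r_star) ** 2

def atmosphere_signal(r_planet, h_atm, r_star):
    """Extra relative absorption from an atmospheric annulus of thickness h_atm."""
    return ((r_planet + h_atm) ** 2 - r_planet ** 2) / r_star ** 2

disk = transit_depth(R_EARTH, R_SUN)                 # opaque-disk depth
annulus = atmosphere_signal(R_EARTH, 5.0e4, R_SUN)   # atmospheric signal
```

The annulus term lands at the ~10^-6 level quoted for Earth-size targets; a lower planet density or a more extended atmosphere raises it, which is why those two parameters are the key ones in the model.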
Borgia, G C; Brown, R J; Fantazzini, P
2000-12-01
The basic method of UPEN (uniform penalty inversion of multiexponential decay data) is given in an earlier publication (Borgia et al., J. Magn. Reson. 132, 65-77 (1998)), which also discusses the effects of noise, constraints, and smoothing on the resolution or apparent resolution of features of a computed distribution of relaxation times. UPEN applies negative feedback to a regularization penalty, allowing stronger smoothing for a broad feature than for a sharp line. This avoids unnecessarily broadening the sharp line and/or breaking the wide peak or tail into several peaks that the relaxation data do not demand to be separate. The experimental and artificial data presented earlier were T1 data, and all had fixed data spacings, uniform in log-time. However, for T2 data, usually spaced uniformly in linear time, or for data spaced in any manner, we have found that the data spacing does not enter explicitly into the computation. The present work shows the extension of UPEN to T2 data, including the averaging of data in windows and the use of the corresponding weighting factors in the computation. Measures are implemented to control portions of computed distributions extending beyond the data range. The input smoothing parameters in UPEN are normally fixed, rather than data dependent. A major problem arises, especially at high signal-to-noise ratios, when UPEN is applied to data sets with systematic errors due to instrumental nonidealities or adjustment problems. For instance, a relaxation curve for a wide line can be narrowed by an artificial downward bending of the relaxation curve. Diagnostic parameters are generated to help identify data problems, and the diagnostics are applied in several examples, with particular attention to the meaningful resolution of two closely spaced peaks in a distribution of relaxation times.
Where feasible, processing with UPEN in nearly real time should help identify data problems while further instrument adjustments can still be made. The need for the nonnegative constraint is greatly reduced in UPEN, and preliminary processing without this constraint helps identify data sets for which application of the nonnegative constraint is too expensive in terms of error of fit for the data set to represent sums of decaying positive exponentials plus random noise. Copyright 2000 Academic Press.
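The flavor of a penalized multiexponential inversion can be sketched with projected gradient descent and a uniform smoothness penalty; UPEN's defining feature, the negative-feedback adaptation of the penalty weight, is deliberately omitted here, and the grids and weights are illustrative:

```python
import math

def fit_relaxation_distribution(times, data, t_grid, lam=0.2, steps=4000):
    """Fit data ~ K f with f >= 0 and a first-difference smoothness penalty,
    by projected gradient descent, where K[i][j] = exp(-t_i / T_j).
    A uniform penalty weight lam is used; UPEN itself adapts the weight so
    broad features are smoothed more strongly than sharp lines."""
    n, m = len(times), len(t_grid)
    K = [[math.exp(-t / T) for T in t_grid] for t in times]
    lr = 1.0 / (n * m)                    # conservative step size
    f = [0.0] * m
    for _ in range(steps):
        resid = [sum(K[i][j] * f[j] for j in range(m)) - data[i]
                 for i in range(n)]
        grads = [sum(K[i][j] * resid[i] for i in range(n)) for j in range(m)]
        new_f = []
        for j in range(m):
            smooth = 2.0 * f[j]
            smooth -= f[j - 1] if j > 0 else f[j]
            smooth -= f[j + 1] if j < m - 1 else f[j]
            new_f.append(max(0.0, f[j] - lr * (grads[j] + lam * smooth)))
        f = new_f
    return f
```

Note that the data spacing enters only through the kernel values, consistent with the observation above that spacing does not appear explicitly in the computation.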
NASA Astrophysics Data System (ADS)
Quan, Guo-zheng; Zhan, Zong-yang; Wang, Tong; Xia, Yu-feng
2017-01-01
The response of true stress to strain rate, temperature and strain is a complex three-dimensional (3D) issue, and the accurate description of such constitutive relationships significantly contributes to the optimum process design. To obtain the true stress-strain data of ultra-high-strength steel, BR1500HS, a series of isothermal hot tensile tests were conducted in a wide temperature range of 973-1,123 K and a strain rate range of 0.01-10 s-1 on a Gleeble 3800 testing machine. Then the constitutive relationships were modeled by an optimally constructed and well-trained backpropagation artificial neural network (BP-ANN). The evaluation of BP-ANN model revealed that it has admirable performance in characterizing and predicting the flow behaviors of BR1500HS. A comparison on improved Arrhenius-type constitutive equation and BP-ANN model shows that the latter has higher accuracy. Consequently, the developed BP-ANN model was used to predict abundant stress-strain data beyond the limited experimental conditions. Then a 3D continuous interaction space for temperature, strain rate, strain and stress was constructed based on these predicted data. The developed 3D continuous interaction space for hot working parameters contributes to fully revealing the intrinsic relationships of BR1500HS steel.
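The BP-ANN component can be sketched as a one-hidden-layer network trained by stochastic backpropagation on normalized (temperature, strain rate, strain) → stress samples; the architecture, sizes and smooth toy target below are illustrative stand-ins, not the paper's trained model:

```python
import math
import random

class TinyBPANN:
    """One-hidden-layer backpropagation network: tanh hidden, linear output."""
    def __init__(self, n_in, n_hidden, seed=5):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b2 = 0.0

    def predict(self, x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        return sum(w * hi for w, hi in zip(self.w2, h)) + self.b2, h

    def train(self, samples, epochs=400, lr=0.05):
        for _ in range(epochs):
            for x, y in samples:
                out, h = self.predict(x)
                err = out - y                    # d(loss)/d(out) for loss = err^2/2
                for j, hj in enumerate(h):
                    dh = err * self.w2[j] * (1.0 - hj * hj)
                    self.w2[j] -= lr * err * hj
                    for i, xi in enumerate(x):
                        self.w1[j][i] -= lr * dh * xi
                    self.b1[j] -= lr * dh
                self.b2 -= lr * err

def mse(net, samples):
    return sum((net.predict(x)[0] - y) ** 2 for x, y in samples) / len(samples)
```

Once trained on experimental points, such a network is queried on a dense (temperature, strain rate, strain) grid, which is how the 3D continuous interaction space above is populated beyond the tested conditions.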
Quantitative dual-probe microdialysis: mathematical model and analysis.
Chen, Kevin C; Höistad, Malin; Kehr, Jan; Fuxe, Kjell; Nicholson, Charles
2002-04-01
Steady-state microdialysis is a widely used technique to monitor the concentration changes and distributions of substances in tissues. To obtain more information about brain tissue properties from microdialysis, a dual-probe approach was applied to infuse and sample the radiotracer, [3H]mannitol, simultaneously both in agar gel and in the rat striatum. Because the molecules released by one probe and collected by the other must diffuse through the interstitial space, the concentration profile exhibits dynamic behavior that permits the assessment of the diffusion characteristics in the brain extracellular space and the clearance characteristics. In this paper a mathematical model for dual-probe microdialysis was developed to study brain interstitial diffusion and clearance processes. Theoretical expressions for the spatial distribution of the infused tracer in the brain extracellular space and the temporal concentration at the probe outlet were derived. A fitting program was developed using the simplex algorithm, which finds local minima of the standard deviations between experiments and theory by adjusting the relevant parameters. The theoretical curves accurately fitted the experimental data and generated realistic diffusion parameters, implying that the mathematical model is capable of predicting the interstitial diffusion behavior of [3H]mannitol and that it will be a valuable quantitative tool in dual-probe microdialysis.
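For intuition about why the dual-probe profile carries information on both diffusion and clearance, the steady-state concentration around a point source in the extracellular space with first-order clearance has a compact closed form; the parameter values below are illustrative, and the real dual-probe geometry requires the full model:

```python
import math

def steady_state_conc(r, Q, D, alpha, k_clear):
    """Steady-state extracellular concentration at distance r (cm) from a
    point source releasing Q per unit time, with effective diffusivity D
    (cm^2/s), extracellular volume fraction alpha, and first-order
    clearance rate k_clear (1/s)."""
    return (Q / (4.0 * math.pi * D * alpha * r)
            * math.exp(-r * math.sqrt(k_clear / D)))
```

At a probe spacing of order 1 mm, clearance steepens the radial profile relative to pure diffusion, which is what makes the infuse-and-sample arrangement sensitive to the clearance parameters as well as to D.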
NASA Astrophysics Data System (ADS)
Carrano, Charles S.; Groves, Keith M.; Rino, Charles L.; Doherty, Patricia H.
2016-08-01
The zonal drift of ionospheric irregularities at low latitudes is most commonly measured by cross-correlating observations of a scintillating satellite signal made with a pair of closely spaced antennas. The Air Force Research Laboratory-Scintillation Network Decision Aid (AFRL-SCINDA) network operates a small number of very high frequency (VHF) spaced-receiver systems at low latitudes for this purpose. A far greater number of Global Navigation Satellite System (GNSS) scintillation monitors are operated by the AFRL-SCINDA network (25-30) and the Low-Latitude Ionospheric Sensor Network (35-50), but the receivers are too widely separated from each other for cross-correlation techniques to be effective. In this paper, we present an alternative approach that leverages the weak scatter scintillation theory to infer the zonal irregularity drift from single-station GNSS measurements of S4, σφ, and the propagation geometry. Unlike the spaced-receiver technique, this approach requires assumptions regarding the height of the scattering layer (which introduces a bias in the drift estimates) and the spectral index of the irregularities (which affects the spread of the drift estimates about the mean). Nevertheless, theory and experiment suggest that the ratio of σφ to S4 is less sensitive to these parameters than it is to the zonal drift. We validate the technique using VHF spaced-receiver measurements of zonal irregularity drift obtained from the AFRL-SCINDA network. While the spaced-receiver technique remains the preferred way to monitor the drift when closely spaced antenna pairs are available, our technique provides a new opportunity to monitor zonal irregularity drift using regional or global networks of widely separated GNSS scintillation monitors.
Wang, Xiaohua; Li, Xi; Rong, Mingzhe; Xie, Dingli; Ding, Dan; Wang, Zhixiang
2017-01-01
The ultra-high frequency (UHF) method is widely used in insulation condition assessment. However, UHF signal processing algorithms are complicated and the size of the result is large, which hinders extracting features and recognizing partial discharge (PD) patterns. This article investigated the chromatic methodology that is novel in PD detection. The principle of chromatic methodologies in color science are introduced. The chromatic processing represents UHF signals sparsely. The UHF signals obtained from PD experiments were processed using chromatic methodology and characterized by three parameters in chromatic space (H, L, and S representing dominant wavelength, signal strength, and saturation, respectively). The features of the UHF signals were studied hierarchically. The results showed that the chromatic parameters were consistent with conventional frequency domain parameters. The global chromatic parameters can be used to distinguish UHF signals acquired by different sensors, and they reveal the propagation properties of the UHF signal in the L-shaped gas-insulated switchgear (GIS). Finally, typical PD defect patterns had been recognized by using novel chromatic parameters in an actual GIS tank and good performance of recognition was achieved. PMID:28106806
Analysis of delay reducing and fuel saving sequencing and spacing algorithms for arrival traffic
NASA Technical Reports Server (NTRS)
Neuman, Frank; Erzberger, Heinz
1991-01-01
The air traffic control subsystem that performs sequencing and spacing is discussed. The function of the sequencing and spacing algorithms is to automatically plan the most efficient landing order and to assign optimally spaced landing times to all arrivals. Several algorithms are described and their statistical performance is examined. Sequencing brings order to an arrival sequence for aircraft. First-come-first-served sequencing (FCFS) establishes a fair order, based on estimated times of arrival, and determines proper separations. Because of the randomness of the arriving traffic, gaps will remain in the sequence of aircraft. Delays are reduced by time-advancing the leading aircraft of each group while still preserving the FCFS order. Tightly spaced groups of aircraft remain with a mix of heavy and large aircraft. Spacing requirements differ for different types of aircraft trailing each other. Traffic is reordered slightly to take advantage of this spacing criterion, thus shortening the groups and reducing average delays. For heavy traffic, delays for different traffic samples vary widely, even when the same set of statistical parameters is used to produce each sample. This report supersedes NASA TM-102795 on the same subject. It includes a new method of time-advance as well as an efficient method of sequencing and spacing for two dependent runways.
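The FCFS core with class-dependent separations and a simple uniform time-advance can be sketched as follows; the separation values are illustrative placeholders, not operational minima, and the time-advance here is a simplified per-aircraft version of the group-leader scheme described above:

```python
SEPARATION_S = {   # required trailing separation in seconds (illustrative)
    ("heavy", "heavy"): 96.0, ("heavy", "large"): 157.0,
    ("large", "heavy"): 60.0, ("large", "large"): 69.0,
}

def fcfs_schedule(arrivals, max_advance=0.0):
    """arrivals: list of (eta_s, weight_class).  Returns (eta, class, slot)
    in first-come-first-served order; each aircraft may be advanced by up
    to max_advance seconds before its ETA to close gaps in the sequence."""
    schedule = []
    prev_slot, prev_cls = None, None
    for eta, cls in sorted(arrivals):
        earliest = 0.0 if prev_slot is None else \
            prev_slot + SEPARATION_S[(prev_cls, cls)]
        slot = max(eta - max_advance, earliest)
        schedule.append((eta, cls, slot))
        prev_slot, prev_cls = slot, cls
    return schedule

def total_delay(schedule):
    return sum(slot - eta for eta, _, slot in schedule)
```

The asymmetry of the separation matrix (a large aircraft trailing a heavy needs much more spacing than the reverse) is exactly what the slight reordering step exploits to shorten tightly spaced groups.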
Vortex-Induced Vibrations of a Flexibly-Mounted Cyber-Physical Rectangular Plate
NASA Astrophysics Data System (ADS)
Onoue, Kyohei; Strom, Benjamin; Song, Arnold; Breuer, Kenneth
2013-11-01
We have developed a cyber-physical system to explore the vortex-induced vibration (VIV) behavior of a flat plate mounted on a virtual spring damper support. The plate is allowed to oscillate about its mid-chord and the measured angular position, velocity, and torque are used as inputs to a feedback control system that provides a restoring torque and can simulate a wide range of structural dynamic behavior. A series of experiments were carried out using different sized plates, and over a range of freestream velocities, equilibrium angles of attack, and simulated stiffness and damping. We observe a synchronization phenomenon over a wide range of parameter space, wherein the plate oscillates at moderate to large amplitude with a frequency dictated by the natural structural frequency of the system. Additionally, the existence of bistable states is reflected in the hysteretic response of the system. The cyber-physical damping extracts energy from the flow and the efficiency of this harvesting mechanism is characterized over a range of dimensionless stiffness and damping parameters. This research is funded by the Air Force Office of Scientific Research (AFOSR).
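The virtual structure can be sketched as a feedback torque τ = −kθ − cθ̇ applied in a time-stepped loop, with harvested energy accumulated as the work done against the virtual damper; the flow torque is a placeholder and all constants are illustrative:

```python
def simulate_virtual_mount(inertia, k, c, theta0, dt=1e-3, t_end=20.0,
                           flow_torque=lambda t, th, om: 0.0):
    """Semi-implicit Euler integration of
    I * theta'' = tau_flow - k*theta - c*theta'.
    Returns the final angle and the energy dissipated in the virtual damper."""
    theta, omega, harvested, t = theta0, 0.0, 0.0, 0.0
    while t < t_end:
        tau = flow_torque(t, theta, omega) - k * theta - c * omega
        omega += dt * tau / inertia
        theta += dt * omega
        harvested += c * omega * omega * dt   # power into the damper
        t += dt
    return theta, harvested
```

In the experiment the same loop runs on measured position, velocity, and torque, so sweeping k and c in software stands in for physically rebuilding the spring-damper support at each point of the stiffness-damping parameter space.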
Pázmándi, Tamás; Deme, Sándor; Láng, Edit
2006-01-01
One of the many risks of long-duration space flights is the excessive exposure to cosmic radiation, which has great importance particularly during solar flares and higher sun activity. Monitoring of the cosmic radiation on board space vehicles is carried out on the basis of wide international co-operation. Since space radiation consists mainly of charged heavy particles (protons, alpha and heavier particles), the equivalent dose differs significantly from the absorbed dose. A radiation weighting factor (w_R) is used to convert absorbed dose (Gy) to equivalent dose (Sv); w_R is a function of the linear energy transfer of the radiation. Recently used equipment is suitable for measuring certain radiation field parameters changing in space and over time, so a combination of different measurements and calculations is required to characterise the radiation field in terms of dose equivalent. The objectives of this project are to develop and manufacture a three-axis silicon detector telescope, called Tritel, and to develop software for data evaluation of the measured energy deposition spectra. The device will be able to determine absorbed dose and dose equivalent of the space radiation.
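The LET-dependent conversion from absorbed dose to dose equivalent can be sketched with the ICRP 60 quality factor Q(L) standing in for the weighting described above; the spectrum bins in the usage example are illustrative:

```python
import math

def quality_factor(let):
    """ICRP 60 quality factor Q(L); let in keV per micrometer of water."""
    if let < 10.0:
        return 1.0
    if let <= 100.0:
        return 0.32 * let - 2.2
    return 300.0 / math.sqrt(let)

def dose_equivalent(spectrum):
    """spectrum: list of (let_keV_per_um, absorbed_dose_Gy) bins -> Sv."""
    return sum(dose * quality_factor(let) for let, dose in spectrum)
```

A telescope such as the one described measures the energy-deposition (hence LET) spectrum, so the sum above is exactly the post-processing step that turns its output into dose equivalent.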
Predicting Instability Timescales in Closely-Packed Planetary Systems
NASA Astrophysics Data System (ADS)
Tamayo, Daniel; Hadden, Samuel; Hussain, Naireen; Silburt, Ari; Gilbertson, Christian; Rein, Hanno; Menou, Kristen
2018-04-01
Many of the multi-planet systems discovered around other stars are maximally packed. This implies that simulations with masses or orbital parameters too far from the actual values will destabilize on short timescales; thus, long-term dynamics allows one to constrain the orbital architectures of many closely packed multi-planet systems. A central challenge in such efforts is the large computational cost of N-body simulations, which preclude a full survey of the high-dimensional parameter space of orbital architectures allowed by observations. I will present our recent successes in training machine learning models capable of reliably predicting orbital stability a million times faster than N-body simulations. By engineering dynamically relevant features that we feed to a gradient-boosted decision tree algorithm (XGBoost), we are able to achieve a precision and recall of 90% on a holdout test set of N-body simulations. This opens a wide discovery space for characterizing new exoplanet discoveries and for elucidating how orbital architectures evolve through time as the next generation of spaceborne exoplanet surveys prepare for launch this year.
Higgs portal dark matter in non-standard cosmological histories
NASA Astrophysics Data System (ADS)
Hardy, Edward
2018-06-01
A scalar particle with a relic density set by annihilations through a Higgs portal operator is a simple and minimal possibility for dark matter. However, assuming a thermal cosmological history this model is ruled out over most of parameter space by collider and direct detection constraints. We show that in theories with a non-thermal cosmological history Higgs portal dark matter is viable for a wide range of dark matter masses and values of the portal coupling, evading existing limits. In particular, we focus on the string theory motivated scenario of a period of matter domination due to a light modulus with a decay rate that is suppressed by the Planck scale. Dark matter with a mass ≲ GeV is possible without additional hidden sector states, and this can have astrophysically relevant self-interactions. We also study the signatures of such models at future direct, indirect, and collider experiments. Searches for invisible Higgs decays at the high luminosity LHC or an e + e - collider could cover a significant proportion of the parameter space for low mass dark matter, and future direct detection experiments will play a complementary role.
Magnetosphere simulations with a high-performance 3D AMR MHD Code
NASA Astrophysics Data System (ADS)
Gombosi, Tamas; Dezeeuw, Darren; Groth, Clinton; Powell, Kenneth; Song, Paul
1998-11-01
BATS-R-US is a high-performance 3D AMR MHD code for space physics applications running on massively parallel supercomputers. In BATS-R-US the electromagnetic and fluid equations are solved with a high-resolution upwind numerical scheme in a tightly coupled manner. The code is very robust and is capable of spanning a wide range of plasma parameters (such as β, acoustic and Alfvénic Mach numbers). Our code is highly scalable: it achieved a sustained performance of 233 GFLOPS on a Cray T3E-1200 supercomputer with 1024 PEs. This talk reports results from the BATS-R-US code for the GGCM (Geospace General Circulation Model) Phase 1 Standard Model Suite. This model suite contains 10 different steady-state configurations: 5 IMF clock angles (north, south, and three equally spaced angles in between) with 2 IMF field strengths for each angle (5 nT and 10 nT). The other parameters are: solar wind speed = 400 km/s; solar wind number density = 5 protons/cc; Hall conductance = 0; Pedersen conductance = 5 S; parallel conductivity = ∞.
AGN neutrino flux estimates for a realistic hybrid model
NASA Astrophysics Data System (ADS)
Richter, S.; Spanier, F.
2018-07-01
Recent reports of possible correlations between high energy neutrinos observed by IceCube and Active Galactic Nuclei (AGN) activity sparked a burst of publications that attempt to predict the neutrino flux of these sources. However, often rather crude estimates are used to derive the neutrino rate from the observed photon spectra. In this work neutrino fluxes were computed over a wide parameter space. The starting point of the model was a representation of the full spectral energy density (SED) of 3C 279. The time-dependent hybrid model used for this study takes into account the full pγ reaction chain as well as proton synchrotron emission, electron-positron pair cascades and the full SSC scheme. We compare our results to estimates frequently used in the literature. This allows us to identify regions in the parameter space for which such estimates are still valid and those in which they can produce significant errors. Furthermore, if estimates for the Doppler factor, magnetic field, and proton and electron densities of a source exist, the expected IceCube detection rate is readily available.
Models Archive and ModelWeb at NSSDC
NASA Astrophysics Data System (ADS)
Bilitza, D.; Papitashvili, N.; King, J. H.
2002-05-01
In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their uses and users. In response to a growing need by the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF) and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over recent years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 accesses to the models archive.
A New Method for Wide-field Near-IR Imaging with the Hubble Space Telescope
NASA Astrophysics Data System (ADS)
Momcheva, Ivelina G.; van Dokkum, Pieter G.; van der Wel, Arjen; Brammer, Gabriel B.; MacKenty, John; Nelson, Erica J.; Leja, Joel; Muzzin, Adam; Franx, Marijn
2017-01-01
We present a new technique for wide and shallow observations using the near-infrared channel of Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST). Wide-field near-IR surveys with HST are generally inefficient, as guide star acquisitions make it impractical to observe more than one pointing per orbit. This limitation can be circumvented by guiding with gyros alone, which is possible as long as the telescope has three functional gyros. The method presented here allows us to observe mosaics of eight independent WFC3-IR pointings in a single orbit by utilizing the fact that HST drifts by only a very small amount in the 25 s between non-destructive reads of unguided exposures. By shifting the reads and treating them as independent exposures the full resolution of WFC3 can be restored. We use this “drift and shift” (DASH) method in the Cycle 23 COSMOS-DASH program, which will obtain 456 WFC3 H 160 pointings in 57 orbits, covering an area of 0.6 degree in the COSMOS field down to H 160 = 25. When completed, the program will more than triple the area of extra-galactic survey fields covered by near-IR imaging at HST resolution. We demonstrate the viability of the method with the first four orbits (32 pointings) of this program. We show that the resolution of the WFC3 camera is preserved, and that structural parameters of galaxies are consistent with those measured in guided observations.
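The core of the drift-and-shift idea can be sketched in a toy 1-D form: successive non-destructive reads are differenced into independent short exposures, each exposure is shifted back by its measured drift, and the shifted exposures are co-added. Integer-pixel shifts and the counts below are purely illustrative, not the survey's actual pipeline:

```python
def dash_exposures(reads, drifts):
    """Difference successive up-the-ramp reads into independent exposures,
    then undo each exposure's measured drift (integer pixels, toy 1-D)."""
    exposures = []
    for prev, curr, d in zip(reads, reads[1:], drifts):
        diff = [c - p for p, c in zip(prev, curr)]
        d %= len(diff)
        exposures.append(diff[d:] + diff[:d])  # roll the frame back by d pixels
    return exposures

def coadd(exposures):
    """Stack the drift-corrected exposures pixel by pixel."""
    return [sum(col) for col in zip(*exposures)]

# Point source at pixel 1; the telescope drifts 1 pixel between the two reads.
reads = [[0, 0, 0, 0],   # reset frame
         [0, 5, 0, 0],   # after the first read interval
         [0, 5, 5, 0]]   # after the second interval (source now at pixel 2)
stack = coadd(dash_exposures(reads, drifts=[0, 1]))
```

After the shift, both exposures place the source at pixel 1, so co-adding recovers the full signal at the original resolution instead of a smeared trail.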
NASA Astrophysics Data System (ADS)
Tuller, Markus; Or, Dani
2001-05-01
Many models for hydraulic conductivity of partially saturated porous media rely on an oversimplified representation of the pore space as a bundle of cylindrical capillaries and disregard flow in liquid films. Recent progress in modeling liquid behavior in angular pores of partially saturated porous media offers an alternative framework. We assume that equilibrium liquid-vapor interfaces provide well-defined and stable boundaries for slow laminar film and corner flow regimes in pore space composed of angular pores connected to slit-shaped spaces. Knowledge of liquid configuration in the assumed geometry facilitates calculation of average liquid velocities in films and corners and enables derivation of pore-scale hydraulic conductivity as a function of matric potential. The pore-scale model is statistically upscaled to represent hydraulic conductivity for a sample of porous medium. Model parameters for the analytical sample-scale expressions are estimated from measured liquid retention data and other measurable medium properties. Model calculations illustrate the important role of film flow, whose contribution dominates capillary flow (in full pores and corners) at relatively high matric potentials (approximately −100 to −300 J kg⁻¹, or −1 to −3 bar). The crossover region between film and capillary flow is marked by a significant change in the slope of the hydraulic conductivity function, as often observed in measurements. Model predictions are compared with the widely applied van Genuchten-Mualem model and yield reasonable agreement with measured retention and hydraulic conductivity data over a wide range of soil textural classes.
Simulation of MEMS for the Next Generation Space Telescope
NASA Technical Reports Server (NTRS)
Mott, Brent; Kuhn, Jonathan; Broduer, Steve (Technical Monitor)
2001-01-01
The NASA Goddard Space Flight Center (GSFC) is developing optical micro-electromechanical system (MEMS) components for potential application in Next Generation Space Telescope (NGST) science instruments. In this work, we present an overview of the electro-mechanical simulation of three MEMS components for NGST, which include a reflective micro-mirror array and transmissive microshutter array for aperture control for a near infrared (NIR) multi-object spectrometer and a large aperture MEMS Fabry-Perot tunable filter for a NIR wide field camera. In all cases the device must operate at cryogenic temperatures with low power consumption and low, complementary metal oxide semiconductor (CMOS) compatible, voltages. The goal of our simulation efforts is to adequately predict both the performance and the reliability of the devices during ground handling, launch, and operation to prevent failures late in the development process and during flight. This goal requires detailed modeling and validation of complex electro-thermal-mechanical interactions and very large non-linear deformations, often involving surface contact. Various parameters such as spatial dimensions and device response are often difficult to measure reliably at these small scales. In addition, these devices are fabricated from a wide variety of materials including surface micro-machined aluminum, reactive ion etched (RIE) silicon nitride, and deep reactive ion etched (DRIE) bulk single crystal silicon. The above broad set of conditions combine to be a formidable challenge for space flight qualification analysis. These simulations represent NASA/GSFC's first attempts at implementing a comprehensive strategy to address complex MEMS structures.
Properties of two-temperature dissipative accretion flow around black holes
NASA Astrophysics Data System (ADS)
Dihingia, Indu K.; Das, Santabrata; Mandal, Samir
2018-04-01
We study the properties of two-temperature accretion flow around a non-rotating black hole in the presence of various dissipative processes, where a pseudo-Newtonian potential is adopted to mimic the effect of general relativity. The flow loses energy by means of radiative processes acting on the electrons and, at the same time, heats up as a consequence of viscous heating acting on the ions. We assume that the flow is threaded by stochastic magnetic fields, which lead to synchrotron emission from the electrons; these emissions are further strengthened by Compton scattering. We obtain the two-temperature global accretion solutions in terms of the dissipation parameters, namely viscosity (α) and accretion rate (ṁ), and find for the first time in the literature that such solutions may contain standing shock waves. Solutions of this kind are multitransonic in nature, as they simultaneously pass through both an inner critical point (xin) and an outer critical point (xout) before crossing the black hole horizon. We calculate the properties of shock-induced global accretion solutions in terms of the flow parameters. We further show that the two-temperature shocked accretion flow is not a discrete solution; instead, such solutions exist for a wide range of flow parameters. We identify the effective domain of the parameter space for standing shocks and observe that the parameter space shrinks as the dissipation is increased. Since the post-shock region is hotter due to the effect of shock compression, it naturally emits hard X-rays, and therefore the two-temperature shocked accretion solution has the potential to explain the spectral properties of black hole sources.
CONSTRAINTS FROM ASYMMETRIC HEATING: INVESTIGATING THE EPSILON AURIGAE DISK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearson, Richard L. III; Stencel, Robert E., E-mail: richard.pearson@du.edu, E-mail: robert.stencel@du.edu
2015-01-01
Epsilon Aurigae is a long-period eclipsing binary that likely contains an F0Ia star and a circumstellar disk enshrouding a hidden companion, assumed to be a main-sequence B star. High uncertainty in its parallax has kept the evolutionary status of the system in question and, hence, the true nature of each component. This unknown, as well as the absence of solid state spectral features in the infrared, requires an investigation of a wide parameter space by means of both analytic and Monte Carlo radiative transfer (MCRT) methods. The first MCRT models of epsilon Aurigae that include all three system components are presented here. We seek additional system parameter constraints by melding analytic approximations with MCRT outputs (e.g., dust temperatures) on a first-order level. The MCRT models investigate the effects of various parameters on the disk-edge temperatures; these include two distances, three particle size distributions, three compositions, and two disk masses, resulting in 36 independent models. Specifically, the MCRT temperatures permit analytic calculations of effective heating and cooling curves along the disk edge. These are used to calculate representative observed fluxes and corresponding temperatures. This novel application of thermal properties provides the basis for utilization of other binary systems containing disks. We find degeneracies in the model fits for the various parameter sets. However, the results show a preference for a carbon disk with particle size distributions ≥10 μm. Additionally, a linear correlation between the MCRT noon and basal temperatures serves as a tool for effectively eliminating portions of the parameter space.
Dissipative advective accretion disc solutions with variable adiabatic index around black holes
NASA Astrophysics Data System (ADS)
Kumar, Rajiv; Chattopadhyay, Indranil
2014-10-01
We investigated accretion onto black holes in the presence of viscosity and cooling, by employing an equation of state with a variable adiabatic index and a multispecies fluid. We obtained the expression of the generalized Bernoulli parameter, which is a constant of motion for an accretion flow in the presence of viscosity and cooling. We obtained all possible transonic solutions for a variety of boundary conditions, viscosity parameters and accretion rates. We identified the solutions with their positions in the parameter space of the generalized Bernoulli parameter and the angular momentum on the horizon. We showed that a shocked solution is more luminous than a shock-free one. For particular energies and viscosity parameters, we obtained accretion disc luminosities in the range of 10⁻⁴ to 1.2 times the Eddington luminosity, and the radiative efficiency seemed to increase with the mass accretion rate too. We found steady-state shock solutions even for high viscosity parameters, high accretion rates and a wide range of flow compositions, from purely electron-proton to lepton-dominated accretion flows. However, similar to earlier studies of inviscid flow, an accretion shock was not obtained for electron-positron pair plasma.
The nonlinear wave equation for higher harmonics in free-electron lasers
NASA Technical Reports Server (NTRS)
Colson, W. B.
1981-01-01
The nonlinear wave equation and self-consistent pendulum equation are generalized to describe free-electron laser operation in higher harmonics; this can significantly extend their tunable range to shorter wavelengths. The dynamics of the laser field's amplitude and phase are explored for a wide range of parameters using families of normalized gain curves applicable to both the fundamental and harmonics. The electron phase-space displays the fundamental physics driving the wave, and this picture is used to distinguish between the effects of high gain and Coulomb forces.
Experimental Investigation on Thermal Physical Properties of an Advanced Polyester Material
NASA Astrophysics Data System (ADS)
Guangfa, Gao; Shujie, Yuan; Ruiyuan, Huang; Yongchi, Li
Polyester materials are widely applied in aircraft and space-vehicle engineering. For an advanced polyester material, a series of experiments on its thermal physical properties was conducted, and the corresponding performance curves were obtained through statistical analysis. The experimental results showed good consistency. Thermal physical parameters such as the thermal expansion coefficient, engineering specific heat and sublimation heat were then calculated. This investigation provides an important foundation for further research on the heat resistance and thermodynamic performance of this material.
Bound states and interactions of vortex solitons in the discrete Ginzburg-Landau equation
NASA Astrophysics Data System (ADS)
Mejía-Cortés, C.; Soto-Crespo, J. M.; Vicencio, Rodrigo A.; Molina, Mario I.
2012-08-01
By using different continuation methods, we unveil a wide region in the parameter space of the discrete cubic-quintic complex Ginzburg-Landau equation, where several families of stable vortex solitons coexist. All these stationary solutions have a symmetric amplitude profile and two different topological charges. We also observe the dynamical formation of a variety of “bound-state” solutions composed of two or more of these vortex solitons. All of these stable composite structures persist in the conservative cubic limit for high values of their power content.
Analytic theory of orbit contraction and ballistic entry into planetary atmospheres
NASA Technical Reports Server (NTRS)
Longuski, J. M.; Vinh, N. X.
1980-01-01
A space object traveling through an atmosphere is governed by two forces: aerodynamic and gravitational. On this premise, equations of motion are derived to provide a set of universal entry equations applicable to all regimes of atmospheric flight: from orbital motion under the dissipative force of drag, through the dynamic phase of reentry, and finally to the point of contact with the planetary surface. Rigorous mathematical techniques such as averaging, Poincaré's method of small parameters, and Lagrange's expansion are applied to obtain a highly accurate, purely analytic theory for orbit contraction and ballistic entry into planetary atmospheres. The theory has a wide range of applications to modern problems including orbit decay of artificial satellites, atmospheric capture of planetary probes, atmospheric grazing, and ballistic reentry of manned and unmanned space vehicles.
NASA Technical Reports Server (NTRS)
Metcalf, David
1995-01-01
Multimedia Information eXchange (MIX) is a multimedia information system that accommodates multiple data types and provides consistency across platforms. Information from all over the world can be accessed quickly and efficiently with the Internet-based system. I-NET's MIX uses the World Wide Web and Mosaic graphical user interface. Mosaic is available on all platforms used at I-NET's Kennedy Space Center (KSC) facilities. Key information system design concepts and benefits are reviewed. The MIX system also defines specific configuration and helper application parameters to ensure consistent operations across the entire organization. Guidelines and procedures for other areas of importance in information systems design are also addressed. Areas include: code of ethics, content, copyright, security, system administration, and support.
NASA Astrophysics Data System (ADS)
Kuznetsova, Maria
The Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) was established at the dawn of the new millennium as a long-term flexible solution to the problem of transitioning progress in space environment modeling to operational space weather forecasting. CCMC hosts an expanding collection of state-of-the-art space weather models developed by the international space science community. Over the years the CCMC has acquired unique experience in preparing complex models and model chains for operational environments and in developing and maintaining custom displays and powerful web-based systems and tools ready to be used by researchers, space weather service providers and decision makers. In support of the space weather needs of NASA users, CCMC is developing highly tailored applications and services that target specific orbits or locations in space, and is partnering with NASA mission specialists on linking CCMC space environment modeling with impacts on biological and technological systems in space. Confidence assessment of model predictions is an essential element of space environment modeling. CCMC facilitates interaction between model owners and users in defining physical parameters and metrics formats relevant to specific applications, and leads community efforts to quantify models' ability to simulate and predict space environment events. Interactive on-line model validation systems developed at CCMC make validation a seamless part of the model development cycle. The talk will showcase innovative solutions for space weather research, validation, anomaly analysis and forecasting, and review ongoing community-wide model validation initiatives enabled by CCMC applications.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. Thus, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
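The path-sampling identity behind thermodynamic integration, ln Z = ∫₀¹ E_β[ln L] dβ, can be illustrated on a conjugate toy problem where the power-posterior expectations are available in closed form. In a real application those expectations would come from MCMC chains run at each power coefficient β; the Gaussian model below is an illustrative stand-in, not the study's groundwater models:

```python
import math

def ti_log_evidence(y, n_rungs=200):
    """Thermodynamic integration for the toy model  y ~ N(theta, 1),
    theta ~ N(0, 1).  The power posterior at coefficient beta is Gaussian
    with precision (1 + beta), so E_beta[ln L] is exact here."""
    betas = [i / n_rungs for i in range(n_rungs + 1)]
    means = []
    for b in betas:
        prec = 1.0 + b
        mu, var = b * y / prec, 1.0 / prec
        # E[(y - theta)^2] = (y - mu)^2 + var  under the power posterior
        means.append(-0.5 * math.log(2 * math.pi)
                     - 0.5 * ((y - mu) ** 2 + var))
    # trapezoidal rule over the temperature ladder from prior to posterior
    return sum(0.5 * (means[i] + means[i + 1]) * (betas[i + 1] - betas[i])
               for i in range(n_rungs))

y = 1.0
ln_z_ti = ti_log_evidence(y)
# Analytic marginal likelihood: y ~ N(0, 2) after integrating theta out
ln_z_true = -0.5 * math.log(4 * math.pi) - y ** 2 / 4
```

Because the integrand is smooth in β, even a modest temperature ladder reproduces the analytic evidence closely; with MCMC estimates of E_β[ln L], Monte Carlo noise rather than quadrature error dominates.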
On the orbital stability of pendulum-like vibrations of a rigid body carrying a rotor
NASA Astrophysics Data System (ADS)
Yehia, Hamad M.; El-Hadidy, E. G.
2013-09-01
One of the most notable effects in mechanics is the stabilization of the unstable upper equilibrium position of a symmetric body fixed from one point on its axis of symmetry, either by giving the body a suitable angular velocity or by adding a suitably spun rotor along its axis. This effect is widely used in technology and in space dynamics. The aim of the present article is to explore the effect of the presence of a rotor on a simple periodic motion of the rigid body and its motion as a physical pendulum. The variational equation for pendulum vibrations takes a form in which α depends on the moments of inertia, ρ on the gyrostatic momentum of the rotor and ν (the modulus of the elliptic function) on the total energy of the motion. This equation, which reduces to Lamé's equation when ρ = 0, has not been studied to any extent in the literature. The determination of the zones of stability and instability of plane motion reduces to finding conditions for the existence of primitive periodic solutions (with periods 4K(ν), 8K(ν)) with those parameters. Complete analysis of primitive periodic solutions of this equation is performed analogously to that of Ince for Lamé's equation. Zones of stability and instability are determined analytically and illustrated in graphical form by plotting the surfaces separating them in the three-dimensional space of parameters. The problem is also solved numerically in certain regions of the parameter space, and the results are compared to analytical ones.
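The numerical side of such a stability-zone computation can be sketched generically. For a Hill-type equation ẍ + (α + ρ·q(t))x = 0 with periodic coefficient q, the trivial solution is linearly stable exactly when the trace of the monodromy matrix satisfies |tr M| < 2. The sketch below uses q(t) = cos t as a stand-in for the paper's elliptic-function coefficient, and validates the ρ = 0 case against the closed form tr M = 2 cos(ωT):

```python
import math

def monodromy_trace(alpha, rho, q, period, n=4000):
    """Trace of the monodromy matrix of  x'' + (alpha + rho*q(t)) x = 0.
    |trace| < 2  <=>  the trivial solution is linearly stable."""
    def integrate(x0, v0):
        # classical RK4 over one period for the system (x' = v, v' = -c(t) x)
        h = period / n
        x, v, t = x0, v0, 0.0
        f = lambda t, x, v: (v, -(alpha + rho * q(t)) * x)
        for _ in range(n):
            k1 = f(t, x, v)
            k2 = f(t + h/2, x + h/2*k1[0], v + h/2*k1[1])
            k3 = f(t + h/2, x + h/2*k2[0], v + h/2*k2[1])
            k4 = f(t + h, x + h*k3[0], v + h*k3[1])
            x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
            v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
            t += h
        return x, v
    x1, v1 = integrate(1.0, 0.0)   # first fundamental solution
    x2, v2 = integrate(0.0, 1.0)   # second fundamental solution
    return x1 + v2                  # trace of [[x1, x2], [v1, v2]]

period = 2 * math.pi
stable = monodromy_trace(0.3, 0.0, math.cos, period)     # expect |tr| < 2
unstable = monodromy_trace(-0.5, 0.0, math.cos, period)  # expect |tr| > 2
expected = 2 * math.cos(period * math.sqrt(0.3))          # closed form, rho = 0
```

Sweeping α and ρ over a grid and recording where |tr M| crosses 2 traces out the stability/instability boundaries numerically, complementing the analytic Ince-style treatment.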
Reducing the uncertainty in robotic machining by modal analysis
NASA Astrophysics Data System (ADS)
Alberdi, Iñigo; Pelegay, Jose Angel; Arrazola, Pedro Jose; Ørskov, Klaus Bonde
2017-10-01
The use of industrial robots for machining could lead to high cost and energy savings for the manufacturing industry. Machining robots offer several advantages with respect to CNC machines, such as flexibility, a wide working space, adaptability and relatively low cost. However, there are some drawbacks preventing a widespread adoption of robotic solutions, namely lower stiffness, vibration/chatter problems, and lower accuracy and repeatability. Due to these issues, conservative cutting parameters are normally chosen, resulting in a low material removal rate (MRR). In this article, an example of a modal analysis of a robot is presented. For that purpose the tap-testing technology is introduced, which aims at maximizing productivity, reducing the uncertainty in the selection of cutting parameters and offering a stable process free from chatter vibrations.
Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer.
Vogel, Sven C; Biwer, Chris M; Rogers, David H; Ahrens, James P; Hackenberg, Robert E; Onken, Drew; Zhang, Jianzhong
2018-06-01
A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U-Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download.
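The extraction script itself is not reproduced in the abstract, but its essential step — flattening one row of refined parameters per Rietveld analysis into a table that a multi-axis viewer can ingest — can be sketched with the standard library. Column names and values below are hypothetical, not taken from the paper:

```python
import csv
import io

# Hypothetical results from a series of Rietveld refinements: one row per
# data set, one column per refined parameter or metadata field.
runs = [
    {"sample": "U-6Nb_A", "T_anneal_C": 400, "a_angstrom": 3.524, "Rwp": 4.1},
    {"sample": "U-6Nb_B", "T_anneal_C": 500, "a_angstrom": 3.531, "Rwp": 3.8},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(runs[0]))
writer.writeheader()
writer.writerows(runs)
table = buf.getvalue()   # CSV text: one analysis per row, one axis per column
```

Once every analysis is a row, gaps in the experimental parameter space and outlier refinements (e.g., anomalously high Rwp) become visible directly on parallel axes.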
Evolution of the squeezing-enhanced vacuum state in the amplitude dissipative channel
NASA Astrophysics Data System (ADS)
Ren, Gang; Du, Jian-ming; Zhang, Wen-hai
2018-05-01
We study the evolution of the squeezing-enhanced vacuum state (SEVS) in the amplitude dissipative channel by using the two-mode entangled state in Fock space and the Kraus operator. The explicit formulation of the output state is also given. It is found that the output state does not exhibit sub-Poissonian behavior, since the Mandel Q-parameter is nonnegative over a wide range of values of the squeezing parameter and dissipation factor. It is interesting to see that the second-order correlation function is independent of the dissipation factor. However, the photon-number distribution of the output quantum state shows remarkable oscillations with respect to the dissipation factor. The shape of the Wigner function and the degree of squeezing show that the initial SEVS is dissipated by the amplitude dissipative channel.
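The sub-Poissonian criterion used above is generic: Q = (⟨n²⟩ − ⟨n⟩²)/⟨n⟩ − 1, with Q < 0 sub-Poissonian, Q = 0 Poissonian and Q > 0 super-Poissonian. A minimal sketch of the computation from a photon-number distribution (standard textbook reference states, not the authors' channel output):

```python
import math

def mandel_q(pn):
    """Mandel Q from a photon-number distribution pn[n] = P(n)."""
    mean = sum(n * p for n, p in enumerate(pn))
    mean_sq = sum(n * n * p for n, p in enumerate(pn))
    return (mean_sq - mean ** 2) / mean - 1.0

nbar = 2.0
# Coherent state: Poissonian photon statistics, Q = 0
poisson = [math.exp(-nbar) * nbar ** n / math.factorial(n) for n in range(60)]
# Thermal state: super-Poissonian, Q = nbar
thermal = [(nbar / (1 + nbar)) ** n / (1 + nbar) for n in range(600)]
```

Checking these two limits (Q ≈ 0 and Q ≈ n̄) is a quick sanity test before evaluating Q on a numerically propagated state.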
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces offer a way to move beyond discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space so as to bring the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automatize the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC).
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) in applying the same strategy to the more complex, dynamic model r.avaflow.
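The AUROC performance indicator used here can be computed directly from the per-pixel impact indicator index and the observed impact mask via the Mann-Whitney rank-sum statistic. A minimal sketch with synthetic scores (not actual r.randomwalk output):

```python
def auroc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    impacted pixel scores higher than a randomly chosen non-impacted
    pixel (ties count half) -- the Mann-Whitney formulation."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Impact indicator index per pixel vs. observed impact (1) / no impact (0)
iii    = [0.9, 0.8, 0.6, 0.4, 0.2, 0.1]
impact = [1,   1,   0,   1,   0,   0]
score = auroc(iii, impact)
```

In the nested subrange search, each candidate parameter subrange combination yields one III map, one AUROC value and one FoC value; the guiding parameter space is then chosen among the combinations that balance the two indicators.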
Spacing, Thinning, and Pruning Practices for Young Cottonwood Plantations
Leon S. Minckler
1970-01-01
The 5-year growth of cottonwood trees planted at five spacing levels is summarized. Wide spacing resulted in better diameter and height growth, but less total wood production per acre than close spacing. Early thinning of closely spaced trees did not maintain diameter growth equal to that of trees with initial wide spacing.
Dynamics of a neuron model in different two-dimensional parameter-spaces
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams obtained numerically for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters a typical scenario is preserved: for any choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that regions close to these chaotic regions, separated by the comb teeth, organize themselves in period-adding bifurcation cascades.
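A minimal sketch of how such a two-dimensional parameter-space diagram is assembled, assuming the standard Hindmarsh-Rose equations with two swept parameters (b, I); the grid here is deliberately tiny, and the crude spike-counting classifier stands in for the paper's (unspecified here) chaos diagnostic.

```python
def hindmarsh_rose_step(state, b, I, r=0.005, s=4.0, x_rest=-1.6, dt=0.02):
    """One RK4 step of the standard Hindmarsh-Rose equations; b and I are
    the two swept parameters (one of several planes the paper considers)."""
    def f(u):
        x, y, z = u
        return (y + b * x ** 2 - x ** 3 - z + I,
                1.0 - 5.0 * x ** 2 - y,
                r * (s * (x - x_rest) - z))
    k1 = f(state)
    k2 = f(tuple(v + dt / 2 * k for v, k in zip(state, k1)))
    k3 = f(tuple(v + dt / 2 * k for v, k in zip(state, k2)))
    k4 = f(tuple(v + dt * k for v, k in zip(state, k3)))
    return tuple(v + dt / 6 * (a + 2 * p + 2 * q + w)
                 for v, a, p, q, w in zip(state, k1, k2, k3, k4))

def spike_count(b, I, steps=15000, transient=7500):
    """Integrate, discard a transient, and count spikes (local maxima above 0)."""
    u, xs = (-1.0, 0.0, 0.0), []
    for i in range(steps):
        u = hindmarsh_rose_step(u, b, I)
        if i >= transient:
            xs.append(u[0])
    return sum(1 for a, m, c in zip(xs, xs[1:], xs[2:])
               if m > a and m > c and m > 0.0)

# coarse diagram over a small (b, I) grid; the paper scans far finer grids
diagram = [[spike_count(b, I) for I in (2.0, 3.0, 3.5)] for b in (2.5, 3.0)]
```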
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of its parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of the various earthquake parameters differ, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept, the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is then studied in a space of equivalent rather than original dimensions, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The generally unknown cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free, non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: finding hierarchically closest neighbours in time-space and assessing temporal variations of earthquake clustering in a specific 4-D phase space.
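The transformation to equivalent dimensions amounts to mapping each parameter through its own cumulative distribution function. A minimal sketch, substituting a step-function empirical CDF for the paper's kernel estimator, with an illustrative toy catalogue:

```python
from bisect import bisect_right
from math import dist

def ecdf(values):
    """Empirical (step) CDF of one catalogue column; the paper uses a
    smooth kernel estimator instead."""
    xs = sorted(values)
    n = len(xs)
    return lambda x: bisect_right(xs, x) / n

def to_equivalent_dimensions(catalogue):
    """Map each parameter of each event through its own CDF, so all
    coordinates live on a linear [0, 1] scale and Euclidean distance applies."""
    cdfs = [ecdf(col) for col in zip(*catalogue)]
    return [tuple(F(x) for F, x in zip(cdfs, event)) for event in catalogue]

# toy catalogue: (magnitude, depth in km) per event -- illustrative values only
catalogue = [(2.1, 5.0), (3.4, 9.0), (2.8, 7.0)]
ed = to_equivalent_dimensions(catalogue)
nearest = min(ed[1:], key=lambda e: dist(ed[0], e))
```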
Mouse Drawer System (MDS): An autonomous hardware for supporting mice space research
NASA Astrophysics Data System (ADS)
Liu, Y.; Biticchi, R.; Alberici, G.; Tenconi, C.; Cilli, M.; Fontana, V.; Cancedda, R.; Falcetti, G.
2005-08-01
For the scientific community, the ability to fly mice in space under weightless conditions offers many valuable advantages over other rodents. These include the option of testing a wide range of wild-type and mutant animals, an increased number of animals per flight, and a reduced demand on Shuttle resources and crew time. In this study, we describe a spaceflight hardware for mice, the Mouse Drawer System (MDS). MDS can interface with the Space Shuttle middeck and the International Space Station Express Rack. It consists of a Mice Chamber, a Liquid Handling Subsystem, a Food Delivery Subsystem, an Air Conditioning Subsystem, an Illumination Subsystem, an Observation Subsystem and a Payload Control Unit. It offers single or paired containment for 6-8 mice with a mean weight of 40 grams per mouse for periods of up to 3 months. Animal tests were conducted in an MDS breadboard to validate the biocompatibility of the various subsystems. Mice survived in all tests of short and long duration. Results of blood parameter, histology and air/waste composition analyses showed that the MDS subsystems meet the NIH guidelines for temperature, humidity, food and water access, air quality, odour and waste management.
Probabilistic Mass Growth Uncertainties
NASA Technical Reports Server (NTRS)
Plumer, Eric; Elliott, Darren
2013-01-01
Mass has been widely used as an input parameter for Cost Estimating Relationships (CERs) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBEs) of the masses of space instruments and spacecraft, for both Earth-orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters of publishing similar results, using a variety of cost-driving metrics, on an annual basis. The analysis provides quantitative results showing decreasing mass growth uncertainties as mass estimate maturity increases, and is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.
NASA Astrophysics Data System (ADS)
Bin, Wang; Dong, Shiyun; Yan, Shixing; Gang, Xiao; Xie, Zhiwei
2018-03-01
Picosecond lasers have ultrashort pulse widths and ultrastrong peak powers, which makes them widely used in the field of micro/nanoscale fabrication. Polydimethylsiloxane (PDMS) is a typical silicone elastomer with good hydrophobicity. In order to further improve the hydrophobicity of PDMS, a picosecond laser was used to fabricate a grid-like microstructure on the surface of PDMS, and the relationship between the hydrophobicity of PDMS and the surface microstructure and laser processing parameters, such as the number of processing passes and the cell spacing, was studied. The results show that, compared with unprocessed PDMS, the presence of the surface microstructure significantly improved the hydrophobicity of PDMS. When the number of processing passes is constant, the hydrophobicity of PDMS decreases with increasing cell spacing. However, when the cell spacing is fixed, the hydrophobicity of PDMS first increases and then decreases with an increasing number of processing passes. In particular, when the number of laser processing passes is 6 and the cell spacing is 50 μm, the contact angle of PDMS increased from 113° to 154°, reaching the superhydrophobic level.
Ionospheric research for space weather service support
NASA Astrophysics Data System (ADS)
Stanislawska, Iwona; Gulyaeva, Tamara; Dziak-Jankowska, Beata
2016-07-01
Knowledge of the behavior of the ionosphere is very important for space weather services. A wide variety of existing and future ground-based and satellite systems (communications, radar, surveillance, intelligence gathering, satellite operation, etc.) is affected by the ionosphere. There is a need for reliable and efficient support of such systems against natural hazards and for minimization of the risk of failure. The joint research project on 'Ionospheric Weather' of IZMIRAN and SRC PAS aims to provide on-line the ionospheric parameters characterizing the space weather in the ionosphere. It is devoted to science, techniques and more application-oriented areas of ionospheric investigation in order to support space weather services. The studies, based on a data mining philosophy, increase the knowledge of ionospheric physical properties, improve modelling capabilities and broaden the application of various procedures in ionospheric monitoring and forecasting. In the framework of the joint project, novel techniques for data analysis and an original system of ionospheric disturbance indices, with their implementation for the ionosphere and ionospheric radio wave propagation, have been developed since 1997. Data from ionosonde measurements and the results of their forecasting for the ionospheric observatory network, regional and global ionospheric maps of total electron content from global navigation satellite system (GNSS) observations, global maps of the F2-layer peak parameters (foF2, hmF2) and the W-index of ionospheric variability are provided on the web pages of SRC PAS and IZMIRAN. The data processing systems include analysis and forecasting of the geomagnetic indices ap and kp and of a new eta index applied to ionosphere forecasting.
For the first time, new products of the W-index map analysis are provided in catalogues of ionospheric storms and sub-storms, and their association with global geomagnetic Dst storms is investigated. The products on the project web sites at http://www.cbk.waw.pl/rwc and http://www.izmiran.ru/services/iweather are widely used in scientific investigations and in numerous applications by telecommunication and navigation operators and users, whose numbers at the web sites grow substantially from month to month.
Penetration experiments in aluminum and Teflon targets of widely variable thickness
NASA Technical Reports Server (NTRS)
Hoerz, F.; Cintala, Mark J.; Bernhard, R. P.; See, T. H.
1994-01-01
The morphologies and detailed dimensions of hypervelocity craters and penetration holes on space-exposed surfaces faithfully reflect the initial impact conditions. However, current understanding of this postmortem evidence and its relation to such first-order parameters as impact velocity or projectile size and mass is incomplete. While considerable progress is being made in the numerical simulation of impact events, continued impact simulations in the laboratory are needed to obtain empirical constraints and insights. This contribution summarizes such experiments with Al and Teflon targets, carried out to provide a better understanding of the craters and penetration holes reported from the Solar Maximum Mission (SMM) and Long Duration Exposure Facility (LDEF) satellites. A 5-mm light-gas gun was used to fire spherical soda-lime glass projectiles from 50 to 3175 microns in diameter (D(sub P)), at a nominal 6 km/s, into Al (1100 series; annealed) and Teflon (TFE) targets. Targets ranged in thickness (T) from infinite-halfspace targets (T on the order of cm) to ultrathin foils (T on the order of microns), yielding up to 3 orders of magnitude variation in absolute and relative (D(sub P)/T) target thickness. This experimental matrix simulates the wide range in D(sub P)/T experienced by a space-exposed membrane of constant T impacted by projectiles of widely varying sizes.
The Grid[Way] Job Template Manager, a tool for parameter sweeping
NASA Astrophysics Data System (ADS)
Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.
2011-04-01
Parameter sweeping is a widely used algorithmic technique in computational science. It is especially suited for high-throughput computing, since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. Its main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. The tool supports features such as a multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value skipping and automatic indexation of job templates. Its use increases the reliability of a parameter sweep study thanks to the systematic bookkeeping of job templates and their respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort.
Program summary:
Program title: Grid[Way] Job Template Manager (version 1.0)
Catalogue identifier: AEIE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Apache license 2.0
No. of lines in distributed program, including test data, etc.: 3545
No. of bytes in distributed program, including test data, etc.: 126 879
Distribution format: tar.gz
Programming language: Perl 5.8.5 and above
Computer: Any (tested on PC x86 and x86_64)
Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6)
RAM: 10 MB
Classification: 6.5
External routines: The GridWay Metascheduler [1]
Nature of problem: To parameterize and manage an application running on a grid or cluster.
Solution method: Generation of job templates as the cross product of the input parameter sets, and management of the job template files, including job submission to the grid, control and information retrieval.
Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wildcarding of parameters cannot be done in decreasing order. Job submission, control and information retrieval are delegated to the GridWay Metascheduler.
Running time: From half a second for the simplest operation to a few minutes for thousands of exponential sampling parameters.
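The core of such a template manager is the cross product of the input parameter sets. A minimal sketch in Python (the original tool is written in Perl, and the template fields below are illustrative, not GridWay's actual template syntax):

```python
from itertools import product

def job_templates(parameters, command):
    """One template per point of the cross product of the parameter sets,
    indexed automatically; the dict fields are hypothetical stand-ins for
    the real job template attributes."""
    names = list(parameters)
    templates = []
    for i, values in enumerate(product(*(parameters[n] for n in names))):
        args = " ".join(f"--{n}={v}" for n, v in zip(names, values))
        templates.append({"index": i, "executable": command, "arguments": args})
    return templates

# a 2 x 3 sweep yields six job templates
jobs = job_templates({"beta": [0.1, 0.2], "seed": [1, 2, 3]}, "./simulate")
```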
Electrically Guided Assembly of Colloidal Particles
NASA Astrophysics Data System (ADS)
Ristenpart, W. D.; Aksay, I. A.; Saville, D. A.
2002-11-01
In earlier work it was shown that the strength and frequency of an applied electric field alter the dynamic arrangement of particles on an electrode. Two-dimensional 'gas,' 'liquid' and 'solid' arrangements were formed, depending on the field strength and frequency. Since the particles are similarly charged, yet migrate over large distances under the influence of steady or oscillatory fields, it is clear that both hydrodynamic and electrical processes are involved. Here we report on an extensive study of electrically induced ordering in a parallel electrode cell. First, we discuss the kinetics of aggregation in a DC field as measured using video microscopy and digital image analysis. Rate constants were determined as a function of applied electric field strength and particle zeta potential, and the kinetic parameters are compared to models based on electrohydrodynamic and electroosmotic fluid flow mechanisms. Second, using monodisperse micron-sized particles, we examined the average interparticle spacing over a wide range of applied frequencies and field strengths. Variation of these parameters allows formation of closely spaced arrangements and of ordered arrays of widely separated particles. We find that there is a strong dependence on frequency, but surprisingly little influence of the electric field strength past a small threshold. Third, we present experiments with binary suspensions of similarly sized particles with negative but unequal surface potentials. A long-range lateral attraction is observed in an AC field. Depending on the frequency, this attractive interaction results in a diverse set of aggregate morphologies, including superstructured hexagonal lattices. These results are discussed in terms of induced dipole-dipole interactions and electrohydrodynamic flow. Finally, we explore the implications for practical applications.
Zoom-in Simulations of Protoplanetary Disks Starting from GMC Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuffmeier, Michael; Haugbølle, Troels; Nordlund, Åke, E-mail: kueffmeier@nbi.ku.dk
2017-09-01
We investigate the formation of protoplanetary disks around nine solar-mass stars formed in the context of a (40 pc)³ Giant Molecular Cloud model, using RAMSES adaptive mesh refinement simulations extending over a scale range of about 4 million, from an outer scale of 40 pc down to cell sizes of 2 au. Our most important result is that the accretion process is heterogeneous in multiple ways: in time, in space, and among protostars of otherwise similar mass. Accretion is heterogeneous in time, in the sense that accretion rates vary during the evolution, with generally decreasing profiles whose slopes vary over a wide range, and where accretion can increase again if a protostar enters a region with increased density and low speed. Accretion is heterogeneous in space because of the mass distribution, with mass approaching the accreting star–disk system in filaments and sheets. Finally, accretion is heterogeneous among stars, since the detailed conditions and dynamics in the neighborhood of each star can vary widely. We also investigate the sensitivity of disk formation to physical conditions and test their robustness by varying numerical parameters. We find that disk formation is robust even when choosing the least favorable sink particle parameters, and that turbulence cascading from larger scales is a decisive factor in disk formation. We also investigate the transport of angular momentum, finding that the net inward mechanical transport is compensated for mainly by an outward-directed magnetic transport, with a contribution from gravitational torques usually subordinate to the magnetic transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaname, Julio; Ramirez, Ivan
2012-02-10
We present a program designed to obtain age-rotation measurements of solar-type dwarfs to be used in the calibration of gyrochronology relations at ages of several Gyr. This is a region of parameter space crucial for the large-scale study of the Milky Way, and where the only constraint available today is that provided by the Sun. Our program takes advantage of a set of wide binaries selected so that one component is an evolved star and the other is a main-sequence star of FGK type. In this way, we obtain the age of the system from the evolved star, while the rotational properties of the main-sequence component provide the information relevant for gyrochronology regarding the spin-down of solar-type stars. By mining currently available catalogs of wide binaries, we assemble a sample of 37 pairs well positioned for our purposes: 19 with turnoff or subgiant primaries and 18 with white dwarf components. Using high-resolution optical spectroscopy, we measure precise stellar parameters for a subset of 15 of the pairs with turnoff/subgiant components and use these to derive isochronal ages for the corresponding systems. Ages for 16 of the 18 pairs with white dwarf components are taken from the literature. The ages of this initial sample of 31 wide binaries range from 1 to 9 Gyr, with precisions better than ~20% for almost half of these systems. When combined with measurements of the rotation period of their main-sequence components, these wide binary systems would potentially provide a similar number of points useful for the calibration of gyrochronology relations at very old ages.
Picture of the Week: Biocrusts: small organisms, big impacts
November 20, 2015. Arid-land biocrusts, comprising bacteria, fungi, lichens and mosses, cover the soil between the widely spaced plants. These organisms play vital roles in arid lands.
Axisymmetric shell analysis of the Space Shuttle solid rocket booster field joint
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.; Anderson, Melvin S.
1989-01-01
The Space Shuttle Challenger (STS 51-L) accident led to an intense investigation of the structural behavior of the solid rocket booster (SRB) tang and clevis field joints. The presence of structural deformations between the clevis inner leg and the tang, substantial enough to prevent the O-ring seals from eliminating hot gas flow through the joints, has emerged as a likely cause of the vehicle failure. This paper presents results of axisymmetric shell analyses that parametrically assess the structural behavior of SRB field joints subjected to quasi-steady-state internal pressure loading for both the original joint flown on mission STS 51-L and the redesigned joint recently flown on the Space Shuttle Discovery. Discussion of axisymmetric shell modeling issues and details is presented and a generic method for simulating contact between adjacent shells of revolution is described. Results are presented that identify the performance trends of the joints for a wide range of joint parameters.
NASA Astrophysics Data System (ADS)
Peixoto, Leandro C.; Osório, Wislei R.; Garcia, Amauri
It is well known that thermal processing variables strongly influence the solidification structure and, as a direct consequence, the final properties of a casting. Morphological microstructural parameters such as grain size and cellular or dendritic spacings depend on the heat transfer conditions imposed by the metal/mould system. There is a need to improve the understanding of the interrelation between the microstructure, mechanical properties and corrosion resistance of dilute Pb-Sn casting alloys, which are widely used in the manufacture of battery components. The present study has established correlations between the cellular microstructure, ultimate tensile strength and corrosion resistance of Pb-1 wt% Sn and Pb-2.5 wt% Sn alloys by providing a combined plot of these properties as a function of cell spacing. It was found that a compromise between good corrosion resistance and good mechanical properties can be attained by choosing an appropriate cell spacing range.
Validation of Ionospheric Measurements from the International Space Station (ISS)
NASA Technical Reports Server (NTRS)
Coffey, Victoria; Minow, Joseph; Wright, Kenneth
2009-01-01
The International Space Station orbit provides an ideal platform for in-situ studies of space weather effects on the mid- and low-latitude F-2 region ionosphere. The Floating Potential Measurement Unit (FPMU), operating on the ISS since August 2006, is a suite of plasma instruments: a Floating Potential Probe (FPP), a Plasma Impedance Probe (PIP), a Wide-sweep Langmuir Probe (WLP), and a Narrow-sweep Langmuir Probe (NLP). This instrument package provides a new opportunity for collaborative multi-instrument studies of the F-region ionosphere during both quiet and disturbed periods. This presentation first describes the operational parameters for each of the FPMU probes and shows examples of an intra-instrument validation. We then show comparisons with the plasma density and temperature measurements derived from the TIMED GUVI ultraviolet imager, the Millstone Hill ground-based incoherent scatter radar, and DIAS digisondes. Finally, we show one of several observations of night-time equatorial density holes demonstrating the capabilities of the probes for monitoring mid- and low-latitude plasma processes.
Hypervelocity Impact Test Facility: A gun for hire
NASA Technical Reports Server (NTRS)
Johnson, Calvin R.; Rose, M. F.; Hill, D. C.; Best, S.; Chaloupka, T.; Crawford, G.; Crumpler, M.; Stephens, B.
1994-01-01
An affordable technique has been developed to duplicate the types of impacts observed on spacecraft, including the Shuttle, by use of a certified Hypervelocity Impact Facility (HIF) which propels particulates using capacitor driven electric gun techniques. The fully operational facility provides a flux of particles in the 10-100 micron diameter range with a velocity distribution covering the space debris and interplanetary dust particle environment. HIF measurements of particle size, composition, impact angle and velocity distribution indicate that such parameters can be controlled in a specified, tailored test designed for or by the user. Unique diagnostics enable researchers to fully describe the impact for evaluating the 'targets' under full power or load. Users regularly evaluate space hardware, including solar cells, coatings, and materials, exposing selected portions of space-qualified items to a wide range of impact events and environmental conditions. Benefits include corroboration of data obtained from impact events, flight simulation of designs, accelerated aging of systems, and development of manufacturing techniques.
In-situ Observations of the Ionospheric F2-Region from the International Space Station
NASA Technical Reports Server (NTRS)
Coffey, Victoria N.; Wright, Kenneth H.; Minow, Joseph I.; Chandler, Michael O.; Parker, Linda N.
2008-01-01
The International Space Station orbit provides an ideal platform for in-situ studies of space weather effects on the mid- and low-latitude F-2 region ionosphere. The Floating Potential Measurement Unit (FPMU), operating on the ISS since August 2006, is a suite of plasma instruments: a Floating Potential Probe (FPP), a Plasma Impedance Probe (PIP), a Wide-sweep Langmuir Probe (WLP), and a Narrow-sweep Langmuir Probe (NLP). This instrument package provides a new opportunity for collaborative multi-instrument studies of the F-region ionosphere during both quiet and disturbed periods. This presentation first describes the operational parameters for each of the FPMU probes and shows examples of an intra-instrument validation. We then show comparisons with the plasma density and temperature measurements derived from the TIMED GUVI ultraviolet imager, the Millstone Hill ground-based incoherent scatter radar, and DIAS digisondes. Finally, we show one of several observations of night-time equatorial density holes demonstrating the capabilities of the probes for monitoring mid- and low-latitude plasma processes.
The MASSIVE Survey. IX. Photometric Analysis of 35 High-mass Early-type Galaxies with HST WFC3/IR
NASA Astrophysics Data System (ADS)
Goullaud, Charles F.; Jensen, Joseph B.; Blakeslee, John P.; Ma, Chung-Pei; Greene, Jenny E.; Thomas, Jens
2018-03-01
We present near-infrared observations of 35 of the most massive early-type galaxies in the local universe. The observations were made using the infrared channel of the Hubble Space Telescope Wide Field Camera 3 (WFC3) in the F110W (1.1 μm) filter. We measured surface brightness profiles and elliptical isophotal fit parameters from the nuclear regions out to a radius of ∼10 kpc in most cases. We find that 37% (13) of the galaxies in our sample have isophotal position-angle rotations greater than 20° over the radial range imaged by WFC3/IR, which is often due to the presence of neighbors or multiple nuclei. Most galaxies in our sample are significantly rounder near the center than in the outer regions. This sample contains 6 fast rotators and 28 slow rotators. We find that all fast rotators are either disky or show no measurable deviation from purely elliptical isophotes. Among slow rotators, significantly disky and boxy galaxies occur with nearly equal frequency. The galaxies in our sample often exhibit changing isophotal shapes, sometimes showing both significantly disky and boxy isophotes at different radii. The fact that parameters vary widely between galaxies and within individual galaxies is evidence that these massive galaxies have complicated formation histories, and some of them have experienced recent mergers and have not fully relaxed. These data demonstrate the value of IR imaging of galaxies at high spatial resolution and provide measurements necessary for determining stellar masses, dynamics, and black hole masses in high-mass galaxies. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with Program GO-14219.
NASA Astrophysics Data System (ADS)
Lemaitre, Gerard R.; Montiel, Pierre; Joulie, Patrice; Dohlen, Kjetil; Lanzoni, Patrick
2004-09-01
Wide-field astronomy requires larger telescopes. Compared to the catadioptric Schmidt, the optical properties of a three-mirror telescope provide significant advantages: (1) the flat-field design is anastigmatic at any wavelength; (2) the system is extremely compact -- four times shorter than a Schmidt; and (3) whereas a Schmidt with a refractive corrector requires the polishing of three optical surfaces, the presently proposed Modified-Rumsey design uses all eight available free parameters of a flat-fielded anastigmatic three-mirror telescope for mirrors generated by active optics methods. Compared to a Rumsey design, these parameters include the additional slope continuity condition at the primary-tertiary link for in-situ stressing and aspherization from a common sphere. Active optics then allows the polishing of only two spherical surfaces: the combined primary-tertiary mirror and the secondary mirror. All mirrors are spheroids of the hyperboloid type. This compact system is of interest for space and ground-based astronomy and allows larger wide-field telescopes to be built, as demonstrated by the design and construction of the identical telescopes MINITRUST-1 and -2, f/5 - 2° FOV, consisting of an in-situ-stressed double-vase-form primary-tertiary and a stress-polished tulip-form secondary. Optical tests of these telescopes, showing diffraction-limited images, are presented.
General ecological models for human subsistence, health and poverty.
Ngonghala, Calistus N; De Leo, Giulio A; Pascual, Mercedes M; Keenan, Donald C; Dobson, Andrew P; Bonds, Matthew H
2017-08-01
The world's rural poor rely heavily on their immediate natural environment for subsistence and suffer high rates of morbidity and mortality from infectious diseases. We present a general framework for modelling subsistence and health of the rural poor by coupling simple dynamic models of population ecology with those for economic growth. The models show that feedbacks between the biological and economic systems can lead to a state of persistent poverty. Analyses of a wide range of specific systems under alternative assumptions show the existence of three possible regimes corresponding to a globally stable development equilibrium, a globally stable poverty equilibrium and bistability. Bistability consistently emerges as a property of generalized disease-economic systems for about a fifth of the feasible parameter space. The overall proportion of parameters leading to poverty is larger than that resulting in healthy/wealthy development. All the systems are found to be most sensitive to human disease parameters. The framework highlights feedbacks, processes and parameters that are important to measure in studies of rural poverty to identify effective pathways towards sustainable development.
A panning DLT procedure for three-dimensional videography.
Yu, B; Koh, T J; Hay, J G
1993-06-01
The direct linear transformation (DLT) method [Abdel-Aziz and Karara, APS Symposium on Photogrammetry. American Society of Photogrammetry, Falls Church, VA (1971)] is widely used in biomechanics to obtain three-dimensional space coordinates from film and video records. This method has some major shortcomings when used to analyze events which take place over large areas. To overcome these shortcomings, a three-dimensional data collection method based on the DLT method, and making use of panning cameras, was developed. Several small single control volumes were combined to construct a large total control volume. For each single control volume, a regression equation (calibration equation) is developed to express each of the 11 DLT parameters as a function of camera orientation, so that the DLT parameters can then be estimated for arbitrary camera orientations. Once the DLT parameters are known for at least two cameras, and the associated two-dimensional film or video coordinates of the event are obtained, the desired three-dimensional space coordinates can be computed. In a laboratory test, five single control volumes (in a total control volume of 24.40 x 2.44 x 2.44 m³) were used to test the effect of the position of the single control volume on the accuracy of the computed three-dimensional space coordinates. Linear and quadratic calibration equations were used to test the effect of the order of the equation on the accuracy of the computed three-dimensional space coordinates. For four of the five single control volumes tested, the mean resultant errors associated with the use of the linear calibration equation were significantly larger than those associated with the use of the quadratic calibration equation. The position of the single control volume had no significant effect on the mean resultant errors in the computed three-dimensional coordinates when the quadratic calibration equation was used.
Under the same data collection conditions, the mean resultant errors in the computed three dimensional coordinates associated with the panning and stationary DLT methods were 17 and 22 mm, respectively. The major advantages of the panning DLT method lie in the large image sizes obtained and in the ease with which the data can be collected. The method also has potential for use in a wide variety of contexts. The major shortcoming of the method is the large amount of digitizing necessary to calibrate the total control volume. Adaptations of the method to reduce the amount of digitizing required are being explored.
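The reconstruction step at the heart of the method, given known DLT parameters for two cameras, can be sketched as a small least-squares problem. The 11-parameter vectors below are made-up illustrative values, not calibration results from the paper:

```python
import numpy as np

def dlt_project(L, xyz):
    """Forward DLT: map a 3-D point to image coordinates (u, v)."""
    X, Y, Z = xyz
    d = L[8]*X + L[9]*Y + L[10]*Z + 1.0
    return ((L[0]*X + L[1]*Y + L[2]*Z + L[3]) / d,
            (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / d)

def dlt_reconstruct(params, uv):
    """Least-squares 3-D reconstruction from two or more calibrated views."""
    A, b = [], []
    for L, (u, v) in zip(params, uv):
        # Rearranged DLT equations: linear in the unknown (X, Y, Z).
        A.append([L[0]-u*L[8], L[1]-u*L[9], L[2]-u*L[10]]); b.append(u - L[3])
        A.append([L[4]-v*L[8], L[5]-v*L[9], L[6]-v*L[10]]); b.append(v - L[7])
    xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return xyz

# Two illustrative (made-up) 11-parameter sets, one per camera:
cams = [[1, 0, 0, 0.5, 0, 1, 0, 0.2, 0.01, 0.02, 0.03],
        [0, 1, 0.2, 0, 0.3, 0, 1, 0, 0.02, 0.01, 0.04]]
point = np.array([1.0, 2.0, 3.0])
uv = [dlt_project(L, point) for L in cams]
recovered = dlt_reconstruct(cams, uv)   # -> approximately [1, 2, 3]
```

With a panning camera, the same solver applies once the 11 parameters have been predicted from the camera orientation via the calibration equations.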
Theoretical analysis of single molecule spectroscopy lineshapes of conjugated polymers
NASA Astrophysics Data System (ADS)
Devi, Murali
Conjugated polymers (CPs) exhibit a wide range of highly tunable optical properties. A quantitative and detailed understanding of the nature of the excitons responsible for such rich optical behavior has significant implications for better utilization of CPs in more efficient plastic solar cells and other novel optoelectronic devices. In general, samples of CPs are plagued by substantial inhomogeneous broadening due to various sources of disorder. Single molecule emission spectroscopy (SMES) offers a unique opportunity to investigate the energetics and dynamics of excitons and their interactions with phonon modes. The major subject of the present thesis is to analyze and understand room-temperature SMES lineshapes for a particular CP, poly(2,5-di-(2'-ethylhexyloxy)-1,4-phenylenevinylene) (DEH-PPV). A minimal quantum mechanical model of a two-level system coupled to a Brownian oscillator bath is utilized. The main objective is to identify the set of model parameters best fitting the SMES lineshape for each of about 200 samples of DEH-PPV, from which new insight into the nature of exciton-bath coupling can be gained. This project also entails developing a reliable computational methodology for quantum mechanical modeling of spectral lineshapes in general. Well-known optimization techniques such as gradient descent, genetic algorithms, and heuristic searches have been tested, employing an L2 measure between theoretical and experimental lineshapes to guide the optimization. However, all of these tend to produce theoretical lineshapes qualitatively different from experimental ones, which is attributed to the ruggedness of the parameter space and the inadequacy of the L2 measure. On the other hand, when the original parameter space was dynamically reduced to a two-parameter space through feature searching, with the search-space paths visualized using directed acyclic graphs (DAGs), the qualitative nature of the fitting improved significantly.
For a more satisfactory fit, it is shown that the inclusion of an additional energetic disorder, representing the effect of quasi-static disorder accumulated during the SMES of each polymer, is essential. Various technical details, ambiguous issues, and implications of the present work are discussed.
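The L2-guided search described above can be illustrated with a toy fit, using a Lorentzian as a stand-in lineshape and a brute-force scan of a reduced two-parameter (center, width) space; the grids and values here are hypothetical, not the thesis's model:

```python
import numpy as np

def lorentzian(w, w0, gamma):
    """Normalized Lorentzian lineshape."""
    return (gamma / np.pi) / ((w - w0)**2 + gamma**2)

w = np.linspace(-5, 5, 501)
target = lorentzian(w, 0.8, 0.5)      # stand-in "experimental" lineshape

# Brute-force scan of the reduced (center, width) space with an L2 objective:
centers = np.linspace(-2, 2, 81)      # grid spacing 0.05
widths = np.linspace(0.1, 2.0, 77)    # grid spacing 0.025
best = min((np.sum((lorentzian(w, c, g) - target)**2), c, g)
           for c in centers for g in widths)
w0_fit, gamma_fit = best[1], best[2]  # both true values lie on the grid
```

On a rugged, higher-dimensional objective the same scan would stall in local minima, which is the motivation for the feature-based reduction described in the abstract.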
Prospects for Chemically Tagging Stars in the Galaxy
NASA Astrophysics Data System (ADS)
Ting, Yuan-Sen; Conroy, Charlie; Goodman, Alyssa
2015-07-01
It is now well-established that the elemental abundance patterns of stars hold key clues not only to their formation, but also to the assembly histories of galaxies. One of the most exciting possibilities is the use of stellar abundance patterns as “chemical tags” to identify stars that were born in the same molecular cloud. In this paper, we assess the prospects of chemical tagging as a function of several key underlying parameters. We show that in the fiducial case of 10^4 distinct cells in chemical space and 10^5-10^6 stars in the survey, one can expect to detect ~10^2-10^3 groups that are ≥ 5σ overdensities in the chemical space. However, we find that even very large overdensities in chemical space do not guarantee that the overdensity is due to a single set of stars from a common birth cloud. In fact, for our fiducial model parameters, the typical 5σ overdensity is comprised of stars from a wide range of clusters, with the most dominant cluster contributing only 25% of the stars. The most important factors limiting the identification of disrupted clusters via chemical tagging are the number of cells in the chemical space and the survey sampling rate of the underlying stellar population. Both of these factors can be improved through strategic observational plans. While recovering individual clusters through chemical tagging may prove challenging, we show, in agreement with previous work, that different cluster mass functions (CMFs) imprint different degrees of clumpiness in chemical space. These differences provide the opportunity to statistically reconstruct the slope and high-mass cutoff of the CMF and its evolution through cosmic time.
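A back-of-the-envelope sketch of the counting argument, assuming a uniform null over the chemical cells (with the fiducial 10^4 cells and 10^5 stars) and a Gaussian-style 5σ threshold, shows that significant cells are essentially absent without real clustering:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stars, n_cells = 10**5, 10**4
mu = n_stars / n_cells                    # 10 stars per cell on average
# Distribute stars uniformly over cells (the no-clustering null):
counts = rng.multinomial(n_stars, np.full(n_cells, 1 / n_cells))
threshold = mu + 5 * np.sqrt(mu)          # 5-sigma overdensity criterion
n_detected = (counts > threshold).sum()   # essentially zero under the null
```

Any groups detected in a real survey therefore reflect genuine clumpiness, although, as the paper stresses, a significant cell need not correspond to a single birth cluster.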
van Oostrom, Conny T.; Jonker, Martijs J.; de Jong, Mark; Dekker, Rob J.; Rauwerda, Han; Ensink, Wim A.; de Vries, Annemieke; Breit, Timo M.
2014-01-01
In transcriptomics research, design for experimentation by carefully considering biological, technological, practical and statistical aspects is very important, because the experimental design space is essentially limitless. Usually, the ranges of variable biological parameters of the design space are based on common practices and in turn on phenotypic endpoints. However, specific sub-cellular processes might only be partially reflected by phenotypic endpoints or outside the associated parameter range. Here, we provide a generic protocol for range finding in design for transcriptomics experimentation based on small-scale gene-expression experiments to help in the search for the right location in the design space by analyzing the activity of already known genes of relevant molecular mechanisms. Two examples illustrate the applicability: in-vitro UV-C exposure of mouse embryonic fibroblasts and in-vivo UV-B exposure of mouse skin. Our pragmatic approach is based on: framing a specific biological question and associated gene-set, performing a wide-ranged experiment without replication, eliminating potentially non-relevant genes, and determining the experimental ‘sweet spot’ by gene-set enrichment plus dose-response correlation analysis. Examination of many cellular processes that are related to UV response, such as DNA repair and cell-cycle arrest, revealed that basically each cellular (sub-) process is active at its own specific spot(s) in the experimental design space. Hence, the use of range finding, based on an affordable protocol like this, enables researchers to conveniently identify the ‘sweet spot’ for their cellular process of interest in an experimental design space and might have far-reaching implications for experimental standardization. PMID:24823911
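The gene-elimination step of the protocol can be sketched as a dose-response correlation screen; the dose grid, gene set, and effect sizes below are hypothetical stand-ins, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])   # wide-ranged, no replicates
# Hypothetical 5-gene "UV-response" set: genes 0-2 respond to dose, 3-4 do not.
slopes = np.array([2.0, 1.5, 1.0, 0.0, 0.0])
expr = slopes[:, None] * doses[None, :] + rng.normal(0.0, 0.3, (5, 6))

# Eliminate potentially non-relevant genes via dose-response correlation:
r = np.array([np.corrcoef(doses, g)[0, 1] for g in expr])
relevant = np.abs(r) > 0.8
```

The surviving genes would then feed the gene-set enrichment step that locates the 'sweet spot' in the design space.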
NASA Technical Reports Server (NTRS)
Hartley, Craig S.
1990-01-01
To augment the capabilities of the Space Transportation System, NASA has funded studies and developed programs aimed at developing reusable, remotely piloted spacecraft and satellite servicing systems capable of delivering, retrieving, and servicing payloads at altitudes and inclinations beyond the reach of the present Shuttle Orbiters. Since the mid-1970s, researchers at the Martin Marietta Astronautics Group Space Operations Simulation (SOS) Laboratory have been engaged in investigations of remotely piloted and supervised autonomous spacecraft operations. These investigations were based on high-fidelity, real-time simulations and have covered a wide range of human factors issues related to controllability. Among these are: (1) mission conditions, including thruster plume impingements and signal time delays; (2) vehicle performance variables, including control authority, control harmony, minimum impulse, and cross coupling of accelerations; (3) maneuvering task requirements such as target distance and dynamics; (4) control parameters including various control modes and rate/displacement deadbands; and (5) display parameters involving camera placement and function, visual aids, and presentation of operational feedback from the spacecraft. This presentation includes a brief description of the capabilities of the SOS Lab to simulate real-time free-flyer operations using live video, advanced technology ground and on-orbit workstations, and sophisticated computer models of on-orbit spacecraft behavior. Sample results from human factors studies in the five categories cited above are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaikuad, Apirat, E-mail: apirat.chaikuad@sgc.ox.ac.uk; Knapp, Stefan; Johann Wolfgang Goethe-University, Building N240 Room 3.03, Max-von-Laue-Strasse 9, 60438 Frankfurt am Main
An alternative strategy for PEG sampling is suggested through the use of four newly defined PEG smears to enhance chemical space in reduced screens with a benefit towards protein crystallization. The quest for an optimal limited set of effective crystallization conditions remains a challenge in macromolecular crystallography, an issue that is complicated by the large number of chemicals which have been deemed to be suitable for promoting crystal growth. The lack of rational approaches towards the selection of successful chemical space and representative combinations has led to significant overlapping conditions, which are currently present in a multitude of commercially available crystallization screens. Here, an alternative approach to the sampling of widely used PEG precipitants is suggested through the use of PEG smears, which are mixtures of different PEGs with a requirement of either neutral or cooperatively positive effects of each component on crystal growth. Four newly defined smears were classified by molecular-weight groups and enabled the preservation of specific properties related to different polymer sizes. These smears not only allowed a wide coverage of properties of these polymers, but also reduced PEG variables, enabling greater sampling of other parameters such as buffers and additives. The efficiency of the smear-based screens was evaluated on more than 220 diverse recombinant human proteins, which overall revealed a good initial crystallization success rate of nearly 50%. In addition, in several cases successful crystallizations were only obtained using PEG smears, while various commercial screens failed to yield crystals. The defined smears therefore offer an alternative approach towards PEG sampling, which will benefit the design of crystallization screens sampling a wide chemical space of this key precipitant.
Scarduelli, Lucia; Giacchini, Roberto; Parenti, Paolo; Migliorati, Sonia; Di Brisco, Agnese Maria; Vighi, Marco
2017-11-01
Biomarkers are widely used in ecotoxicology as indicators of exposure to toxicants. However, their ability to provide ecologically relevant information remains controversial. One of the major problems is understanding whether the measured responses are determined by stress factors or lie within the natural variability range. In a previous work, the natural variability of enzymatic levels in invertebrates sampled in pristine rivers was proven to be relevant across both space and time. In the present study, the experimental design was improved by considering different life stages of the selected taxa and by measuring more environmental parameters. The experimental design considered sampling sites in 2 different rivers, 8 sampling dates covering the whole seasonal cycle, 4 species from 3 different taxonomic groups (Plecoptera, Perla grandis; Ephemeroptera, Baetis alpinus and Epeorus alpicula; Trichoptera, Hydropsyche pellucidula), different life stages for each species, and 4 enzymes (acetylcholinesterase, glutathione S-transferase, alkaline phosphatase, and catalase). Biomarker levels were related to environmental (physicochemical) parameters to verify any kind of dependence. Data were statistically elaborated using hierarchical multilevel Bayesian models. Natural variability was found to be relevant across both space and time. The results of the present study proved that care should be paid when interpreting biomarker results. Further research is needed to better understand the dependence of the natural variability on environmental parameters. Environ Toxicol Chem 2017;36:3158-3167. © 2017 SETAC.
Stellar photometry with the Wide Field/Planetary Camera of the Hubble Space Telescope
NASA Astrophysics Data System (ADS)
Holtzman, Jon A.
1990-07-01
Simulations of Wide Field/Planetary Camera (WF/PC) images are analyzed in order to discover the most effective techniques for stellar photometry and to evaluate the accuracy and limitations of these techniques. The capabilities and operation of the WF/PC and the simulations employed in the study are described. The basic techniques of stellar photometry and methods to improve these techniques for the WF/PC are discussed. The correct parameters for star detection, aperture photometry, and point-spread function (PSF) fitting with the DAOPHOT software of Stetson (1987) are determined. Consideration is given to undersampling of the stellar images by the detector, variations in the PSF, and crowding of the stellar images. It is noted that, with some changes, DAOPHOT is able to generate photometry almost to the level of photon statistics.
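The basic aperture-photometry step, one of the techniques tuned in the study, can be sketched on a synthetic undersampled star; DAOPHOT itself is not used here, and the PSF width, aperture, and sky values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
yy, xx = np.mgrid[:n, :n]
x0 = y0 = n / 2                           # star centered on a pixel
sigma, flux, sky = 1.5, 5000.0, 10.0      # undersampled PSF; counts per pixel
model = sky + flux / (2 * np.pi * sigma**2) * np.exp(
    -((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
img = rng.poisson(model).astype(float)    # photon noise

r = np.hypot(xx - x0, yy - y0)
aperture = r < 5                          # ~3.3-sigma aperture (>99% of flux)
annulus = (r > 8) & (r < 12)              # local sky estimate
sky_est = np.median(img[annulus])
flux_est = img[aperture].sum() - sky_est * aperture.sum()
```

PSF fitting improves on this in crowded fields by modeling overlapping stars simultaneously, which is where DAOPHOT's parameter choices matter most.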
Morizane, Kazuaki; Takemoto, Mitsuru; Neo, Masashi; Fujibayashi, Shunsuke; Otsuki, Bungo; Kawata, Tomotoshi; Matsuda, Shuichi
2018-05-01
The occipito-C2 angle (O-C2a) influences the oropharyngeal space. However, O-C2a has several limitations. There is no normal value of O-C2a because of the wide individual variations, and O-C2a does not reflect translation of the cranium to the axis, another factor influencing the oropharyngeal space in patients with atlantoaxial subluxation. The objective of this study was to propose a novel parameter that accounts for craniocervical junction alignment (CJA) and the oropharyngeal space. This is a post hoc analysis of craniocervical radiological parameters from another study. Forty healthy volunteers were included in the study. Craniocervical measurement parameters included the occipital and external acoustic meatus to axis angle (O-EAa), the C2 tilting angle (C2Ta), O-C2a, and the anterior-posterior distance of the narrowest oropharyngeal airway space (nPAS). We collected 40 healthy volunteers' lateral cervical radiographs in neutral, flexion, extension, protrusion, and retraction positions. We measured O-C2a, C2Ta (formed by the inferior end plate of C2 and a line connecting the external acoustic meatus and the midpoint of the inferior end plate of C2 [EA-line]), O-EAa (formed by the McGregor line and the EA-line), and nPAS. We evaluated the inter-rater and intrarater reliability of O-EAa and C2Ta, and the associations between each of the measured parameters. The inter-rater and intrarater reliabilities of measuring O-EAa and C2Ta were excellent. The neutral position O-EAa values remained in a narrower range (mean±standard deviation, 90.0°±5.0°) than O-C2a (15.6°±6.7°) (Levene test of equality of variances, p=.044). In the linear mixed-effects models, sex, O-C2a, C2Ta, and O-EAa were significantly associated with nPAS. The marginal R² values for the mixed-effect models, which express the variance explained by fixed effects, were 0.605 and 0.632 for the O-C2a and O-EAa models, respectively.
In all models, the subaxial alignment (C2-C6a) had no significant association with nPAS. The O-EAa may be a useful parameter of CJA with several advantages over O-C2a, including less individual variation, easier visual recognition during surgery, and improved prediction of postoperative nPAS after occipitocervical fusion. Copyright © 2017 Elsevier Inc. All rights reserved.
Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.
El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher
2018-01-01
Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.
NASA Astrophysics Data System (ADS)
Cisneros, Sophia
2013-04-01
We present a new, heuristic, two-parameter model for predicting the rotation curves of disc galaxies. The model is tested on 22 randomly chosen galaxies, represented in 35 data sets. This Lorentz Convolution [LC] model is derived from a non-linear, relativistic solution of a Kerr-type wave equation, where small changes in the photon frequencies, resulting from the curved spacetime, are convolved into a sequence of Lorentz transformations. The LC model is parametrized with only the diffuse, luminous stellar and gaseous masses reported with each data set of observations used. The LC model predicts observed rotation curves across a wide range of disc galaxies. The LC model was constructed to occupy the same place in the explanation of rotation curves that dark matter does, so that a simple investigation of the relation between luminous and dark matter might be made via a parameter (a). We find the parameter (a) to demonstrate interesting structure. We compare the new model prediction to both the NFW model and MOND fits when available.
Space shuttle OMS helium regulator design and development
NASA Technical Reports Server (NTRS)
Wichmann, H.; Kelly, T. L.; Lynch, R.
1974-01-01
Analysis, design, fabrication, and design verification testing were conducted to establish the technological feasibility of the helium pressurization regulator for the space shuttle orbital maneuvering system application. A prototype regulator was fabricated as a single-stage design featuring the most reliable and lowest cost concept. A tradeoff study on regulator concepts indicated that a single-stage regulator with a lever arm between the valve and the actuator section would offer significant weight savings. Damping concepts were tested to determine the amount of damping required to restrict actuator travel during vibration. Component design parameters such as spring rates, effective area, contamination cutting, and damping were determined by test prior to regulator final assembly. The unit was subjected to performance testing at widely ranging flow rates, temperatures, inlet pressures, and random vibration levels. A test plan for propellant compatibility and extended life tests is included.
Galileo 1989 VEEGA trajectory design. [Venus-Earth-Earth-Gravity-Assist
NASA Technical Reports Server (NTRS)
D'Amario, Louis A.; Byrnes, Dennis V.; Johannesen, Jennie R.; Nolan, Brian G.
1989-01-01
The new baseline for the Galileo Mission is a 1989 Venus-earth-earth gravity-assist (VEEGA) trajectory, which utilizes three gravity-assist planetary flybys in order to reduce launch energy requirements significantly compared to other earth-Jupiter transfer modes. The launch period occurs during October-November 1989. The total flight time is about 6 years, with November 1995 as the most likely choice for arrival at Jupiter. Optimal 1989 VEEGA trajectories have been generated for a wide range of earth launch dates and Jupiter arrival dates. Launch/arrival space contour plots are presented for various trajectory parameters, including propellant margin, which is used to measure mission performance. The accessible region of the launch/arrival space is defined by propellant margin and launch energy constraints; the available launch period is approximately 1.5 months long.
Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K
2017-12-04
Wireless body area network (WBAN) is a breakthrough technology in healthcare areas such as hospitals and telemedicine. The human body is a complex mixture of different tissues, and the propagation of electromagnetic signals is expected to be distinct in each of them. This forms the basis of the WBAN channel, which differs from other environments. In this paper, knowledge of the Ultra Wide Band (UWB) channel is explored in the WBAN (IEEE 802.15.6) system, with channel parameters measured over the 3.1-10.6 GHz frequency range. The proposed system transmits data at up to 480 Mbps using LDPC-coded, APSK-modulated differential space-time-frequency coded MB-OFDM to increase throughput and power efficiency.
Space Flight Plasma Data Analysis
NASA Technical Reports Server (NTRS)
Wright, Kenneth H.; Minow, Joseph I.
2009-01-01
This slide presentation reviews a method to analyze the plasma data reported on board the International Space Station (ISS). The Floating Potential Measurement Unit (FPMU), whose role is to obtain floating potential and ionospheric plasma measurements for validating the ISS charging model, assessing photovoltaic array variability, and interpreting IRI predictions, is composed of four probes: the Floating Potential Probe (FPP), the Wide-sweep Langmuir Probe (WLP), the Narrow-sweep Langmuir Probe (NLP), and the Plasma Impedance Probe (PIP). This arrangement gives redundant measurements of each parameter. There are also many 'boxes' that the data must pass through before being captured by the ground station, which introduces telemetry noise. Methods of analysis for the various signals from the different probes are reviewed. There is also a brief discussion of Langmuir probe (LP) analysis of a low Earth orbit plasma simulation source.
Size-density scaling in protists and the links between consumer-resource interaction parameters.
DeLong, John P; Vasseur, David A
2012-11-01
Recent work indicates that the interaction between body-size-dependent demographic processes can generate macroecological patterns such as the scaling of population density with body size. In this study, we evaluate this possibility for grazing protists and also test whether demographic parameters in these models are correlated after controlling for body size. We compiled data on the body-size dependence of consumer-resource interactions and population density for heterotrophic protists grazing algae in laboratory studies. We then used nested dynamic models to predict both the height and slope of the scaling relationship between population density and body size for these protists. We also controlled for consumer size and assessed links between model parameters. Finally, we used the models and the parameter estimates to assess the individual- and population-level dependence of resource use on body size and prey-size selection. The predicted size-density scaling for all models matched closely to the observed scaling, and the simplest model was sufficient to predict the pattern. Variation around the mean size-density scaling relationship may be generated by variation in prey productivity and area of capture, but residuals are relatively insensitive to variation in prey size selection. After controlling for body size, many consumer-resource interaction parameters were correlated, and a positive correlation between residual prey size selection and conversion efficiency neutralizes the apparent fitness advantage of taking large prey. Our results indicate that widespread community-level patterns can be explained with simple population models that apply consistently across a range of sizes. They also indicate that the parameter space governing the dynamics and the steady states in these systems is structured such that some parts of the parameter space are unlikely to represent real systems.
Finally, predator-prey size ratios represent a kind of conundrum, because they are widely observed but apparently have little influence on population size and fitness, at least at this level of organization. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
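The height and slope of a size-density scaling relationship are conventionally estimated by log-log regression; the sketch below uses synthetic data with an assumed -3/4 slope and illustrative scatter, not the compiled protist data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic density ~ a * M^b data (a, b, and the scatter are illustrative;
# b = -3/4 is the classic expectation, not a value taken from the paper):
mass = 10 ** rng.uniform(-9, -5, 60)                  # body mass
true_a, true_b = 1e-3, -0.75
density = true_a * mass**true_b * 10 ** rng.normal(0, 0.2, 60)

# Height (intercept) and slope of the size-density scaling:
slope, intercept = np.polyfit(np.log10(mass), np.log10(density), 1)
```

The paper's contribution is to predict both quantities from consumer-resource dynamics rather than simply fitting them.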
Color-Space Outliers in DPOSS: Quasars and Peculiar Objects
NASA Astrophysics Data System (ADS)
Djorgovski, S. G.; Gal, R. R.; Mahabal, A.; Brunner, R.; Castro, S. M.; Odewahn, S. C.; de Carvalho, R. R.; DPOSS Team
2000-12-01
The processing of DPOSS, a digital version of the POSS-II sky atlas, is now nearly complete. The resulting Palomar-Norris Sky Catalog (PNSC) is expected to contain > 5 x 10^7 galaxies and > 10^9 stars, including large numbers of quasars and other unresolved sources. For objects morphologically classified as stellar (i.e., PSF-like), colors and magnitudes provide the only additional source of discriminating information. We investigate the distribution of objects in the parameter space of (g-r) and (r-i) colors as a function of magnitude. Normal stars form a well-defined (temperature) sequence in this parameter space, and we explore the nature of the objects which deviate significantly from this stellar locus. The causes of the deviations include: non-thermal or peculiar spectra, intergalactic absorption (for high-z quasars), presence of strong emission lines in one or more of the bandpasses, or strong variability (because the plates are taken at widely separated epochs). In addition to minor contamination by misclassified compact galaxies, we find the following: (1) Quasars at z > 4; to date, ~ 100 of these objects have been found and used for a variety of follow-up studies. They are made publicly available immediately after discovery through http://astro.caltech.edu/~george/z4.qsos. (2) Type-2 quasars in the redshift interval z ~ 0.31 - 0.38. (3) Other quasars, starburst and emission-line galaxies, and emission-line stars. (4) Objects with highly peculiar spectra, some or all of which may be rare subtypes of BAL QSOs. (5) Highly variable stars and optical transients, some of which may be GRB ``orphan afterglows''. To date, systematic searches have been made only for (1) and (2); other types of objects were found serendipitously. However, we plan to explore systematically all of the statistically significant outliers in this parameter space.
This illustrates the potential of large digital sky surveys for discovery of rare types of objects, both known (e.g., high-z quasars) and as yet unknown.
On the Nature of People's Reaction to Space Weather and Meteorological Weather Changes
NASA Astrophysics Data System (ADS)
Khabarova, O. V.; Dimitrova, S.
2009-12-01
Our environment includes many natural and artificial agents that affect every person on Earth in one way or another. This work focuses on two of them, weather and space weather, which act permanently. Their cumulative effect is demonstrated by means of modeling: a combination of geomagnetic and solar indices with a weather-strength parameter (which incorporates six main meteorological parameters) correlates with health state significantly better (up to R=0.7) than the separate environmental parameters do. The typical shape of any health characteristic's time series during the body's reaction to a negative impact follows a curve well known in medicine as Hans Selye's General Adaptation Syndrome curve. We demonstrate this using group-averaged blood pressure time series and acupuncture experiment data. The first stage of the adaptive stress reaction (resistance to stress) is sometimes observed 1-2 days before a geomagnetic storm onset. This effect of an "outstripping reaction to a magnetic storm", known as the Tchizhevsky-Velkhover effect, had been recognized for many years, but its explanation was obtained only recently through consideration of near-Earth space plasma processes. It was shown that low-frequency variations of the solar wind density, against a background of growing density, can stimulate geomagnetic field (GMF) variations over a wide frequency range. These variations appear to have "bioeffective frequencies" resonant with the natural frequencies of body organs and systems. The proposed mechanism of the human body's reaction is a parametric resonance in the low-frequency range (determined by resonance in large-scale organs and systems) and a simple forced resonance in the GHz range of variations (the resonance of micro-objects in the organism such as DNA, cell membranes, and blood ions). Examples of mass reactions to ULF-range GMF variations during quiet space weather support this hypothesis.
Improved digital filters for evaluating Fourier and Hankel transform integrals
Anderson, Walter L.
1975-01-01
New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.
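The economy of the lagged-convolution idea can be sketched without real filter weights: when the output arguments share the filter's logarithmic spacing, the kernel argument depends only on the lag, so distinct kernel evaluations collapse from N x M to N + M - 1 (the abscissae and spacing below are illustrative):

```python
import numpy as np

s = 10 ** 0.1                        # shared logarithmic spacing (1/10 decade)
i, j = np.arange(50), np.arange(20)
x = 0.01 * s ** i                    # filter abscissae (N = 50 points)
b = 1.0 * s ** j                     # transform arguments (M = 20 points)

# The kernel argument x_i / b_j depends only on the lag i - j:
lag = np.round(np.log(x[:, None] / b[None, :]) / np.log(s)).astype(int)
n_naive = x.size * b.size            # 1000 kernel evaluations without reuse
n_lagged = len(np.unique(lag))       # 69 = 50 + 20 - 1 with reuse
```

Each convolution output then reuses the shared kernel samples with the appropriate filter weights, which is the source of the reported speedup.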
A Necessary Condition for Coexistence of Autocatalytic Replicators in a Prebiotic Environment
Hernandez, Andres F.; Grover, Martha A.
2013-01-01
A necessary, but not sufficient, mathematical condition for the coexistence of short replicating species is presented here. The mathematical condition is obtained for a prebiotic environment, simulated as a fed-batch reactor, which combines monomer recycling, variable reaction order and a fixed monomer inlet flow with two replicator types and two monomer types. An extensive exploration of the parameter space in the model validates the robustness and efficiency of the mathematical condition, with nearly 1.7% of parameter sets meeting the condition and half of those exhibiting sustained coexistence. The results show that it is possible to generate a condition of coexistence, where two replicators sustain a linear growth simultaneously for a wide variety of chemistries, under an appropriate environment. The presence of multiple monomer types is critical to sustaining the coexistence of multiple replicator types. PMID:25369813
Mapping magnetized geologic structures from space: The effect of orbital and body parameters
NASA Technical Reports Server (NTRS)
Schnetzler, C. C.; Taylor, P. T.; Langel, R. A.
1984-01-01
When comparing previous satellite magnetometer missions (such as MAGSAT) with proposed new programs (for example, the Geopotential Research Mission, GRM) it is important to quantify the difference in scientific information obtained. The ability to resolve separate magnetic blocks (simulating geological units) is used as a parameter for evaluating the expected geologic information from each mission. The effect of satellite orbital altitude on the ability to resolve two magnetic blocks with varying separations is evaluated and quantified. A systematic, nonlinear relationship exists between resolution and the distance between magnetic blocks as a function of orbital altitude. The proposed GRM would provide an order-of-magnitude greater anomaly resolution than the earlier MAGSAT mission for widely separated bodies. The resolution achieved at any particular altitude varies with the location and orientation of the bodies.
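The altitude dependence of resolution can be sketched with a hedged toy example, not the MAGSAT/GRM geometry: upward continuation of a potential-field profile multiplies its Fourier spectrum by exp(-|k| h), so raising the observation altitude h smears nearby sources together. The block positions and altitudes below are arbitrary illustrations.

```python
import numpy as np

def upward_continue(profile, dx, h):
    """Continue a 1-D potential-field profile upward by height h (same units as dx)."""
    k = 2 * np.pi * np.fft.rfftfreq(profile.size, d=dx)
    return np.fft.irfft(np.fft.rfft(profile) * np.exp(-k * h), n=profile.size)

x = np.linspace(-2000, 2000, 4096)        # km
dx = x[1] - x[0]
# two 40 km wide "magnetic blocks" centered at +/-200 km
blocks = ((np.abs(x + 200) < 20) | (np.abs(x - 200) < 20)).astype(float)

low = upward_continue(blocks, dx, 160)    # lower orbit
high = upward_continue(blocks, dx, 400)   # higher orbit

def resolved(a):
    mid = a[a.size // 2]                  # field midway between the blocks
    return mid < 0.95 * a.max()           # a dip between peaks => still separable

print(resolved(low), resolved(high))
```

At the lower altitude a dip survives between the two anomalies, while at the higher altitude the continued field peaks at the midpoint and the blocks are no longer separable, mirroring the nonlinear altitude-resolution relationship described above.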
Stabilizing embedology: Geometry-preserving delay-coordinate maps
NASA Astrophysics Data System (ADS)
Eftekhari, Armin; Yap, Han Lun; Wakin, Michael B.; Rozell, Christopher J.
2018-02-01
Delay-coordinate mapping is an effective and widely used technique for reconstructing and analyzing the dynamics of a nonlinear system based on time-series outputs. The efficacy of delay-coordinate mapping has long been supported by Takens' embedding theorem, which guarantees that delay-coordinate maps use the time-series output to provide a reconstruction of the hidden state space that is a one-to-one embedding of the system's attractor. While this topological guarantee ensures that distinct points in the reconstruction correspond to distinct points in the original state space, it does not characterize the quality of this embedding or illuminate how the specific parameters affect the reconstruction. In this paper, we extend Takens' result by establishing conditions under which delay-coordinate mapping is guaranteed to provide a stable embedding of a system's attractor. Beyond only preserving the attractor topology, a stable embedding preserves the attractor geometry by ensuring that distances between points in the state space are approximately preserved. In particular, we find that delay-coordinate mapping stably embeds an attractor of a dynamical system if the stable rank of the system is large enough to be proportional to the dimension of the attractor. The stable rank reflects the relation between the sampling interval and the number of delays in delay-coordinate mapping. Our theoretical findings give guidance to choosing system parameters, echoing the tradeoff between irrelevancy and redundancy that has been heuristically investigated in the literature. Our initial result is stated for attractors that are smooth submanifolds of Euclidean space, with extensions provided for the case of strange attractors.
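A minimal sketch of the delay-coordinate map itself: from a scalar time series y(t), form vectors [y(t), y(t + tau), ..., y(t + (m-1) tau)]. The signal and the choices m = 3, tau = 8 below are illustrative; the paper's stable-rank condition concerns how the sampling interval and number of delays trade redundancy against irrelevancy.

```python
import numpy as np

def delay_embed(y, m, tau):
    """Map a scalar series to m-dimensional delay-coordinate vectors with lag tau."""
    n = y.size - (m - 1) * tau
    return np.column_stack([y[i * tau : i * tau + n] for i in range(m)])

t = np.linspace(0, 20 * np.pi, 4000)
y = np.sin(t)                      # toy time-series output of a dynamical system
X = delay_embed(y, m=3, tau=8)     # reconstructed state vectors
print(X.shape)                     # (3984, 3)
```

Each row of X is one reconstructed state; for this periodic signal the rows trace out a closed curve, a delay-coordinate image of the underlying circular attractor.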
Investigation of Key Parameters of Rock Cracking Using the Expansion of Vermiculite Materials
Ahn, Chi-Hyung; Hu, Jong Wan
2015-01-01
The demand for the development of underground space has increased sharply since the 1980s, as growing urban populations have saturated the available surface space. The traditional, widely used excavation methods (i.e., explosion and shield) cause many problems, such as noise, vibration, extended schedules, and increased costs. Vibration-free (and explosion-free) excavation methods have therefore attracted attention at construction sites for their potential to resolve these issues definitively. For this reason, a new excavation method that exploits the expansion of vermiculite, with relatively few drawbacks, is proposed in this study. Vermiculite expands rapidly in volume when it receives thermal energy. Expansion pressure can be produced by thermal expansion of vermiculite in a steel tube and measured in laboratory tests. The experiments were performed with various influencing parameters to find the conditions that most effectively increase expansion pressure at a given temperature. Calibrated expansion pressures were then estimated and compared across models. Analysis of the test results verified that heat-expanded vermiculite can provide enough internal pressure to break hard rock during tunneling work. PMID:28793610
Dark Matter in B – L supersymmetric Standard Model with inverse seesaw
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdallah, W.; Khalil, S., E-mail: awaleed@sci.cu.edu.eg, E-mail: s.khalil@zewailcity.edu.eg
We show that the B – L Supersymmetric Standard Model with Inverse Seesaw (BLSSMIS) provides new Dark Matter (DM) candidates (the lightest right-handed sneutrino and the lightest B – L neutralino) with masses of order a few hundred GeV, while most of the rest of the SUSY spectrum can be quite heavy, consistent with current Large Hadron Collider (LHC) constraints. We emphasize that the thermal relic abundance and direct detection experiments via relic neutralino scattering with nuclei impose stringent constraints on the B – L neutralinos. These constraints can be satisfied by only a few points in the parameter space, where the lightest B – L neutralino is higgsino-like and cannot explain the observed Galactic Center (GC) gamma-ray excess measured by Fermi-LAT. The lightest right-handed sneutrino DM is then analysed. We show that for a wide region of parameter space the lightest right-handed sneutrino, with mass between 80 GeV and 1.2 TeV, can satisfy the limits on the relic abundance and the scattering cross section with nuclei. We also show that the lightest right-handed sneutrino with mass O(100) GeV can account for the observed GC gamma-ray results.
Generalization of fewest-switches surface hopping for coherences
NASA Astrophysics Data System (ADS)
Tempelaar, Roel; Reichman, David R.
2018-03-01
Fewest-switches surface hopping (FSSH) is perhaps the most widely used mixed quantum-classical approach for the modeling of non-adiabatic processes, but its original formulation is restricted to (adiabatic) population terms of the quantum density matrix, leaving its implementations with an inconsistency in the treatment of populations and coherences. In this article, we propose a generalization of FSSH that treats both coherence and population terms on equal footing and which formally reduces to the conventional FSSH algorithm for the case of populations. This approach, coherent fewest-switches surface hopping (C-FSSH), employs a decoupling of population relaxation and pure dephasing and involves two replicas of the classical trajectories interacting with two active surfaces. Through extensive benchmark calculations of a spin-boson model involving a Debye spectral density, we demonstrate the potential of C-FSSH to deliver highly accurate results for a large region of parameter space. Its uniform description of populations and coherences is found to resolve incorrect behavior observed for conventional FSSH in various cases, in particular at low temperature, while the parameter space regions where it breaks down are shown to be quite limited. Its computational expenses are virtually identical to conventional FSSH.
Costagli, Mauro; Waggoner, R Allen; Ueno, Kenichi; Tanaka, Keiji; Cheng, Kang
2009-04-15
In functional magnetic resonance imaging (fMRI), even subvoxel motion dramatically corrupts the blood oxygenation level-dependent (BOLD) signal, invalidating the assumption that intensity variation in time is primarily due to neuronal activity. Thus, correction of the subject's head movements is a fundamental step to be performed prior to data analysis. Most motion correction techniques register a series of volumes assuming that rigid body motion, characterized by rotational and translational parameters, occurs. Unlike the most widely used applications for fMRI data processing, which correct motion in the image domain by numerically estimating rotational and translational components simultaneously, the algorithm presented here operates in a three-dimensional k-space, to decouple and correct rotations and translations independently, offering new ways and more flexible procedures to estimate the parameters of interest. We developed an implementation of this method in MATLAB, and tested it on both simulated and experimental data. Its performance was quantified in terms of square differences and center of mass stability across time. Our data show that the algorithm proposed here successfully corrects for rigid-body motion, and its employment in future fMRI studies is feasible and promising.
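The decoupling the abstract describes rests on a standard Fourier property, sketched here as a hedged toy example (not the authors' MATLAB implementation): a rigid translation of the image multiplies each k-space sample by a linear phase ramp and leaves the magnitude |F| unchanged, whereas a rotation rotates |F| itself. Rotations can therefore be estimated from magnitudes first, and translations recovered from phase afterwards.

```python
import numpy as np

img = np.zeros((64, 64))
img[20:40, 24:44] = 1.0                            # toy "brain" image

shifted = np.roll(img, shift=(3, 5), axis=(0, 1))  # integer rigid translation

F = np.fft.fft2(img)
Fs = np.fft.fft2(shifted)

# k-space magnitudes agree despite the translation
print(np.allclose(np.abs(F), np.abs(Fs)))          # True

# the translation appears as a linear phase ramp; read it off at one frequency
ky, kx = 1, 0
phase = np.angle(Fs[ky, kx] / F[ky, kx])
print(round(-phase * 64 / (2 * np.pi)))            # recovered shift along axis 0: 3
```

Because Fs(k) = F(k) exp(-2 pi i k.s / N) for a circular shift s, the phase slope across k-space encodes the translation exactly, which is why estimating translations after rotations is well posed in this domain.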
The swiss army knife of job submission tools: grid-control
NASA Astrophysics Data System (ADS)
Stober, F.; Fischer, M.; Schleper, P.; Stadie, H.; Garbers, C.; Lange, J.; Kovalchuk, N.
2017-10-01
grid-control is a lightweight and highly portable open source submission tool that supports all common workflows in high energy physics (HEP). It has been used by a sizeable number of HEP analyses to process tasks that sometimes consist of up to 100k jobs. grid-control is built around a powerful plugin and configuration system, that allows users to easily specify all aspects of the desired workflow. Job submission to a wide range of local or remote batch systems or grid middleware is supported. Tasks can be conveniently specified through the parameter space that will be processed, which can consist of any number of variables and data sources with complex dependencies on each other. Dataset information is processed through a configurable pipeline of dataset filters, partition plugins and partition filters. The partition plugins can take the number of files, size of the work units, metadata or combinations thereof into account. All changes to the input datasets or variables are propagated through the processing pipeline and can transparently trigger adjustments to the parameter space and the job submission. While the core functionality is completely experiment independent, full integration with the CMS computing environment is provided by a small set of plugins.
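The parameter-space idea can be illustrated generically; this is a hedged sketch of the concept, not grid-control's actual configuration syntax: a task's parameter space is the cartesian product of its variables, and each point becomes one job.

```python
from itertools import product

# hypothetical variables and dataset paths, for illustration only
space = {
    "DATASET": ["/sample/A", "/sample/B"],
    "SEED": [1, 2, 3],
}
jobs = [dict(zip(space, values)) for values in product(*space.values())]
print(len(jobs))  # 2 datasets x 3 seeds = 6 jobs
```

In the real tool, dataset partitioning plugins and inter-variable dependencies reshape this product, and changes to inputs propagate through the pipeline to adjust the job list.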
VizieR Online Data Catalog: The HIZOA-S survey (Staveley-Smith+, 2016)
NASA Astrophysics Data System (ADS)
Staveley-Smith, L.; Kraan-Korteweg, R. C.; Schroder, A. C.; Henning, P. A.; Koribalski, B. S.; Stewart, I. M.; Heald, G.
2016-07-01
The observations described here were taken with the 21cm multibeam receiver at the 64m Parkes radio telescope between 1997 March 22 and 2000 June 8, contemporaneously with the southern component of HIPASS. The observations cover the Galactic longitude range 212°
Understanding Epileptiform After-Discharges as Rhythmic Oscillatory Transients.
Baier, Gerold; Taylor, Peter N; Wang, Yujiang
2017-01-01
Electro-cortical activity in patients with epilepsy may show abnormal rhythmic transients in response to stimulation. Even when using the same stimulation parameters in the same patient, wide variability in the duration of the transient response has been reported. These transients have long been considered important for mapping excitability levels in the epileptic brain, but their dynamic mechanism is still not well understood. To investigate the occurrence of abnormal transients dynamically, we use a thalamo-cortical neural population model of epileptic spike-wave activity and study the interaction between slow and fast subsystems. In a reduced version of the thalamo-cortical model, slow wave oscillations arise from a fold of cycles (FoC) bifurcation. This marks the onset of a region of bistability between a high amplitude oscillatory rhythm and the background state. In the vicinity of the bistability in parameter space, the model has excitable dynamics, showing prolonged rhythmic transients in response to suprathreshold pulse stimulation. We analyse the state space geometry of the bistable and excitable states, and find that the rhythmic transient arises when the impending FoC bifurcation deforms the state space and creates an area of locally reduced attraction to the fixed point. This area essentially allows trajectories to dwell there before escaping to the stable steady state, thus creating rhythmic transients. In the full thalamo-cortical model, we find a similar FoC bifurcation structure. Based on the analysis, we propose an explanation of why stimulation induced epileptiform activity may vary between trials, and predict how the variability could be related to ongoing oscillatory background activity. We compare our dynamic mechanism with other mechanisms (such as a slow parameter change) to generate excitable transients, and we discuss the proposed excitability mechanism in the context of stimulation responses in the epileptic cortex.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehghani, M.H.; Research Institute for Astrophysics and Astronomy of Maragha; Khodam-Mohammadi, A.
First, we construct the Taub-NUT/bolt solutions of (2k+2)-dimensional Einstein-Maxwell gravity, when all the factor spaces of 2k-dimensional base space B have positive curvature. These solutions depend on two extra parameters, other than the mass and the NUT charge. These are electric charge q and electric potential at infinity V. We investigate the existence of Taub-NUT solutions and find that in addition to the two conditions of uncharged NUT solutions, there exist two extra conditions. These two extra conditions come from the regularity of vector potential at r=N and the fact that the horizon at r=N should be the outer horizon of the NUT charged black hole. We find that the NUT solutions in 2k+2 dimensions have no curvature singularity at r=N, when the 2k-dimensional base space is chosen to be CP^{2k}. For bolt solutions, there exists an upper limit for the NUT parameter which decreases as the potential parameter increases. Second, we study the thermodynamics of these spacetimes. We compute temperature, entropy, charge, electric potential, action and mass of the black hole solutions, and find that these quantities satisfy the first law of thermodynamics. We perform a stability analysis by computing the heat capacity, and show that the NUT solutions are not thermally stable for even k's, while there exists a stable phase for odd k's, which becomes increasingly narrow with increasing dimensionality and wide with increasing V. We also study the phase behavior of the 4 and 6 dimensional bolt solutions in canonical ensemble and find that these solutions have a stable phase, which becomes smaller as V increases.
NASA Astrophysics Data System (ADS)
Carrano, C. S.; Groves, K. M.; Basu, S.; Mackenzie, E.; Sheehan, R. E.
2013-12-01
In a previous work, we demonstrated that ionospheric turbulence parameters may be inferred from amplitude scintillations well into in the strong scatter regime [Carrano et al., International Journal of Geophysics, 2012]. This technique, called Iterative Parameter Estimation (IPE), uses the strong scatter theory and numerical inversion to estimate the parameters of an ionospheric phase screen (turbulent intensity, phase spectral index, and irregularity zonal drift) consistent with the observed scintillations. The optimal screen parameters are determined such that the theoretical intensity spectrum on the ground best matches the measured intensity spectrum in a least squares sense. We use this technique to interpret scintillation measurements collected during a campaign at Ascension Island (7.96°S, 14.41°W) in March 2000, led by Santimay Basu and his collaborators from Air Force Research Laboratory. Geostationary satellites broadcasting radio signals at VHF and L-band were monitored along nearly co-linear links, enabling a multi-frequency analysis of scintillations with the same propagation geometry. The VHF data were acquired using antennas spaced in the magnetic east-west direction, which enabled direct measurement of the zonal irregularity drift. We show that IPE analysis of the VHF and L-Band scintillations, which exhibited very different statistics due to the wide frequency separation, yields similar estimates of the phase screen parameters that specify the disturbed ionospheric medium. This agreement provides confidence in our phase screen parameter estimates. It also suggests a technique for extrapolating scintillation measurements to frequencies other than those observed that is valid in the case of strong scatter. We find that IPE estimates of the zonal irregularity drift, made using scintillation observations along single space-to-ground link, are consistent with those measured independently using the spaced antenna technique. 
This encouraging result suggests one may measure the zonal irregularity drift at scintillation monitoring stations equipped with only a single channel receiver, where the spaced-antenna technique cannot be employed. We noted that the scintillation index (S4) at L-band commonly exceeded that at VHF early in the evening when the irregularities were most intense, followed by one or more reversals of this trend at later local times as aging irregularities decayed and newly formed bubbles drifted over the station. We use the strong scatter theory to explain this perhaps counter-intuitive situation (one would normally expect a higher S4 at the lower frequency) in terms of strong refractive focusing.
Some features of the fabrication of multilayer fiber composites by explosive welding
NASA Technical Reports Server (NTRS)
Kotov, V. A.; Mikhaylov, A. N.; Cabelka, D.
1985-01-01
The fabrication of multilayer fiber composites by explosive welding is characterized by intense plastic deformation of the matrix material as it fills the spaces between fibers and by high velocity of the collision between matrix layers due to acceleration in the channels between fibers. The plastic deformation of the matrix layers and fiber-matrix friction provide mechanical and thermal activation of the contact surfaces, which contributes to the formation of a bond. An important feature of the process is that the fiber-matrix adhesion strength can be varied over a wide range by varying the parameters of impulsive loading.
High pressure water jet cutting and stripping
NASA Technical Reports Server (NTRS)
Hoppe, David T.; Babai, Majid K.
1991-01-01
High pressure water cutting techniques have a wide range of applications to the American space effort. Hydroblasting techniques are commonly used during the refurbishment of the reusable solid rocket motors. The process can be controlled to strip a thermal protective ablator without incurring any damage to the painted surface underneath by using a variation of possible parameters. Hydroblasting is a technique which is easily automated. Automation removes personnel from the hostile environment of the high pressure water. Computer controlled robots can perform the same task in a fraction of the time that would be required by manual operation.
Information at the edge of chaos in fluid neural networks
NASA Astrophysics Data System (ADS)
Solé, Ricard V.; Miramontes, Octavio
1995-01-01
Fluid neural networks, defined as neural nets of mobile elements with random activation, are studied by means of several approaches. They are proposed as a theoretical framework for a wide class of systems as insect societies, collectives of robots or the immune system. The critical properties of this model are also analysed, showing the existence of a critical boundary in parameter space where maximum information transfer occurs. In this sense, this boundary is in fact an example of the “edge of chaos” in systems like those described in our approach. Recent experiments with ant colonies seem to confirm our result.
NASA Astrophysics Data System (ADS)
Guangfa, Gao; Yongchi, Li; Zheng, Jing; Shujie, Yuan
Fiber reinforced composite materials are widely used in aircraft and space vehicle engineering. For an advanced glass fiber reinforced composite material, a series of experiments was conducted to measure its thermophysical properties, and the corresponding performance curves were obtained through statistical analysis. The experimental results showed good consistency. Thermophysical parameters such as the thermal expansion coefficient, engineering specific heat, and sublimation heat were then derived. This investigation provides an important foundation for further research on the heat resistance and thermodynamic performance of this material.
Charged mediators in dark matter scattering
NASA Astrophysics Data System (ADS)
Stengel, Patrick
2017-11-01
We consider a scenario, within the framework of the MSSM, in which dark matter is bino-like and dark matter-nucleon spin-independent scattering occurs via the exchange of light squarks which exhibit left-right mixing. We show that direct detection experiments such as LUX and SuperCDMS will be sensitive to a wide class of such models through spin-independent scattering. The dominant nuclear physics uncertainty is the quark content of the nucleon, particularly the strangeness content. We also investigate parameter space with nearly degenerate neutralino and squark masses, thus enhancing dark matter annihilation and nucleon scattering event rates.
Cell and Particle Interactions and Aggregation During Electrophoretic Motion
NASA Technical Reports Server (NTRS)
Wang, Hua; Zeng, Shulin; Loewenberg, Michael; Todd, Paul; Davis, Robert H.
1996-01-01
The stability and pairwise aggregation rates of small spherical particles under the collective effects of buoyancy-driven motion and electrophoretic migration are analyzed. The particles are assumed to be non-Brownian, with thin double-layers and different zeta potentials. The particle aggregation rates may be enhanced or reduced, respectively, by parallel and antiparallel alignments of the buoyancy-driven and electrophoretic velocities. For antiparallel alignments, with the buoyancy-driven relative velocity exceeding the electrophoretic relative velocity between two widely-separated particles, there is a 'collision-forbidden region' in parameter space due to hydrodynamic interactions; thus, the suspension becomes stable against aggregation.
Millimeter- and submillimeter-wave characterization of various fabrics.
Dunayevskiy, Ilya; Bortnik, Bartosz; Geary, Kevin; Lombardo, Russell; Jack, Michael; Fetterman, Harold
2007-08-20
Transmission measurements of 14 fabrics are presented in the millimeter-wave and submillimeter-wave electromagnetic regions from 130 GHz to 1.2 THz. Three independent sources and experimental set-ups were used to obtain accurate results over a wide spectral range. Reflectivity, a useful parameter for imaging applications, was also measured for a subset of samples in the submillimeter-wave regime along with polarization sensitivity of the transmitted beam and transmission through doubled layers. All of the measurements were performed in free space. Details of these experimental set-ups along with their respective challenges are presented.
Design of collection optics and polychromators for a JT-60SA Thomson scattering system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tojo, H.; Hatae, T.; Sakuma, T.
2010-10-15
This paper presents designs of collection optics for a JT-60SA Thomson scattering system. By using tangential (to the toroidal direction) YAG laser injection, three collection optics without strong chromatic aberration generated by the wide viewing angle and small design volume were found to measure almost all the radial space. For edge plasma measurements, the authors optimized the channel number and wavelength ranges of band-pass filters in a polychromator to reduce the relative error in T_e by considering all spatial channels and a double-pass laser system with different geometric parameters.
Parameter redundancy in discrete state-space and integrated models.
Cole, Diana J; McCrea, Rachel S
2016-09-01
Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
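The redundancy test sketched in the abstract can be illustrated with a hedged toy example, not the paper's models: form an exhaustive summary kappa(theta), take its Jacobian with respect to the parameters, and compare its rank with the number of parameters. In the hypothetical model below, detection p and survival phi enter only through the product p*phi, so the single-data-set model is parameter redundant, while adding a second data set that observes phi directly restores full rank.

```python
import numpy as np

def jacobian(f, theta, eps=1e-6):
    """Forward-difference Jacobian of a vector-valued summary f at theta."""
    theta = np.asarray(theta, dtype=float)
    f0 = np.asarray(f(theta), dtype=float)
    J = np.zeros((f0.size, theta.size))
    for j in range(theta.size):
        t = theta.copy()
        t[j] += eps
        J[:, j] = (np.asarray(f(t), dtype=float) - f0) / eps
    return J

def summary_single(th):          # only the product p*phi is identifiable here
    p, phi = th
    return [p * phi, (p * phi) ** 2]

def summary_integrated(th):      # a second data set observes phi directly
    p, phi = th
    return [p * phi, (p * phi) ** 2, phi]

theta = [0.6, 0.8]
# tolerance absorbs the finite-difference error in the numerical Jacobian
rank = lambda f: np.linalg.matrix_rank(jacobian(f, theta), tol=1e-4)
print(rank(summary_single), rank(summary_integrated))  # 1 2
```

Rank 1 with 2 parameters flags redundancy (only p*phi is estimable); the integrated summary reaches rank 2, matching the paper's point that combining data sets can make all parameters estimable.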
The H,G_1,G_2 photometric system with scarce observational data
NASA Astrophysics Data System (ADS)
Penttilä, A.; Granvik, M.; Muinonen, K.; Wilkman, O.
2014-07-01
The H,G_1,G_2 photometric system was officially adopted at the IAU General Assembly in Beijing, 2012. The system replaced the H,G system from 1985. The 'photometric system' is a parametrized model V(α; params) for the magnitude-phase relation of small Solar System bodies, and the main purpose is to predict the magnitude at backscattering, H := V(0°), i.e., the (absolute) magnitude of the object. The original H,G system was designed using the best available data in 1985, but since then new observations have been made showing certain features, especially near backscattering, to which the H,G function has trouble adjusting. The H,G_1,G_2 system was developed especially to address these issues [1]. With a sufficient number of high-accuracy observations and with a wide phase-angle coverage, the H,G_1,G_2 system performs well. However, with scarce low-accuracy data the system has trouble producing a reliable fit, as would any other three-parameter nonlinear function. Therefore, simultaneously with the H,G_1,G_2 system, a two-parameter version of the model, the H,G_{12} system, was introduced [1]. The two-parameter version ties the parameters G_1,G_2 into a single parameter G_{12} by a linear relation, and still uses the H,G_1,G_2 system in the background. This version dramatically improves the possibility of obtaining a reliable phase-curve fit to scarce data. The number of observed small bodies is increasing all the time, and so is the need to produce estimates for the absolute magnitude/diameter/albedo and other size/composition related parameters. The lack of small-phase-angle observations is especially topical for near-Earth objects (NEOs). With these, even the two-parameter version faces problems. The previous procedure with the H,G system in such circumstances has been that the G-parameter has been fixed to some constant value, thus only fitting a single-parameter function.
In conclusion, there is a definitive need for a reliable procedure to produce photometric fits to very scarce and low-accuracy data. There are a few details that should be considered with the H,G_1,G_2 or H,G_{12} systems with scarce data. The first point is the distribution of errors in the fit. The original H,G system allowed linear regression in the flux space, thus making the estimation computationally easier. The same principle was repeated with the H,G_1,G_2 system. There is, however, a major hidden assumption in the transformation. With regression modeling, the residuals should be distributed symmetrically around zero. If they are normally distributed, even better. We have noticed that, at least with some NEO observations, the residuals in the flux space are far from symmetric, and seem to be much more symmetric in the magnitude space. The result is that the nonlinear fit in magnitude space is far more reliable than the linear fit in the flux space. Since modern computers and nonlinear regression algorithms are efficient enough, we conclude that, in many cases with low-accuracy data, the nonlinear fit should be favored. In fact, there are statistical procedures that should be employed with the photometric fit. At the moment, the choice between the three-parameter and two-parameter versions is simply based on subjective decision-making. By checking parameter errors and model comparison statistics, the choice could be made objectively. Similarly, the choice between the linear fit in flux space and the nonlinear fit in magnitude space should be based on a statistical test of unbiased residuals. Furthermore, the so-called Box-Cox transform could be employed to find an optimal transformation somewhere between the magnitude and flux spaces. The H,G_1,G_2 system is based on cubic splines, and is therefore a bit more complicated to implement than a system with simpler basis functions.
The same applies to a complete program that would automatically choose the best transformation for the data, test whether the two- or three-parameter version of the model should be fitted, and produce the fitted parameters with their error estimates. Our group has already made implementations of the H,G_1,G_2 system publicly available [2]. We plan to implement the abovementioned improvements to the system and also make these tools public.
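The nonlinear magnitude-space fit advocated above can be illustrated with the simpler two-parameter H,G model (the H,G_1,G_2 cubic-spline basis is omitted here for brevity), using the standard Bowell-style approximations for the basis phase functions. The sketch below fits H and G directly to sparse, noisy synthetic magnitudes; the observation values are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# standard approximations of the two H,G basis phase functions
A = (3.33, 1.87)
B = (0.63, 1.22)

def phi(alpha_rad, i):
    return np.exp(-A[i] * np.tan(alpha_rad / 2.0) ** B[i])

def hg_mag(alpha_deg, H, G):
    # reduced magnitude V(alpha); note V(0) = H by construction
    a = np.radians(alpha_deg)
    return H - 2.5 * np.log10((1.0 - G) * phi(a, 0) + G * phi(a, 1))

# synthetic scarce observations: 5 phase angles, 0.05-mag noise
rng = np.random.default_rng(0)
alpha = np.array([5.0, 12.0, 20.0, 35.0, 50.0])
mags = hg_mag(alpha, 15.2, 0.15) + rng.normal(0.0, 0.05, alpha.size)

# nonlinear least squares directly in magnitude space
(H_fit, G_fit), pcov = curve_fit(hg_mag, alpha, mags, p0=(15.0, 0.15))
```

The diagonal of `pcov` gives the parameter error estimates mentioned in the text, which could feed an objective model-selection test.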
An all-reflective wide-angle flat-field telescope for space
NASA Technical Reports Server (NTRS)
Hallam, K. L.; Howell, B. J.; Wilson, M. E.
1984-01-01
An all-reflective wide-angle flat-field telescope (WAFFT) designed and built at Goddard Space Flight Center demonstrates the markedly improved wide-angle imaging capability which can be achieved with a design based on a recently announced class of unobscured 3-mirror optical systems. Astronomy and earth observation missions in space dictate the necessity or preference for wide-angle all-reflective systems which can provide UV through IR wavelength coverage and tolerate the space environment. An initial prototype unit has been designed to meet imaging requirements suitable for monitoring the ultraviolet sky from space. The unobscured f/4, 36 mm efl system achieves a full 20 x 30 deg field of view with resolution over a flat focal surface that is well matched for use with advanced ultraviolet image array detectors. Aspects of the design and fabrication approach, which have especially important bearing on the system solution, are reviewed; and test results are compared with the analytic performance predictions. Other possible applications of the WAFFT class of imaging system are briefly discussed. The exceptional wide-angle, high quality resolution, and very wide spectral coverage of the WAFFT-type optical system could make it a very important tool for future space research.
Overview of Photonic Materials for Application in Space Environments
NASA Technical Reports Server (NTRS)
Taylor, E. W.; Osinski, M.; Svimonishvili, Tengiz; Watson, M.; Bunton, P.; Pearson, S. D.; Bilbro, J.
1999-01-01
Future space systems will be based on components evolving from the development and refinement of new and existing photonic materials. Optically based sensors, inertial guidance, tracking systems, communications, diagnostics, imaging and high speed optical processing are but a few of the applications expected to widely utilize photonic materials. The response of these materials to space environment effects (SEE) such as spacecraft charging, orbital debris, atomic oxygen, ultraviolet irradiation, temperature and ionizing radiation will be paramount to ensuring successful space applications. The intent of this paper is to address the latter two environments via a succinct comparison of the known sensitivities of selected photonic materials to the temperature and ionizing radiation conditions found in space and enhanced space environments. Delineation of the known temperature- and radiation-induced responses in LiNbO3, AlGaN, AlGaAs, TeO2, Si:Ge, and several organic polymers is presented. Photonic materials are realizing rapid transition into applications for many proposed space components and systems including: optical interconnects, optical gyros, waveguide and spatial light modulators, light emitting diodes, lasers, optical fibers and fiber optic amplifiers. Changes to material parameters such as electrooptic coefficients, absorption coefficients, polarization, conductivity, coupling coefficients, diffraction efficiencies, and other pertinent material properties are examined for thermo-optic and radiation-induced effects. Conclusions and recommendations provide the reader with an understanding of the limitations or attributes of material choices for specific applications.
Preliminary results on the dynamics of large and flexible space structures in Halo orbits
NASA Astrophysics Data System (ADS)
Colagrossi, Andrea; Lavagna, Michèle
2017-05-01
The global exploration roadmap suggests, among other ambitious future space programmes, a possible manned outpost in lunar vicinity, to support surface operations and further astronaut training for longer and deeper space missions and transfers. In particular, a Lagrangian point orbit location - in the Earth-Moon system - is suggested for a manned cis-lunar infrastructure; a proposal which opens an interesting field of study from the astrodynamics perspective. The literature offers a wide set of scientific research on orbital dynamics under the Three-Body Problem modelling approach, while less of it includes the attitude dynamics modelling as well. However, whenever a large space structure (ISS-like) is considered, not only should the coupled orbit-attitude dynamics be modelled to run more accurate analyses, but the structural flexibility should be included too. The paper, starting from the well-known Circular Restricted Three-Body Problem formulation, presents some preliminary results obtained by adding a coupled orbit-attitude dynamical model and the effects due to the large structure's flexibility. In addition, the most relevant perturbing phenomena, such as the Solar Radiation Pressure (SRP) and the fourth-body (Sun) gravity, are included in the model as well. A multi-body approach has been preferred to represent possible configurations of the large cis-lunar infrastructure: interconnected simple structural elements - such as beams, rods or lumped masses linked by springs - build up the space segment. To better investigate the relevance of the flexibility effects, the lumped-parameters approach is compared with a distributed-parameters semi-analytical technique. A sensitivity analysis of the system dynamics, with respect to different configurations and mechanical properties of the extended structure, is also presented, in order to highlight drivers for the lunar outpost design.
Furthermore, a case study for a large and flexible space structure in Halo orbits around one of the Earth-Moon collinear Lagrangian points, L1 or L2, is discussed to point out some relevant outcomes for the potential implementation of such a mission.
NASA Astrophysics Data System (ADS)
Kalanov, Temur Z.
2003-04-01
A new theory of space is suggested. It represents a new point of view which has arisen from the critical analysis of the foundations of physics (in particular the theory of relativity and quantum mechanics), mathematics, cosmology and philosophy. The main idea following from the analysis is that the concept of movement represents a key to understanding the essence of space. The starting point of the theory is represented by the following philosophical (dialectical materialistic) principles. (a) The principle of the materiality (of the objective reality) of Nature: Nature (the Universe) is a system (a set) of material objects (particles, bodies, fields); each object has properties and features, and the properties and features are inseparable characteristics of a material object and belong only to a material object. (b) The principle of the existence of a material object: an object exists as objective reality, and movement is a form of existence of an object. (c) The principle (definition) of movement of an object: movement is change (i.e. transition of some states into others) in general; movement determines a direction, and direction characterizes movement. (d) The principle of the existence of time: time exists as a parameter of the system of reference. These principles lead to the following statements expressing the essence of space. (1) There is no space in general; space exists only as a form of existence of the properties and features of an object. It means that space is a set of the measures of the object (the measure is the philosophical category meaning the unity of the qualitative and quantitative determinacy of the object). In other words, the space of the object is the set of the states of the object. (2) The states of the object are manifested only in a system of reference.
The main informational property of the unitary system 'researched physical object + system of reference' is that the system of reference determines (measures, calculates) the parameters of the subsystem 'researched physical object' (for example, the coordinates of the object M); the parameters characterize the system of reference (for example, the system of coordinates S). (3) Each parameter of the object is its measure. The total number of the mutually independent parameters of the object is called the dimension of the space of the object. (4) The set of numerical values (i.e. the range, the spectrum) of each parameter is a subspace of the object. (The coordinate space, the momentum space and the energy space are examples of subspaces of the object.) (5) The set of the parameters of the object is divided into two non-intersecting (opposite) classes: the class of the internal parameters and the class of the non-internal (i.e. external) parameters. The class of the external parameters is divided into two non-intersecting (opposite) subclasses: the subclass of the absolute parameters (characterizing the form and sizes of the object) and the subclass of the non-absolute (relative) parameters (characterizing the position and coordinates of the object). (6) The set of the external parameters forms the external space of the object. It is called the geometrical space of the object. (7) Since a macroscopic object has three mutually independent sizes, the dimension of its external absolute space is equal to three. Consequently, the dimension of its external relative space is also equal to three. Thus, the total dimension of the external space of the macroscopic object is equal to six. (8) In the general case, the external absolute space (i.e. the form, the sizes) and the external relative space (i.e. the position, the coordinates) of any object are mutually dependent because of the influence of a medium. The geometrical space of such an object is called a non-Euclidean space.
If the external absolute space and the external relative space of some object are mutually independent, then the external relative space of such an object is a homogeneous and isotropic geometrical space. It is called the Euclidean space of the object. Consequences: (i) the question of the true geometry of the Universe is incorrect; (ii) the theory of relativity has no physical meaning.
NASA Astrophysics Data System (ADS)
Atanasov, Victor
2017-07-01
We extend the superconductor's free energy to include an interaction of the order parameter with the curvature of space-time. This interaction leads to a geometry-dependent coherence length and Ginzburg-Landau parameter, which suggests that the curvature of space-time can change the superconductor's type. The curvature of space-time does not affect the ideal diamagnetism of the superconductor but acts as a chemical potential. In a particular circumstance, the geometric field becomes order-parameter dependent; therefore the superconductor's order-parameter dynamics affects the curvature of space-time, and electrical or internal quantum mechanical energy can be channelled into the curvature of space-time. Experimental consequences are discussed.
Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters
Liu, Fei; Heiner, Monika; Yang, Ming
2016-01-01
Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information. PMID:26910830
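The fuzzy-parameter idea can be sketched without the Petri-net machinery. The minimal example below is an assumption-laden illustration, not the FSPN method itself: it represents one uncertain kinetic rate as a triangular fuzzy number (values made up) and propagates its α-cuts through a monotone model output, the mean of a simple pure-death process, to get output uncertainty bands:

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at membership level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

# hypothetical uncertain degradation rate (1/min) as a triangular fuzzy number
k_fuzzy = (0.08, 0.10, 0.14)

def mean_molecules(k, t=10.0, x0=100.0):
    # mean of a pure-death process with rate k (randomness averaged out)
    return x0 * np.exp(-k * t)

# output band at each membership level; the output is decreasing in k,
# so the interval endpoints swap when mapped through the model
bands = {alpha: (mean_molecules(hi), mean_molecules(lo))
         for alpha in (0.0, 0.5, 1.0)
         for lo, hi in [alpha_cut(k_fuzzy, alpha)]}
```

In a full FSPN analysis the deterministic mean above would be replaced by stochastic simulation runs at each sampled rate, yielding fuzzy bands on the simulated output statistics.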
Neuner, Ralf; Seidel, Hans-Joachim
2006-07-01
The focus of our study was the assessment of the effects of spatial relocation on office staff. Our aim was to investigate whether psychosocial or personal factors are better predictors of the occurrence of impaired well-being. Before relocation, the administration of the university hospital of Ulm (Germany) was located in ten different buildings. Chemical and physical parameters of the indoor air were measured. The employees were surveyed with a questionnaire on their health status and psychosocial determinants. After moving to a new wide-spaced building, the same procedure was reapplied shortly afterwards and half a year later. Only respondents who had taken part in all three surveys were taken into account (n=84). The definition of impaired well-being as defined by the ProKlimA study group was used as the criterion variable. The overall prevalence of impaired well-being rose from 24% to 36% after relocation. In contrast, persons who were formerly accommodated in a wide-spaced building showed a reduced risk (OR(post1)=0.3). Affected persons at all times had a more negative response pattern. Chemical and physical parameters did not have any influence in this context. The adaptation to a new environment is influenced by the old "socialization" of the former buildings. Impaired well-being is not limited to bodily complaints; rather, it has a systemic character in the form of a distinctive overall response pattern. For an adequate analysis of impaired well-being - and of the sick building syndrome in consequence - the elucidation of individual and other potentially intervening factors is essential. Taking this into consideration, the search for norm values or a framework seems to be of limited value.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yifan; Apai, Dániel; Schneider, Glenn
The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, which limits its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach a thermal equilibrium. We show that the ramp-effect profiles can be explained and corrected with high fidelity using charge trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (∼40 visits) of WFC3 observations and is shown to provide near photon noise limited corrections for observations made with both staring and scanning modes of transiting exoplanets as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from the extension of this model if similar systematic profiles are observed.
NASA Astrophysics Data System (ADS)
Zhou, Yifan; Apai, Dániel; Lew, Ben W. P.; Schneider, Glenn
2017-06-01
The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, which limits its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach a thermal equilibrium. We show that the ramp-effect profiles can be explained and corrected with high fidelity using charge trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (∼40 visits) of WFC3 observations and is shown to provide near photon noise limited corrections for observations made with both staring and scanning modes of transiting exoplanets as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from the extension of this model if similar systematic profiles are observed.
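A toy version of such a charge-trap ramp (not the authors' actual detector model; the trap count, time constant, and trapping efficiency below are made-up illustrative values) shows how a fixed trap population that fills up during an orbit produces the characteristic upward ramp in measured flux:

```python
import numpy as np

def ramp(t, f_true, tau=200.0, eta=0.05):
    # traps fill exponentially as charge flows through the pixel;
    # charge captured by still-empty traps is lost from the measured signal
    empty_frac = np.exp(-t / tau)   # fraction of traps still empty at time t
    loss = eta * empty_frac         # fraction of the signal trapped per sample
    return f_true * (1.0 - loss)

t = np.arange(0.0, 1000.0, 10.0)      # seconds since the start of the orbit
measured = ramp(t, f_true=1000.0)     # counts ramp up toward the true flux
```

Once the trap parameters are fitted, dividing the observed light curve by this profile would flatten the ramp, which is the sense in which a physically motivated model can replace empirical fits and recover the first orbit.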
Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.
Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang
2017-11-01
Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the locally linear embedding in the original feature space, and then quantize such an embedding to binary codes. Such two-step coding is problematic and suboptimal. Besides, the offline learning is extremely time- and memory-consuming, since it needs to compute the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which well addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown the superior performance of the proposed DLLH over state-of-the-art approaches.
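The core approximation the abstract refers to, real-valued distances replaced by Hamming distances between binary codes, can be sketched as follows. The sign-thresholded random projection below is a stand-in hash (LSH-style), not the learned DLLH embedding, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical 64-dim real-valued features for a small database
db = rng.normal(size=(1000, 64))
query = db[0] + 0.01 * rng.normal(size=64)   # near-duplicate of item 0

# stand-in hash function: random hyperplanes + sign thresholding;
# DLLH would instead learn codes preserving local linear structure
planes = rng.normal(size=(64, 32))

def encode(x):
    return (x @ planes > 0).astype(np.uint8)  # 32-bit binary code

codes = encode(db)
q = encode(query)

# Hamming distance = number of differing bits; cheap compared to Euclidean
hamming = np.count_nonzero(codes != q, axis=1)
nearest = int(np.argmin(hamming))
```

The point of the binary representation is that this ranking costs bit comparisons instead of floating-point arithmetic, which is what makes large-scale search tractable.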
Frequency Domain Beamforming for a Deep Space Network Downlink Array
NASA Technical Reports Server (NTRS)
Navarro, Robert
2012-01-01
This paper describes a frequency domain beamformer, currently in development, that will array up to 8 antennas of NASA's Deep Space Network. The objective of this array is to replace and enhance the capability of the DSN 70m antennas with multiple 34m antennas for telemetry, navigation and radio science use. The array will coherently combine the entire 500 MHz of usable bandwidth available to DSN receivers. A frequency domain beamforming architecture was chosen over a time domain based architecture to handle the large signal bandwidth and efficiently perform delay and phase calibration. The antennas of the DSN are spaced far enough apart that random atmospheric and phase variations between antennas need to be calibrated out on an ongoing basis in real time. The calibration is done using measurements obtained from a correlator. This DSN Downlink Array expands upon a proof-of-concept breadboard array built previously to develop the technology and will become an operational asset of the Deep Space Network. Design parameters for frequency channelization, array calibration and delay corrections will be presented, as well as a method to efficiently calibrate the array for both wide and narrow bandwidth telemetry.
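The delay-and-sum operation at the heart of a frequency domain beamformer, stripped here of channelization and correlator-based calibration, amounts to multiplying each antenna's spectrum by a linear phase ramp before summing. A minimal numpy sketch, assuming the per-antenna delays are already known:

```python
import numpy as np

def beamform(signals, delays, fs):
    """Align each antenna signal by its known delay (seconds) and sum coherently."""
    n_ant, n = signals.shape
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    acc = np.zeros(freqs.size, dtype=complex)
    for sig, d in zip(signals, delays):
        # a delay of d seconds is a linear phase e^{-2*pi*i*f*d};
        # multiply by the conjugate ramp to undo it before summing
        acc += np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * d)
    return np.fft.irfft(acc, n) / n_ant

fs = 1000.0
t = np.arange(2048) / fs
tone = np.sin(2 * np.pi * 50.0 * t)
shifts = [0, 3, 7, 12]                       # integer-sample delays per antenna
signals = np.stack([np.roll(tone, s) for s in shifts])
combined = beamform(signals, [s / fs for s in shifts], fs)
```

The frequency-domain form handles fractional-sample delays for free (the phase ramp is continuous in d), which is one reason it suits wideband arrays better than time-domain sample shifting.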
Recent Geoeffective Space Weather Events and Technological System Impacts
NASA Astrophysics Data System (ADS)
Redmon, R. J.; Denig, W. F.; Loto'aniu, P. T. M.; Singer, H. J.; Wilkinson, D. C.; Knipp, D. J.; Kilcommons, L. M.
2015-12-01
We review the state of the space environment for three recent intense geoeffective storms using NOAA observations and model predictions. On February 27, 2014, the US Wide Area Augmentation System (WAAS) navigation service over eastern Alaska and northeastern continental US was degraded due to a strong ionospheric storm. Similarly, on March 17, the St. Patrick's Day geomagnetic storm commenced, resulting in the most intense storm of the solar cycle to date with mid-latitude auroral sightings, intense ionospheric irregularities and WAAS degradation. On June 22, a strong (G4) geomagnetic storm commenced following the impact of 3 coronal mass ejections (CMEs). Late on June 22, solar protons entered the polar regions along open magnetic field lines producing intense radio absorption. We summarize, compare and contrast the space environmental state for each of these events from the perspective of NOAA observations and model predictions. We do so by leveraging GOES and POES/MetOp observations of the space radiation environment, DMSP observations of precipitating particles and bulk plasma parameters, OVATION Prime predictions of the auroral energy input and the US Total Electron Content (USTEC) and D-Region Absorption Prediction (DRAP) modeled response of the ionosphere. We discuss impacts to technological systems as available.
A programmable power processor for high power space applications
NASA Technical Reports Server (NTRS)
Lanier, J. R., Jr.; Graves, J. R.; Kapustka, R. E.; Bush, J. R., Jr.
1982-01-01
A Programmable Power Processor (P3) has been developed for application in future large space power systems. The P3 is capable of operation over a wide range of input voltage (26 to 375 Vdc) and output voltage (24 to 180 Vdc). The peak output power capability is 18 kW (180 V at 100 A). The output characteristics of the P3 can be programmed to any voltage and/or current level within the limits of the processor and may be controlled as a function of internal or external parameters. Seven breadboard P3s and one 'flight-type' engineering model P3 have been built and tested both individually and in electrical power systems. The programmable feature allows the P3 to be used in a variety of applications by changing the output characteristics. Test results, including efficiency at various input/output combinations, transient response, and output impedance, are presented.
The orbital TUS detector simulation
NASA Astrophysics Data System (ADS)
Grinyuk, A.; Grebenyuk, V.; Khrenov, B.; Klimov, P.; Lavrova, M.; Panasyuk, M.; Sharakin, S.; Shirokov, A.; Tkachenko, A.; Tkachev, L.; Yashin, I.
2017-04-01
The TUS space experiment is aimed at studying the energy and arrival distributions of UHECR at E > 7 × 10^19 eV by using the data of EAS fluorescent radiation in the atmosphere. The TUS mission was launched at the end of April 2016 on board the dedicated 'Lomonosov' satellite. The TUSSIM software package has been developed to simulate the performance of the TUS detector for the Fresnel mirror optical parameters, the light concentrator of the photo detector, and the front-end and trigger electronics. Trigger efficiency crucially depends on the background level, which varies over a wide range: from 0.2 × 10^6 to 15 × 10^6 ph/(m^2 μs sr) on moonless and full-moon nights, respectively. The TUSSIM algorithms are described and the expected TUS statistics are presented for 5 years of data collection from the 500 km Sun-synchronous orbit, with allowance for the variability of the background light intensity during the space flight.
A real-time MTFC algorithm of space remote-sensing camera based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Liting; Huang, Gang; Lin, Zhe
2018-01-01
A real-time MTFC algorithm for a space remote-sensing camera, based on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopted a modular design. The on-orbit MTF measurement module calculates the edge spread function, the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC image-filtering module applies the filtering algorithm while effectively suppressing noise. The design used System Generator to implement the image processing algorithms, simplifying the system design structure and the redesign process. Image gray gradient, dot sharpness, edge contrast, and mid-to-high spatial frequencies were enhanced. The image SNR after restoration decreased by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.
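The on-orbit MTF measurement chain described (edge spread function, differentiate to the line spread function, Fourier transform, normalize) can be sketched in a few lines. A synthetic tanh-smoothed edge stands in here for a real edge extracted from imagery; the blur width is a made-up value:

```python
import numpy as np

def mtf_from_esf(esf):
    lsf = np.diff(esf)             # line spread function = derivative of the ESF
    lsf = lsf / lsf.sum()          # unit area so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# synthetic edge blurred by a hypothetical system PSF
x = np.linspace(-8.0, 8.0, 257)
esf = 0.5 * (1.0 + np.tanh(x / 1.5))
mtf = mtf_from_esf(esf)
```

The MTFC step would then boost each spatial frequency by roughly the inverse of this curve (regularized to avoid amplifying noise), which is where the noise-suppression filtering in the abstract comes in.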
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundgren, Stina; Andersen, Birgit; Piškur, Jure
2007-10-01
β-Alanine synthase catalyzes the last step in the reductive degradation pathway for uracil and thymine. Crystals of the recombinant enzyme from D. melanogaster belong to space group C2. Diffraction data to 3.3 Å resolution were collected and analyzed. β-Alanine synthase catalyzes the last step in the reductive degradation pathway for uracil and thymine, which represents the main clearance route for the widely used anticancer drug 5-fluorouracil. Crystals of the recombinant enzyme from Drosophila melanogaster, which is closely related to the human enzyme, were obtained by the hanging-drop vapour-diffusion method. They diffracted to 3.3 Å at a synchrotron-radiation source, belong to space group C2 (unit-cell parameters a = 278.9, b = 95.0, c = 199.3 Å, β = 125.8°) and contain 8–10 molecules per asymmetric unit.
Contamination control of the space shuttle Orbiter crew compartment
NASA Technical Reports Server (NTRS)
Bartelson, Donald W.
1986-01-01
Effective contamination control as applied to manned space flight environments is a discipline characterized and controlled by many parameters. An introduction is given to issues involving Orbiter crew compartment contamination control. An effective ground-processing contamination control program is an essential building block of a successful shuttle mission. Personnel are required to don cleanroom-grade clothing ensembles before entering the crew compartment and to follow cleanroom rules and regulations. Prior to crew compartment entry, materials and equipment must be checked by an orbiter integrity clerk stationed outside the white-room entrance for compliance with program requirements. Analysis and source identification of crew compartment debris studies have been going on for two years. The objective of these studies is to determine and identify particulate-generating materials and activities in the crew compartment. Results show a wide spectrum of many different types of materials. When a source identification is made, corrective action is implemented to minimize or curtail further contaminant generation.
Characteristics, Process Parameters, and Inner Components of Anaerobic Bioreactors
Abdelgadir, Awad; Chen, Xiaoguang; Liu, Jianshe; Xie, Xuehui; Zhang, Jian; Zhang, Kai; Wang, Heng; Liu, Na
2014-01-01
The anaerobic bioreactor applies the principles of biotechnology and microbiology, and nowadays it is used widely in wastewater treatment plants due to its high efficiency, low energy use, and green energy generation. Advantages and disadvantages of the anaerobic process are shown, and three main characteristics of the anaerobic bioreactor (AB), namely, inhomogeneous system, time instability, and space instability, are also discussed in this work. For high efficiency of wastewater treatment, the process parameters of anaerobic digestion, such as temperature, pH, hydraulic retention time (HRT), organic loading rate (OLR), and sludge retention time (SRT), are introduced to take into account the optimum conditions for the living, growth, and multiplication of bacteria. The inner components, which can improve SRT and even enhance mass transfer, are also explained and have been divided into transverse inner components, longitudinal inner components, and biofilm-packing material. Finally, the newly developed special inner components are discussed and found to be more efficient and productive. PMID:24672798
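Two of the loading parameters mentioned, hydraulic retention time and organic loading rate, are related by simple ratios; a quick sketch with made-up, illustrative reactor numbers:

```python
def hrt_days(volume_m3, flow_m3_per_day):
    # hydraulic retention time: average time wastewater stays in the reactor
    return volume_m3 / flow_m3_per_day

def olr(cod_kg_per_m3, hrt_d):
    # organic loading rate: kg COD fed per m3 of reactor volume per day
    return cod_kg_per_m3 / hrt_d

# hypothetical digester: 500 m3 vessel, 50 m3/day influent at 4 kg COD/m3
hrt = hrt_days(500.0, 50.0)    # retention time in days
load = olr(4.0, hrt)           # kg COD / (m3 · day)
```

The coupling is the practical point: raising the influent flow shortens HRT and raises OLR simultaneously, so the two cannot be tuned independently for a fixed reactor volume.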
Vertical-Substrate MPCVD Epitaxial Nanodiamond Growth
Tzeng, Yan-Kai; Zhang, Jingyuan Linda; Lu, Haiyu; ...
2017-02-09
Color-center-containing nanodiamonds have many applications in quantum technologies and biology. Diamondoids, molecular-sized diamonds, have been used as seeds in chemical vapor deposition (CVD) growth. However, optimizing growth conditions to produce high-crystal-quality nanodiamonds with color centers requires varying growth conditions, which often leads to ad hoc and time-consuming, one-at-a-time testing of reaction conditions. In order to rapidly explore parameter space, we developed a microwave plasma CVD technique using a vertical, rather than horizontally oriented, stage-substrate geometry. With this configuration, temperature, plasma density, and atomic hydrogen density vary continuously along the vertical axis of the substrate. This variation allowed rapid identification of growth parameters that yield single-crystal diamonds down to 10 nm in size and 75-nm-diameter nanoparticles with optically active silicon-vacancy (Si-V) centers. Furthermore, this method may provide a means of incorporating a wide variety of dopants in nanodiamonds without ion irradiation damage.
NASA Technical Reports Server (NTRS)
McGuire, Robert E.; Candey, Robert M.
2007-01-01
SPDF now supports a broad range of data, user services, and other activities. These include: CDAWeb current multi-mission data graphics, listings, and file subsetting and supersetting by time and parameters; SSCWeb and 3-D Java client orbit graphics, listings, and conjunction queries; OMNIWeb 1/5/60-minute interplanetary parameters at Earth; product-level SPASE descriptions of data, including holdings of nssdcftp; VSPO SPASE-based heliophysics-wide product and site finding and data use; standard Data format Translation Webservices (DTWS); metrics software; and others. These data and services are available through standard user and application webservices interfaces, so middleware services such as the Heliophysics VxOs, and externally developed clients or services, can readily leverage our data and capabilities. Beyond a short summary of the above, we will conduct the talk as a conversation on evolving VxO needs and our planned approach to leveraging such existing and ongoing services.
Polarization-resolved time-delay signatures of chaos induced by FBG-feedback in VCSEL.
Zhong, Zhu-Qiang; Li, Song-Sui; Chan, Sze-Chun; Xia, Guang-Qiong; Wu, Zheng-Mao
2015-06-15
Polarization-resolved chaotic emission intensities from a vertical-cavity surface-emitting laser (VCSEL) subject to feedback from a fiber Bragg grating (FBG) are numerically investigated. Time-delay (TD) signatures of the feedback are examined through various means, including self-correlations of the intensity time series of the individual polarizations, cross-correlation of the intensity time series between the two polarizations, and permutation entropies calculated for the individual polarizations. The results show that the TD signatures can be clearly suppressed by selecting suitable operation parameters such as the feedback strength, FBG bandwidth, and Bragg frequency. Also, in the operational parameter space, numerical maps of TD signatures and effective bandwidths are obtained, which show regions of chaotic signals with both wide bandwidths and weak TD signatures. Finally, by comparison with a VCSEL subject to feedback from a mirror, the VCSEL subject to feedback from the FBG generally shows better concealment of the TD signatures with similar, or even wider, bandwidths.
Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms
Hu, Zhongyi; Bao, Yukun; Xiong, Tao
2013-01-01
Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature as well as in the literature on commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR highly depends on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than four other evolutionary algorithm based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature. PMID:24459425
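A memetic scheme of this kind pairs a global explorer with a local "individual learning" step. The sketch below is illustrative only: it substitutes plain random sampling for the firefly moves and uses a Hooke-Jeeves-style pattern search as the local step, on a stand-in quadratic objective rather than an actual SVR cross-validation error. It shows the structure of a memetic loop, not the paper's FA-MA algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(p):
    """Stand-in for the SVR hyperparameter objective (e.g., forecast error over
    (log C, log gamma)); a simple quadratic with minimum at (1.0, -0.5)."""
    return (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 0.5) ** 2

def pattern_search(p, f, step=0.5, shrink=0.5, tol=1e-6):
    """Hooke-Jeeves-style local refinement (the 'individual learning' step):
    probe +/- step along each axis, keep improvements, shrink when stuck."""
    best, fbest = p.copy(), f(p)
    while step > tol:
        improved = False
        for i in range(len(best)):
            for d in (+step, -step):
                cand = best.copy()
                cand[i] += d
                fc = f(cand)
                if fc < fbest:
                    best, fbest, improved = cand, fc, True
        if not improved:
            step *= shrink
    return best, fbest

# Memetic loop: global exploration (here random) + local refinement of the best point
pop = rng.uniform(-5, 5, size=(20, 2))
best = min(pop, key=objective)
best, fbest = pattern_search(np.array(best), objective)
print(best, fbest)  # converges near (1.0, -0.5)
```

The division of labor is the point: the global stage avoids getting trapped in poor basins, while the derivative-free local search sharpens the best candidates far faster than global moves alone.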
Observable gravitational waves in pre-big bang cosmology: an update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasperini, M., E-mail: gasperini@ba.infn.it
In the light of the recent results concerning CMB observations and GW detection, we address the question of whether it is possible, in a self-consistent inflationary framework, to simultaneously generate a spectrum of scalar metric perturbations in agreement with Planck data and a stochastic background of primordial gravitational radiation compatible with the design sensitivity of aLIGO/Virgo and/or eLISA. We suggest that this is possible in a string cosmology context, for a wide region of the parameter space of the so-called pre-big bang models. We also discuss the associated values of the tensor-to-scalar ratio relevant to the CMB polarization experiments. We conclude that future, cross-correlated results from CMB observations and GW detectors will be able to confirm or disprove pre-big bang models and—in any case—will impose new significant constraints on the basic string theory/cosmology parameters.
Correlation of the tokamak H-mode density limit with ballooning stability at the separatrix
NASA Astrophysics Data System (ADS)
Eich, T.; Goldston, R. J.; Kallenbach, A.; Sieglin, B.; Sun, H. J.; ASDEX Upgrade Team; Contributors, JET
2018-03-01
We show for JET and ASDEX Upgrade, based on Thomson-scattering measurements, a clear correlation of the density limit of the tokamak H-mode high-confinement regime with the approach to the ideal ballooning instability threshold at the periphery of the plasma. It is shown that the MHD ballooning parameter at the separatrix position, α_sep, increases about linearly with the separatrix density normalized to the Greenwald density, n_e,sep/n_GW, for a wide range of discharge parameters in both devices. The observed operational space is found to reach at maximum n_e,sep/n_GW ≈ 0.4-0.5 at values of α_sep ≈ 2-2.5, in the range of theoretical predictions for ballooning instability. This work supports the hypothesis that the H-mode density limit may be set by ballooning stability at the separatrix.
Universal structural parameter to quantitatively predict metallic glass properties
Ding, Jun; Cheng, Yong-Qiang; Sheng, Howard; ...
2016-12-12
Quantitatively correlating the amorphous structure in metallic glasses (MGs) with their physical properties has been a long-sought goal. Here we introduce 'flexibility volume' as a universal indicator to bridge the structural state the MG is in with its properties, on both atomic and macroscopic levels. The flexibility volume combines static atomic volume with dynamics information via atomic vibrations that probe local configurational space and interaction between neighbouring atoms. We demonstrate that flexibility volume is a physically appropriate parameter that can quantitatively predict the shear modulus, which is at the heart of many key properties of MGs. Moreover, the new parameter correlates strongly with atomic packing topology, and also with the activation energy for thermally activated relaxation and the propensity for stress-driven shear transformations. These correlations are expected to be robust across a very wide range of MG compositions, processing conditions and length scales.
Nishino, Ko; Lombardi, Stephen
2011-01-01
We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.
In silico modeling of the yeast protein and protein family interaction network
NASA Astrophysics Data System (ADS)
Goh, K.-I.; Kahng, B.; Kim, D.
2004-03-01
Understanding how the protein interaction networks of living organisms have evolved or are organized can be a first stepping stone in unveiling how life works at a fundamental level. Here we introduce an in silico "coevolutionary" model for the protein interaction network and the protein family network. The essential ingredients of the model include the protein family identity and its robustness under evolution, as well as three previously proposed ingredients: gene duplication, divergence, and mutation. This model produces a prototypical feature of complex networks in a wide range of parameter space, following the generalized Pareto distribution in connectivity. Moreover, we investigate other structural properties of our model in detail for specific parameter values relevant to the yeast Saccharomyces cerevisiae, showing excellent agreement with the empirical data. Our model indicates that the physical constraints encoded via the domain structure of proteins play a crucial role in protein interactions.
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that performs multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case considering four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
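The heating-coefficient scheme described above can be sketched in a few lines. The following illustrative Python example (the toy model, data value, chain lengths, and temperature ladder are assumptions for illustration, not from the study) runs a heated Metropolis chain at each value of the heating coefficient t and integrates the mean log-likelihood over t, for a conjugate-Gaussian model whose marginal likelihood is known analytically:

```python
import numpy as np

rng = np.random.default_rng(0)

y = 1.5  # single observation; assumed toy model: y ~ N(theta, 1), prior theta ~ N(0, 1)

def log_like(theta):
    return -0.5 * (y - theta) ** 2 - 0.5 * np.log(2 * np.pi)

def log_prior(theta):
    return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)

def mcmc_mean_loglike(t, n_steps=20000, step=1.0):
    """Random-walk Metropolis targeting prior * likelihood**t (the 'heated' chain).
    At t=0 this samples the prior; at t=1 it is the conventional posterior chain."""
    theta = 0.0
    lp = log_prior(theta) + t * log_like(theta)
    trace = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_prior(prop) + t * log_like(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        trace.append(log_like(theta))
    return np.mean(trace[n_steps // 2:])  # discard burn-in

# Thermodynamic integration: log Z = integral over t in [0,1] of E_t[log L]
temps = np.linspace(0.0, 1.0, 11) ** 3  # denser near t = 0, a common choice
means = [mcmc_mean_loglike(t) for t in temps]
log_z_ti = sum(0.5 * (m0 + m1) * (t1 - t0)
               for m0, m1, t0, t1 in zip(means[:-1], means[1:], temps[:-1], temps[1:]))

# Analytic marginal likelihood for this conjugate toy model: y ~ N(0, 2)
log_z_true = -0.25 * y ** 2 - 0.5 * np.log(2 * np.pi * 2)
print(log_z_ti, log_z_true)
```

The two printed values should agree closely, whereas a geometric-mean (harmonic-type) estimator on the t=1 chain alone typically converges far more slowly, which is the abstract's central point.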
NASA Astrophysics Data System (ADS)
Acer, Emine; Çadırlı, Emin; Erol, Harun; Kaya, Hasan; Gündüz, Mehmet
2017-12-01
Dendritic spacing can affect microsegregation profiles and also the formation of secondary phases within interdendritic regions, which influences the mechanical properties of cast structures. To understand dendritic spacings, it is important to understand the effects of growth rate and composition on primary dendrite arm spacing (λ1) and secondary dendrite arm spacing (λ2). In this study, aluminum alloys with concentrations of 1, 3, and 5 wt pct Zn were directionally solidified upwards using a Bridgman-type directional solidification apparatus under a constant temperature gradient (10.3 K/mm), resulting in a wide range of growth rates (8.3-165.0 μm/s). The microstructural parameters λ1 and λ2 were measured and expressed as functions of growth rate and composition using a linear regression analysis method. The values of λ1 and λ2 decreased with increasing growth rates. However, the values of λ1 increased with increasing concentration of Zn in the Al-Zn alloy, while the values of λ2 decreased systematically with increased Zn concentration. In addition, a transition from a cellular to a dendritic structure was observed at a relatively low growth rate (16.5 μm/s) in this study of binary alloys. The experimental results were compared with predictive theoretical models as well as experimental works on dendritic spacing.
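As a minimal sketch of the regression step, spacings that follow a power law in growth rate, λ1 = k V^(-a), can be fit by linear regression in log-log space. The data values, exponent, and prefactor below are synthetic placeholders for illustration, not the paper's measured values:

```python
import numpy as np

# Hypothetical (growth rate, spacing) pairs generated from an assumed power law
V = np.array([8.3, 16.5, 41.0, 83.0, 165.0])  # growth rate, um/s
lam1 = 250.0 * V ** -0.4                       # synthetic primary spacings, um

# Linear regression in log-log space recovers the exponent and prefactor
slope, log_k = np.polyfit(np.log(V), np.log(lam1), 1)
print(-slope, np.exp(log_k))  # exponent 0.4 and prefactor 250 are recovered
```

The same log-linear fit extends to composition dependence by regressing log λ on both log V and log C0, which is the form such spacing correlations usually take.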
14 CFR 1214.813 - Computation of sharing and pricing parameters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Computation of sharing and pricing parameters. 1214.813 Section 1214.813 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SPACE FLIGHT Reimbursement for Spacelab Services § 1214.813 Computation of sharing and pricing...
GMC COLLISIONS AS TRIGGERS OF STAR FORMATION. I. PARAMETER SPACE EXPLORATION WITH 2D SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Benjamin; Loo, Sven Van; Tan, Jonathan C.
We utilize magnetohydrodynamic (MHD) simulations to develop a numerical model for giant molecular cloud (GMC)–GMC collisions between nearly magnetically critical clouds. The goal is to determine if, and under what circumstances, cloud collisions can cause pre-existing magnetically subcritical clumps to become supercritical and undergo gravitational collapse. We first develop and implement new photodissociation-region-based heating and cooling functions that span the atomic-to-molecular transition, creating a multiphase ISM and allowing modeling of non-equilibrium temperature structures. Then, in 2D and with ideal MHD, we explore a wide parameter space of magnetic field strength, magnetic field geometry, collision velocity, and impact parameter, and compare isolated versus colliding clouds. We find factors of ∼2–3 increase in mean clump density from typical collisions, with strong dependence on collision velocity and magnetic field strength, but ultimately limited by flux-freezing in 2D geometries. For geometries enabling flow along magnetic field lines, greater degrees of collapse are seen. We discuss observational diagnostics of cloud collisions, focusing on 13CO(J = 2–1), 13CO(J = 3–2), and 12CO(J = 8–7) integrated intensity maps and spectra, which we synthesize from our simulation outputs. We find that the ratio of J = 8–7 to lower-J emission is a powerful diagnostic probe of GMC collisions.
One-Dimensional, Two-Phase Flow Modeling Toward Interpreting Motor Slag Expulsion Phenomena
NASA Technical Reports Server (NTRS)
Kibbey, Timothy P.
2012-01-01
Aluminum oxide slag accumulation and expulsion was previously shown to be a player in various solid rocket motor phenomena, including the Space Shuttle's Reusable Solid Rocket Motor (RSRM) pressure perturbation, or "blip," and phantom moment. In the latter case, such uncommanded side accelerations near the end of burn have also been identified in several other motor systems. However, efforts to estimate the mass expelled during a given event have come up short. Either bulk calculations are performed without enough physics present, or multiphase, multidimensional Computational Fluid Dynamic analyses are performed that give a snapshot in time and space but do not always aid in grasping the general principle. One-dimensional, two-phase compressible flow calculations yield an analytical result for nozzle flow under certain assumptions. This can be carried further to relate the bulk motor parameters of pressure, thrust, and mass flow rate under the different exhaust conditions driven by the addition of condensed phase mass flow. An unknown parameter is correlated to airflow testing with water injection where mass flow rates and pressure are known. Comparison is also made to full-scale static test motor data where thrust and pressure changes are known and similar behavior is shown. The end goal is to be able to include the accumulation and flow of slag in internal ballistics predictions. This will allow better prediction of the tailoff when much slag is ejected and of mass retained versus time, believed to be a contributor to the widely-observed "flight knockdown" parameter.
The 6dF Galaxy Survey: Mass and Motions in the Local Universe
NASA Astrophysics Data System (ADS)
Colless, M.; Jones, H.; Campbell, L.; Burkey, D.; Taylor, A.; Saunders, W.
2005-01-01
The 6dF Galaxy Survey will provide 167000 redshifts and about 15000 peculiar velocities for galaxies over most of the southern sky out to about cz = 30000 km/s. The survey is currently almost half complete, with the final observations due in mid-2005. An initial data release was made public in December 2002; the first third of the dataset will be released at the end of 2003, with the remaining thirds being released at the end of 2004 and 2005. The status of the survey, the survey database and other relevant information can be obtained from the 6dFGS web site at http://www.mso.anu.edu.au/6dFGS. In terms of constraining cosmological parameters, combining the 6dFGS redshift and peculiar velocity surveys will allow us to: (1) break the degeneracy between the redshift-space distortion parameter beta = Omega_m^0.6/b and the galaxy-mass correlation parameter r_g; (2) measure the four parameters A_g, Gamma, beta and r_g with precisions of between 1% and 3%; (3) measure the variation of r_g and b with scale to within a few percent over a wide range of scales.
Evaluation of gamma dose effect on PIN photodiode using analytical model
NASA Astrophysics Data System (ADS)
Jafari, H.; Feghhi, S. A. H.; Boorboor, S.
2018-03-01
PIN silicon photodiodes are widely used in applications that may involve radiation environments, such as space missions, medical imaging, and non-destructive testing. Radiation-induced damage in these devices causes the photodiode parameters to degrade. In this work, we have used a new approach to evaluate gamma dose effects on a commercial PIN photodiode (BPX65) based on an analytical model. In this approach, the NIEL parameter has been calculated for gamma rays from a 60Co source by GEANT4. The radiation damage mechanisms have been considered by numerically solving the Poisson and continuity equations with the appropriate boundary conditions, parameters, and physical models. Defects caused by radiation in silicon have been formulated in terms of the damage coefficient for the minority carriers' lifetime. The gamma-induced degradation parameters of the silicon PIN photodiode have been analyzed in detail, and the results were compared with experimental measurements as well as with the results of the ATLAS semiconductor simulator to verify and parameterize the analytical model calculations. The results showed reasonable agreement for the BPX65 silicon photodiode irradiated by a 60Co gamma source at total doses up to 5 kGy under different reverse voltages.
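The lifetime-damage formulation referred to above is commonly written in the Messenger-Spratt form. This is the standard textbook relation, stated here as a sketch rather than the paper's exact parameterization:

```latex
\frac{1}{\tau} = \frac{1}{\tau_0} + K_\tau \, D ,
```

where τ0 is the pre-irradiation minority-carrier lifetime, D is the absorbed gamma dose (or particle fluence, depending on the normalization chosen), and K_τ is the damage coefficient extracted by fitting the analytical model to measured degradation.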
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Lin, Guang; Li, Weixuan; Wu, Laosheng; Zeng, Lingzao
2018-03-01
Ensemble smoother (ES) has been widely used in inverse modeling of hydrologic systems. However, for problems where the distribution of model parameters is multimodal, using ES directly would be problematic. One popular solution is to use a clustering algorithm to identify each mode and update the clusters with ES separately. However, this strategy may not be very efficient when the dimension of the parameter space is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, i.e., the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions of model parameters in nonlinear hydrologic systems. The ILUES algorithm works by updating local ensembles of each sample with ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES to assimilate the measurements multiple times. Numerical cases involving nonlinearity and multimodality are tested to illustrate the performance of the proposed method. It is shown that overall the ILUES algorithm can well quantify the parametric uncertainties of complex hydrologic models, whether or not multimodal distributions are present.
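The Kalman-type update that an ensemble smoother applies (and that ILUES applies to each local ensemble) can be sketched as follows. This is a generic textbook ES update on a toy linear-Gaussian problem, not the authors' ILUES implementation; the forward model, ensemble size, and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def es_update(X, Y, d_obs, R):
    """One ensemble smoother update.
    X: (n_par, n_ens) parameter ensemble; Y: (n_obs, n_ens) predicted data;
    d_obs: (n_obs,) observations; R: (n_obs, n_obs) observation-error covariance."""
    n_ens = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    C_xy = Xc @ Yc.T / (n_ens - 1)          # parameter-data cross-covariance
    C_yy = Yc @ Yc.T / (n_ens - 1)          # predicted-data covariance
    K = C_xy @ np.linalg.inv(C_yy + R)      # Kalman-type gain
    D = d_obs[:, None] + np.linalg.cholesky(R) @ rng.standard_normal(Y.shape)
    return X + K @ (D - Y)                  # update with perturbed observations

# Toy linear problem: d = G x + noise, with a known true parameter vector
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
x_true = np.array([1.0, -0.5])
R = 0.01 * np.eye(3)
d_obs = G @ x_true + 0.1 * rng.standard_normal(3)

X = rng.standard_normal((2, 500))   # prior ensemble ~ N(0, I)
Y = G @ X                            # forward-model predictions
X_post = es_update(X, Y, d_obs, R)
print(X_post.mean(axis=1))           # posterior mean moves toward x_true
```

In the iterative, local variant described in the abstract, this update would be applied repeatedly and only to the nearest-neighbor ensemble of each sample, which is what lets distinct modes survive the (otherwise unimodal) Gaussian update.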
76 FR 40751 - National Environmental Policy Act; Wallops Flight Facility; Site-Wide
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-11
..., and to increase the knowledge of the Earth's upper atmosphere and the near space environment. The... NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice (11-062)] National Environmental Policy Act; Wallops Flight Facility; Site- Wide AGENCY: National Aeronautics and Space Administration. ACTION: Notice...
Development of fluxgate magnetometers and applications to the space science missions
NASA Astrophysics Data System (ADS)
Matsuoka, A.; Shinohara, M.; Tanaka, Y.-M.; Fujimoto, A.; Iguchi, K.
2013-11-01
The magnetic field is one of the essential physical parameters for studying space physics and the evolution of the solar system. There are several methods to measure the magnetic field in space with spacecraft and rockets. Of these, the fluxgate magnetometer has been the most widely used because it measures the vector field accurately with modest weight and power budgets. When we attempt more difficult missions, such as multi-satellite observation, landing on a celestial body, or exploration in areas of severe environment, we have to modify the magnetometer or develop new techniques to make the instrument adequate for those projects. For example, we developed a 20-bit delta-sigma analogue-to-digital converter for MGF-I on the BepiColombo MMO satellite to achieve wide-range (±2000 nT) measurement with good resolution in a high-radiation environment. For further future missions, we have examined digitalizing the circuit, which has much potential to drastically reduce the instrument weight, power consumption, and performance dependence on temperature.
NASA Technical Reports Server (NTRS)
deGroh, H. C.; Li, K.; Li, B. Q.
2002-01-01
A 2-D finite element model is presented for the melt growth of single crystals in a microgravity environment with a superimposed DC magnetic field. The model is developed based on the deforming finite element methodology and is capable of predicting the phenomena of the steady and transient convective flows, heat transfer, solute distribution, and solid-liquid interface morphology associated with the melt growth of single crystals in microgravity with and without an applied magnetic field. Numerical simulations were carried out for a wide range of parameters including idealized microgravity conditions, the synthesized g-jitter and the real g-jitter data taken by on-board accelerometers during space flights. The results reveal that the time varying g-jitter disturbances, although small in magnitude, cause an appreciable convective flow in the liquid pool, which in turn produces detrimental effects during the space processing of single crystal growth. An applied magnetic field of appropriate strength, superimposed on microgravity, can be very effective in suppressing the deleterious effects resulting from the g-jitter disturbances.
In-Situ F2-Region Plasma Density and Temperature Measurements from the International Space Station
NASA Technical Reports Server (NTRS)
Coffey, Victoria; Wright, Kenneth; Minow, Joseph
2008-01-01
The International Space Station orbit provides an ideal platform for in-situ studies of space weather effects on the mid- and low-latitude F2-region ionosphere. The Floating Potential Measurement Unit (FPMU), operating on the ISS since August 2006, is a suite of plasma instruments: a Floating Potential Probe (FPP), a Plasma Impedance Probe (PIP), a Wide-sweep Langmuir Probe (WLP), and a Narrow-sweep Langmuir Probe (NLP). This instrument package provides a new opportunity for collaborative multi-instrument studies of the F-region ionosphere during both quiet and disturbed periods. This presentation first describes the operational parameters for each of the FPMU probes and shows examples of an intra-instrument validation. We then show comparisons with the plasma density and temperature measurements derived from the TIMED GUVI ultraviolet imager, the Millstone Hill ground-based incoherent scatter radar, and DIAS digisondes. Finally, we show one of several observations of night-time equatorial density holes demonstrating the capabilities of the probes for monitoring mid- and low-latitude plasma processes.
Fifty-year development of Douglas-fir stands planted at various spacings.
Donald L. Reukema
1979-01-01
A 51-yr record of observations in stands planted at six spacings, ranging from 4 to 12 ft, illustrates clearly the beneficial effects of wide initial spacing and the detrimental effects of carrying too many trees relative to the size to which they will be grown. Not only are trees larger, but yields per acre are greater at wide spacings.
Calibration Laboratory Capabilities Listing as of April 2009
NASA Technical Reports Server (NTRS)
Kennedy, Gary W.
2009-01-01
This document reviews the Calibration Laboratory capabilities for various NASA centers (i.e., Glenn Research Center and Plum Brook Test Facility; Kennedy Space Center; Marshall Space Flight Center; Stennis Space Center; and White Sands Test Facility). Some of the parameters reported are: alternating current, direct current, dimensional, mass, force, torque, pressure and vacuum, safety, and thermodynamics parameters. Some centers reported other parameters.
NASA Astrophysics Data System (ADS)
Richard, Pierre; Zhang, W.-L.; Wu, S.-F.; van Roekeghem, A.; Zhang, P.; Miao, H.; Qian, T.; Nie, S.-M.; Chen, G.-F.; Ding, H.; Xu, N.; Biermann, S.; Capan, C.; Fisk, Z.; Saparov, B. I.; Sefat, A. S.
2015-03-01
It is widely believed that the key ingredients for high-temperature superconductivity are already present in the non-superconducting parent compounds. With its ability to probe the single-particle electronic structure directly in momentum space, ARPES is a very powerful tool for determining which parameters of the electronic structure are possibly relevant for promoting superconductivity. Here we report ARPES studies on the parent compounds of the 122 family of Fe-based superconductors and their 3d transition metal pnictide cousins. In particular, we show that the Fe compound exhibits the largest electronic correlations, possibly a determining factor for unconventional superconductivity.
Evolution of statistical averages: An interdisciplinary proposal using the Chapman-Enskog method
NASA Astrophysics Data System (ADS)
Mariscal-Sanchez, A.; Sandoval-Villalbazo, A.
2017-08-01
This work examines the idea of applying the Chapman-Enskog (CE) method for approximating the solution of the Boltzmann equation beyond the realm of physics, using an information theory approach. Equations describing the evolution of averages and their fluctuations in a generalized phase space are established up to first order in the Knudsen parameter, which is defined as the ratio of the time between interactions (mean free time) to a characteristic macroscopic time. Although the general equations obtained here may be applied in a wide range of disciplines, in this paper only a particular case, related to the evolution of averages in speculative markets, is examined.
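In the standard CE scheme alluded to above, the distribution function is expanded in powers of the Knudsen parameter. The following is a generic sketch of that expansion; the paper's generalized phase-space notation may differ:

```latex
f = f^{(0)}\left(1 + \epsilon\,\phi^{(1)} + \mathcal{O}(\epsilon^{2})\right),
\qquad
\epsilon \equiv \mathrm{Kn} = \frac{\tau_{\mathrm{micro}}}{\tau_{\mathrm{macro}}},
```

where f^(0) is the local-equilibrium distribution and φ^(1) the first-order correction. The evolution of an average ⟨ψ⟩ = ∫ ψ f dΓ then acquires first-order-in-ε transport corrections beyond the zeroth-order (equilibrium) equations, which is the structure the abstract carries over to non-physical systems.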
Successful treatment of IgM paraproteinaemic neuropathy with fludarabine
Wilson, H.; Lunn, M.; Schey, S.; Hughes, R
1999-01-01
OBJECTIVES—To evaluate the response of four patients with IgM paraproteinaemic neuropathy to a novel therapy—pulsed intravenous fludarabine. BACKGROUND—The peripheral neuropathy associated with IgM paraproteinaemia usually runs a chronic, slowly progressive course which may eventually cause severe disability. Treatment with conventional immunosuppressive regimens has been unsatisfactory. Fludarabine is a novel purine analogue which has recently been shown to be effective in low grade lymphoid malignancies. METHODS—Four patients with IgM paraproteinaemic neuropathy were treated with intravenous pulses of fludarabine. Two of the four patients had antibodies to MAG and characteristic widely spaced myelin on nerve biopsy, and a third had characteristic widely spaced myelin only. The fourth had an endoneurial lymphocytic infiltrate on nerve biopsy and a diagnosis of Waldenström's macroglobulinaemia. RESULTS—In all cases subjective and objective clinical improvement occurred, associated with a significant fall in the IgM paraprotein concentration in three cases. Neurophysiological parameters improved in the three patients examined. The treatment was well tolerated. All patients developed mild, reversible lymphopenia and half developed mild generalised myelosuppression, but there were no febrile episodes. CONCLUSION—Fludarabine should be considered as a possible treatment for patients with IgM MGUS paraproteinaemic neuropathy. PMID:10209166
Inertial effects on heat transfer in superhydrophobic microchannels
NASA Astrophysics Data System (ADS)
Cowley, Adam; Maynes, Daniel; Crockett, Julie; Iverson, Brian; BYU Fluids Team
2015-11-01
This work numerically studies the effects of inertia on thermal transport in superhydrophobic microchannels. An infinite parallel-plate channel comprised of structured superhydrophobic walls is considered. The structure of the superhydrophobic surfaces consists of square pillars organized in a square array aligned with the flow direction. Laminar, fully developed flow is explored. The flow is assumed to be non-wetting and to have an idealized flat meniscus. A shear-free, adiabatic boundary condition is used at the liquid/gas interface, while a no-slip, constant heat flux condition is used at the liquid/solid interface. A wide range of Peclet numbers, relative channel spacing distances, and relative pillar sizes are considered. Results are presented in terms of Poiseuille number, Nusselt number, hydrodynamic slip length, and temperature jump length. Interestingly, the thermal transport is varied only slightly by inertial effects over the wide range of parameters explored and compares well with other analytical and numerical work that assumed Stokes flow. It is only for very small relative channel spacing and large Peclet number that inertial effects exert significant influence. Overall, the heat transfer is reduced for the superhydrophobic channels in comparison to classic smooth-walled channels. This research was supported by the National Science Foundation (NSF) - United States (Grant No. CBET-1235881).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konno, Kohkichi, E-mail: kohkichi@tomakomai-ct.ac.jp; Nagasawa, Tomoaki, E-mail: nagasawa@tomakomai-ct.ac.jp; Takahashi, Rohta, E-mail: takahashi@tomakomai-ct.ac.jp
We consider the scattering of a quantum particle by two independent, successive parity-invariant point interactions in one dimension. The parameter space for the two point interactions is given by the direct product of two tori, which is described by four parameters. By investigating the effects of the two point interactions on the transmission probability of plane wave, we obtain the conditions for the parameter space under which perfect resonant transmission occur. The resonance conditions are found to be described by symmetric and anti-symmetric relations between the parameters.
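A concrete special case of this setup can be checked numerically with transfer matrices. The sketch below uses two identical delta potentials, which correspond to one parity-invariant point in the four-parameter family discussed above, in units where ħ = 2m = 1; the interaction strength and separation are illustrative assumptions. Scanning the wavenumber reveals the perfect resonant transmission peaks:

```python
import numpy as np

def delta_matrix(strength, k):
    """Transfer matrix of a single delta potential for a plane wave of
    wavenumber k (units hbar = 2m = 1). Maps (A, B) on the left to (C, D)
    on the right for psi = A e^{ikx} + B e^{-ikx}."""
    c = strength / (2j * k)
    return np.array([[1 + c, c], [-c, 1 - c]])

def free_matrix(k, L):
    """Free propagation over a distance L between the two point interactions."""
    return np.array([[np.exp(1j * k * L), 0], [0, np.exp(-1j * k * L)]])

def transmission(strength, k, L):
    """Transmission probability through two identical deltas separated by L.
    For a unimodular transfer matrix M, T = 1 / |M[1, 1]|**2."""
    M = delta_matrix(strength, k) @ free_matrix(k, L) @ delta_matrix(strength, k)
    return 1.0 / abs(M[1, 1]) ** 2

# Scanning k reveals sharp resonances where T reaches 1 despite strong barriers
ks = np.linspace(0.1, 5.0, 2000)
T = np.array([transmission(5.0, k, 2.0) for k in ks])
print(T.max())  # approaches 1 at resonance
```

The resonance condition picked out numerically here is the double-barrier analogue of the perfect-transmission conditions on the torus parameter space derived in the paper; the delta pair probes only a slice of that space.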
Mapping an operator's perception of a parameter space
NASA Technical Reports Server (NTRS)
Pew, R. W.; Jagacinski, R. J.
1972-01-01
Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.
2013-09-01
Ground testing of prototype hardware and processing algorithms for a Wide Area Space Surveillance System (WASSS). Neil Goldstein, Rainer A… …at Magdalena Ridge Observatory using the prototype Wide Area Space Surveillance System (WASSS) camera, which has a 4 x 60 field-of-view, < 0.05… …objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and a Principal Component Analysis based image…
Scalable Online Network Modeling and Simulation
2005-08-01
AUTHOR(S): Boleslaw Szymanski, Shivkumar Kalyanaraman, Biplab Sikdar and Christopher Carothers. …performance for a wide range of parameter values (parameter sensitivity), understanding of protocol stability and dynamics, and studying feature interactions…
Turbulent magnetic fluctuations in laboratory reconnection
NASA Astrophysics Data System (ADS)
Von Stechow, Adrian; Grulke, Olaf; Klinger, Thomas
2016-07-01
The role of fluctuations and turbulence is an important question in astrophysics. While direct observations in space are rare and difficult, dedicated laboratory experiments provide a versatile environment for the investigation of magnetic reconnection due to their good diagnostic access and wide range of accessible plasma parameters. As such, they also provide an ideal opportunity for the validation of space plasma reconnection theories and numerical simulation results. In particular, we studied magnetic fluctuations within reconnecting current sheets for various reconnection parameters such as the reconnection rate, guide field, and plasma density and temperature. These fluctuations have been previously interpreted as signatures of current sheet plasma instabilities in space and laboratory systems. Especially in low-collisionality plasmas, they may provide a source of anomalous resistivity and thereby contribute a significant fraction of the reconnection rate. We present fluctuation measurements from two complementary reconnection experiments and compare them to numerical simulation results. VINETA.II (Greifswald, Germany) is a cylindrical, high guide field reconnection experiment with an open field line geometry. The reconnecting current sheet has a three-dimensional structure that is predominantly set by the magnetic pitch angle resulting from the superposition of the guide field and the in-plane reconnecting field. Within this current sheet, high-frequency magnetic fluctuations are observed that correlate well with the local current density and show a power-law spectrum with a spectral break at the lower hybrid frequency. Their correlation lengths are found to be extremely short, but propagation is nonetheless observed, with high phase velocities that match the whistler dispersion. To date, the experiment has been run with an external driving field at frequencies higher than the ion cyclotron frequency f_{ci}, which implies that the EMHD framework applies.
Recent machine upgrades allow the inclusion of ion dynamics by reducing the drive frequency below f_{ci}. Two numerical codes (EMHD and hybrid, respectively) have been developed at the Max Planck Institute for solar physics and are used to investigate instability mechanisms and scaling laws for the observed results. MRX (PPPL, Princeton) is a zero-to-medium guide field, toroidal reconnection experiment. Despite the differing plasma parameters, the qualitative magnetic fluctuation behavior (amplitude profiles, spectra, and propagation properties) is comparable to VINETA.II. Results from a new measurement campaign at several different guide fields provide partial overlap with VINETA.II guide field ratios and thereby extend the accessible parameter space of our studies.
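A back-of-envelope sketch of the spectral-break scale mentioned above: the lower hybrid frequency, which in the high-density limit is approximately the geometric mean of the electron and ion cyclotron frequencies. The field strength and ion species below are illustrative assumptions, not the actual VINETA.II or MRX parameters.

```python
# Sketch: cyclotron and lower hybrid frequencies for a magnetized plasma.
# f_LH ~ sqrt(f_ce * f_ci) is the high-density approximation; B and the
# ion mass are illustrative values only.
import math

E_CHARGE = 1.602176634e-19      # C
M_ELECTRON = 9.1093837015e-31   # kg

def cyclotron_frequency(B: float, mass: float) -> float:
    """Cyclotron frequency f_c = qB / (2 pi m) in Hz."""
    return E_CHARGE * B / (2 * math.pi * mass)

def lower_hybrid_frequency(B: float, ion_mass: float) -> float:
    """High-density approximation f_LH ~ sqrt(f_ce * f_ci)."""
    f_ce = cyclotron_frequency(B, M_ELECTRON)
    f_ci = cyclotron_frequency(B, ion_mass)
    return math.sqrt(f_ce * f_ci)

if __name__ == "__main__":
    B = 0.05                              # T, illustrative guide field
    m_ar = 39.948 * 1.66053906660e-27     # argon ion mass, kg
    print(f"f_ce = {cyclotron_frequency(B, M_ELECTRON):.3e} Hz")
    print(f"f_ci = {cyclotron_frequency(B, m_ar):.3e} Hz")
    print(f"f_LH ~ {lower_hybrid_frequency(B, m_ar):.3e} Hz")
```

The ordering f_ci << f_LH << f_ce is what places the observed spectral break between the drive frequency and the electron scales.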
Design and Efficiency Analysis of Operational Scenarios for Space Situational Awareness Radar System
NASA Astrophysics Data System (ADS)
Choi, E. J.; Cho, S.; Jo, J. H.; Park, J.; Chung, T.; Park, J.; Jeon, H.; Yun, A.; Lee, Y.
In order to perform surveillance and tracking of space objects, optical and radar sensors are the technical components of a space situational awareness system. In particular, a space situational awareness radar system in combination with an optical sensor network plays an outstanding role. At present, the OWL-Net (Optical Wide Field patrol Network) optical system, the only such infrastructure for tracking space objects in Korea, is very limited by weather and observation time. The development of a radar system capable of continuous operation is therefore becoming an essential space situational awareness element, and the strategy for its development should be considered. The purpose of this paper is to analyze the efficiency of a radar system for detection and tracking of space objects. The detection capabilities are limited to an altitude of 2,000 km with a debris size of 1 m2 in radar cross section (RCS) for radar operating frequencies in L, S, C, X, and Ku-band. The power budget analysis showed that the maximum detection range of 2,000 km can be achieved with a transmitted power of 900 kW, transmit and receive antenna gains of 40 dB and 43 dB, respectively, a pulse width of 2 ms, and a signal processing gain of 13.3 dB, at a frequency of 1.3 GHz. The required signal-to-noise ratio (SNR) was assumed to be 12.6 dB for a probability of detection of 80% with a false alarm rate of 10^-6. Through the efficiency analysis and trade-off study, the key parameters of the radar system are designed. As a result, this research will provide a guideline for the conceptual design of a space situational awareness system.
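A hedged sketch of the power-budget analysis described above, using the standard monostatic radar range equation. The transmit power, gains, processing gain, required SNR, and frequency follow the abstract; the system noise temperature and losses are assumptions here, so the computed range is illustrative rather than a reproduction of the quoted 2,000 km figure.

```python
# Sketch: maximum detection range from the monostatic radar range
# equation. T_sys = 290 K and zero losses are assumptions, not values
# from the study.
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def max_detection_range(p_tx_w, g_tx_db, g_rx_db, freq_hz, rcs_m2,
                        proc_gain_db, snr_req_db, bandwidth_hz,
                        t_sys_k=290.0, loss_db=0.0):
    """Maximum range (m) at which the required SNR is just met."""
    lam = 299792458.0 / freq_hz
    num = (p_tx_w * 10 ** (g_tx_db / 10) * 10 ** (g_rx_db / 10)
           * lam ** 2 * rcs_m2 * 10 ** (proc_gain_db / 10))
    den = ((4 * math.pi) ** 3 * K_BOLTZMANN * t_sys_k * bandwidth_hz
           * 10 ** (snr_req_db / 10) * 10 ** (loss_db / 10))
    return (num / den) ** 0.25

if __name__ == "__main__":
    # Bandwidth taken as 1/pulse-width for the 2 ms pulse (an assumption).
    r = max_detection_range(p_tx_w=900e3, g_tx_db=40, g_rx_db=43,
                            freq_hz=1.3e9, rcs_m2=1.0, proc_gain_db=13.3,
                            snr_req_db=12.6, bandwidth_hz=1.0 / 2e-3)
    print(f"max detection range ~ {r / 1e3:.0f} km")
```

Because range scales as the fourth root of the power budget, doubling transmit power stretches the detection range by only about 19%.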
Exploring the nonlinear cloud and rain equation
NASA Astrophysics Data System (ADS)
Koren, Ilan; Tziperman, Eli; Feingold, Graham
2017-01-01
Marine stratocumulus cloud decks are regarded as the reflectors of the climate system, returning a significant part of the incoming solar radiation back to space and thus cooling the atmosphere. Such clouds can exist in two stable modes, open and closed cells, for a wide range of environmental conditions. This emergent behavior of the system, and its sensitivity to aerosol and environmental properties, is captured by a set of nonlinear equations. Here, using linear stability analysis, we express the transition from a steady to a limit-cycle state analytically, showing how it depends on the model parameters. We show that the control of the droplet concentration (N), the environmental carrying capacity (H0), and the cloud recovery parameter (τ) can be linked by a single nondimensional parameter, μ = √N/(ατH0), suggesting that for deeper clouds the transition from open (oscillating) to closed (stable fixed point) cells will occur at higher droplet concentration (i.e., higher aerosol loading). The analytical calculations of the possible states, and how they are affected by changes in aerosol and environmental variables, provide an enhanced understanding of the complex interactions of clouds and rain.
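A minimal sketch of the nondimensional transition parameter quoted above, μ = √N/(ατH0). The critical value separating the oscillating (open-cell) and fixed-point (closed-cell) regimes comes from the paper's stability analysis and is left symbolic here; the code only illustrates the stated monotonic trends, and the numeric inputs are illustrative, not values from the study.

```python
# Sketch: the nondimensional parameter mu = sqrt(N) / (alpha * tau * H0)
# controlling the open-to-closed cell transition. All numbers below are
# illustrative placeholders.
import math

def mu(N: float, alpha: float, tau: float, H0: float) -> float:
    """Nondimensional parameter mu = sqrt(N) / (alpha * tau * H0)."""
    return math.sqrt(N) / (alpha * tau * H0)

if __name__ == "__main__":
    # Illustrative values: droplet concentration N, recovery time tau,
    # carrying capacity H0, coefficient alpha.
    for N in (50.0, 200.0, 800.0):
        print(f"N = {N:5.0f}  ->  mu = {mu(N, alpha=1.0, tau=30.0, H0=1.5):.4f}")
```

Higher aerosol loading (larger N) raises μ, while deeper clouds (larger H0) lower it, which is why deeper clouds need more aerosol to cross the same transition threshold.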
Astrophysics to z approx. 10 with Gravitational Waves
NASA Technical Reports Server (NTRS)
Stebbins, Robin; Hughes, Scott; Lang, Ryan
2007-01-01
The most useful characterization of a gravitational wave detector's performance is the accuracy with which astrophysical parameters of potential gravitational wave sources can be estimated. One of the most important source types for the Laser Interferometer Space Antenna (LISA) is inspiraling binaries of black holes. LISA can measure mass and spin to better than 1% for a wide range of masses, even out to high redshifts. The most difficult parameter to estimate accurately is almost always luminosity distance. Nonetheless, LISA can measure luminosity distance of intermediate-mass black hole binary systems (total mass approx. 10^4 solar masses) out to z approx. 10 with distance accuracies approaching 25% in many cases. With this performance, LISA will be able to follow the merger history of black holes from the earliest mergers of proto-galaxies to the present. LISA's performance as a function of mass from 1 to 10^7 solar masses and of redshift out to z approx. 30 will be described. The re-formulation of LISA's science requirements based on an instrument sensitivity model and parameter estimation will be described.
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although on the one hand, digital pathology and new bioimaging technologies find their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as, genetic algorithms or coordinate descents, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often result in a local maximum.
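A sketch of the pitfall described above: plain hill climbing stalls on a local maximum of a multimodal performance surface, while a multi-start variant escapes it. The two-bump objective below is a synthetic stand-in for segmentation-pipeline quality over two tunable parameters, not the study's actual pipeline.

```python
# Sketch: greedy hill climbing vs. multi-start search on a synthetic
# two-bump "quality" surface (local peak ~0.6 near (-1.5, 0), global
# peak ~1.0 near (1.5, 0)).
import math
import random

def quality(x: float, y: float) -> float:
    """Two-bump surface standing in for pipeline performance."""
    return (0.6 * math.exp(-((x + 1.5) ** 2 + y ** 2))
            + 1.0 * math.exp(-((x - 1.5) ** 2 + y ** 2)))

def hill_climb(x, y, steps=2000, step=0.05, rng=None):
    """Greedy random-step ascent: accepts only improving moves."""
    rng = rng or random.Random(0)
    best = quality(x, y)
    for _ in range(steps):
        nx, ny = x + rng.uniform(-step, step), y + rng.uniform(-step, step)
        if quality(nx, ny) > best:
            x, y, best = nx, ny, quality(nx, ny)
    return best

def multi_start(n_starts=20, rng=None):
    """Restart hill climbing from random points and keep the best result."""
    rng = rng or random.Random(1)
    return max(hill_climb(rng.uniform(-3, 3), rng.uniform(-3, 3), rng=rng)
               for _ in range(n_starts))

if __name__ == "__main__":
    stuck = hill_climb(-1.5, 0.0)  # starts on the local bump and stays there
    print(f"single start: {stuck:.3f}, multi-start: {multi_start():.3f}")
```

Genetic algorithms and coordinate descent with restarts play the same role in the study: they can jump out of local performance maxima that trap the greedy climber.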
Wireless modification of the intraoperative examination monitor for awake surgery.
Yoshimitsu, Kitaro; Maruyama, Takashi; Muragaki, Yoshihiro; Suzuki, Takashi; Saito, Taiichi; Nitta, Masayuki; Tanaka, Masahiko; Chernov, Mikhail; Tamura, Manabu; Ikuta, Soko; Okamoto, Jun; Okada, Yoshikazu; Iseki, Hiroshi
2011-01-01
The dedicated intraoperative examination monitor for awake surgery (IEMAS) was originally developed by us to facilitate the process of brain mapping during awake craniotomy and has been successfully used in 186 neurosurgical procedures. This information-sharing device provides the opportunity for all members of the surgical team to visualize a wide spectrum of integrated intraoperative information related to the condition of the patient, nuances of the surgical procedure, and details of the cortical mapping, practically without interruption of the surgical manipulations. A wide set of both anatomical and functional parameters, such as a view of the patient's facial expressions and movements while answering the specific questions, the type of examination test, the position of the surgical instruments, the parameters of the bispectral index monitor, and a general view of the surgical field through the operating microscope, is presented compactly on one screen with several displays. However, the initially designed IEMAS system was occasionally affected by interruption or detachment of the connecting cables, which sometimes interfered with its effective clinical use. Therefore, a new modification of the device was developed. Its specific feature is the installation of wireless transmission technology, using audio-visual transmitters and receivers to transfer images and verbal information. The modified IEMAS system is very convenient to use in the narrow space of the operating room.
Challenges of model transferability to data-scarce regions (Invited)
NASA Astrophysics Data System (ADS)
Samaniego, L. E.
2013-12-01
Developing the ability to globally predict the movement of water on the land surface at spatial scales from 1 to 5 km constitutes one of the grand challenges in land surface modelling. Coping with this grand challenge implies that land surface models (LSMs) should be able to make reliable predictions across locations and/or scales other than those used for parameter estimation. In addition, data scarcity and quality impose further difficulties in attaining reliable predictions of water and energy fluxes at the scales of interest. Current computational limitations also seriously restrict exhaustive investigation of the parameter space of LSMs over large domains (e.g., greater than half a million square kilometers). Addressing these challenges requires holistic approaches that integrate the best techniques available for parameter estimation, field measurements, and remotely sensed data at their native resolutions. An attempt to systematically address these issues is the multiscale parameterisation technique (MPR), which links high-resolution land surface characteristics with effective model parameters. This technique requires a number of pedo-transfer functions and many fewer global parameters (i.e., coefficients) to be inferred by calibration in gauged basins. The key advantage of this technique is the quasi-scale independence of the global parameters, which enables global parameters to be estimated at coarser spatial resolutions and then transferred to (ungauged) areas and scales of interest. In this study we show the ability of this technique to reproduce the observed water fluxes and states over a wide range of climate and land surface conditions, ranging from humid to semiarid and from sparsely to densely forested regions. Results of transferability of global model parameters in space (from humid to semi-arid basins) and across scales (from coarser to finer) clearly indicate the robustness of this technique. Simulations with coarse data sets (e.g.
EOBS forcing 25x25 km2, FAO soil map 1:5000000) using parameters obtained with high resolution information (REGNIE forcing 1x1 km2, BUEK soil map 1:1000000) in different climatic regions indicate the potential of MPR for prediction in data-scarce regions. In this presentation, we will also discuss how the transferability of global model parameters across scales and locations helps to identify deficiencies in model structure and regionalization functions.
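A hedged sketch of the MPR idea described above: a pedo-transfer function maps high-resolution soil properties to a high-resolution parameter field, which is then upscaled to the model grid; only the few global coefficients of the transfer function are calibrated. The transfer function and its coefficients below are invented placeholders, not the calibrated mHM/MPR functions.

```python
# Sketch of multiscale parameter regionalisation (MPR): pedo-transfer
# function on the fine grid, then arithmetic-mean upscaling to the model
# grid. The function form and global coefficients g1..g3 are placeholders.
import numpy as np

def pedo_transfer(sand_frac, clay_frac, g1: float, g2: float, g3: float):
    """Toy pedo-transfer function: porosity-like parameter from texture."""
    return g1 + g2 * sand_frac + g3 * clay_frac

def upscale(field: np.ndarray, factor: int) -> np.ndarray:
    """Arithmetic-mean upscaling of a square high-resolution field."""
    n = field.shape[0] // factor
    return field[:n * factor, :n * factor].reshape(n, factor, n, factor).mean(axis=(1, 3))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    sand = rng.uniform(0.1, 0.9, size=(100, 100))   # fine-grid texture fields
    clay = rng.uniform(0.05, 0.5, size=(100, 100))
    fine = pedo_transfer(sand, clay, g1=0.35, g2=0.1, g3=-0.2)  # placeholder globals
    coarse = upscale(fine, factor=25)               # effective model-grid parameters
    print(coarse.shape, float(coarse.mean()))
```

Because only g1..g3 are calibrated, the same global coefficients can be reapplied at other resolutions and in ungauged basins, which is the quasi-scale independence the abstract refers to.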
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2013 CFR
2013-01-01
…Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM… (14 CFR 1214.117, Aeronautics and Space, National Aeronautics and Space Administration, 2013 edition).
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2012 CFR
2012-01-01
…Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM… (14 CFR 1214.117, Aeronautics and Space, National Aeronautics and Space Administration, 2012 edition).
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2011 CFR
2011-01-01
…Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM… (14 CFR 1214.117, Aeronautics and Space, National Aeronautics and Space Administration, 2011 edition).
NASA Astrophysics Data System (ADS)
Ohata, Koji; Naruse, Hajime; Yokokawa, Miwa; Viparelli, Enrica
2017-11-01
Understanding the formative conditions of fluvial bedforms is significant for both river management and geological studies. Diagrams showing bedform stability conditions have been widely used for the analysis of sedimentary structures. However, the use of discriminants to determine the boundaries between different bedform regimes has not yet been explored. In this study, we use discriminant functions to describe formative conditions for a range of fluvial bedforms in a three-dimensional dimensionless parameter space. We do this by means of discriminant analysis using the Mahalanobis distance. We analyzed 3,793 available laboratory and field data points and used these to produce new bedform phase diagrams. These diagrams employ three dimensionless parameters representing properties of flow hydraulics and sediment particles as their axes. The discriminant functions for bedform regimes proposed herein are quadratic functions of the three dimensionless parameters and are expressed as curved surfaces in 3-D space. These empirical functions can be used to estimate paleoflow velocities from sedimentary structures. As an example of the reconstruction of hydraulic conditions, we calculated the paleoflow velocity of the 2011 Tohoku-Oki tsunami backwash flow from the sedimentary structures of the tsunami deposit. In so doing, we reconstructed reasonable values of the paleoflow velocities.
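A sketch of discriminant analysis with the Mahalanobis distance, as used above to separate bedform regimes: a sample of dimensionless parameters is assigned to the class whose distribution it is closest to. The two regime clouds below are synthetic stand-ins, not the 3,793-point laboratory/field dataset.

```python
# Sketch: Mahalanobis-distance classification between two synthetic
# "bedform regime" distributions in a 3-D dimensionless parameter space.
import numpy as np

def mahalanobis(x: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """Mahalanobis distance of sample x from a class distribution."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def classify(x, means, covs):
    """Index of the class with the smallest Mahalanobis distance."""
    return int(np.argmin([mahalanobis(x, m, c) for m, c in zip(means, covs)]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    regime_a = rng.normal([0.5, 1.0, 0.2], 0.1, size=(200, 3))  # synthetic regime A
    regime_b = rng.normal([1.5, 2.0, 0.8], 0.2, size=(200, 3))  # synthetic regime B
    means = [regime_a.mean(axis=0), regime_b.mean(axis=0)]
    covs = [np.cov(regime_a.T), np.cov(regime_b.T)]
    sample = np.array([0.55, 1.05, 0.25])
    print("regime:", classify(sample, means, covs))
```

The regime boundaries implied by equal Mahalanobis distance are quadratic in the parameters, matching the curved discriminant surfaces described in the abstract.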
A fresh look at crater scaling laws for normal and oblique hypervelocity impacts
NASA Technical Reports Server (NTRS)
Watts, A. J.; Atkinson, D. R.; Rieco, S. R.; Brandvold, J. B.; Lapin, S. L.; Coombs, C. R.
1993-01-01
With the concomitant increase in the amount of man-made debris and the ever increasing use of space satellites, the issue of accidental collisions with particles becomes more severe. While the natural micrometeoroid population is unavoidable and assumed constant, continued launches increase the debris population at a steady rate. Debris currently includes items ranging in size from microns to meters which originated from spent satellites and rocket cases. To understand and model these environments, impact damage in the form of craters and perforations must be analyzed. Returned spacecraft materials such as those from LDEF and Solar Max have provided such a testbed. From these space-exposed samples, various impact parameters (i.e., particle size, particle and target material, particle shape, relative impact speed, etc.) may be determined. These types of analyses require the use of generic analytic scaling laws which can adequately describe the impact effects. Currently, most existing analytic scaling laws are little more than curve fits to limited data and are not based on physics, and thus are not generically applicable over a wide range of impact parameters. During this study, a series of physics-based scaling laws for normal and oblique crater and perforation formation was generated for two types of materials: aluminum and Teflon.
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Thurman, S. W.
1992-01-01
An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytic models are used to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 × 10^-12 rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wide-band and narrow-band (delta)VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 × 10^-12 rad/sec.
NASA Astrophysics Data System (ADS)
Jia, Bing
2014-03-01
A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces using the Hindmarsh-Rose (HR) neuron model in many recent studies, which can interpret almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced, which presented different processes of period-adding bifurcations with chaos as one parameter was changed with the other fixed at different levels. In biological experiments, different period-adding bifurcation scenarios with chaos induced by decreasing the extracellular calcium concentration were observed in some neural pacemakers at different levels of extracellular 4-aminopyridine concentration, and in other pacemakers at different levels of extracellular caesium concentration. By using nonlinear time series analysis, the deterministic dynamics of the experimental chaotic firings were investigated. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence of the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces in the HR neuron model. The results also reveal relationships between different firing patterns in two-dimensional parameter spaces.
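A minimal sketch of the Hindmarsh-Rose model referenced above, integrated with a fixed-step RK4 scheme. The parameter values are the commonly used set (a=1, b=3, c=1, d=5, s=4, x_r=-1.6, r=0.006); they illustrate bursting dynamics generally, not the specific comb-shaped chaotic region mapped in the paper.

```python
# Sketch: Hindmarsh-Rose neuron integrated with fixed-step RK4; spikes
# are counted as upward threshold crossings of the membrane variable x.
import numpy as np

def hr_rhs(state, I=3.0, a=1.0, b=3.0, c=1.0, d=5.0, s=4.0, x_r=-1.6, r=0.006):
    x, y, z = state
    return np.array([y - a * x ** 3 + b * x ** 2 - z + I,   # fast membrane
                     c - d * x ** 2 - y,                    # recovery
                     r * (s * (x - x_r) - z)])              # slow adaptation

def integrate(state, dt=0.01, n_steps=100_000, I=3.0):
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = hr_rhs(state, I)
        k2 = hr_rhs(state + 0.5 * dt * k1, I)
        k3 = hr_rhs(state + 0.5 * dt * k2, I)
        k4 = hr_rhs(state + dt * k3, I)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = state
    return traj

if __name__ == "__main__":
    traj = integrate(np.array([-1.0, 0.0, 2.0]))
    x = traj[:, 0]
    spikes = np.sum((x[1:] > 1.0) & (x[:-1] <= 1.0))   # upward crossings
    print(f"spikes detected: {spikes}")
```

Sweeping two parameters (e.g., I and r) and counting spikes per burst at each grid point is the kind of scan that reveals the period-adding structure of the comb-shaped region.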
Uncertainty Analysis of Simulated Hydraulic Fracturing
NASA Astrophysics Data System (ADS)
Chen, M.; Sun, Y.; Fu, P.; Carrigan, C. R.; Lu, Z.
2012-12-01
Artificial hydraulic fracturing is being used widely to stimulate production of oil, natural gas, and geothermal reservoirs with low natural permeability. Optimization of field design and operation is limited by the incomplete characterization of the reservoir, as well as the complexity of hydrological and geomechanical processes that control the fracturing. Thus, there are a variety of uncertainties associated with the pre-existing fracture distribution, rock mechanics, and hydraulic-fracture engineering that require evaluation of their impact on the optimized design. In this study, a multiple-stage scheme was employed to evaluate the uncertainty. We first define the ranges and distributions of 11 input parameters that characterize the natural fracture topology, in situ stress, geomechanical behavior of the rock matrix and joint interfaces, and pumping operation, to cover a wide spectrum of potential conditions expected for a natural reservoir. These parameters were then sampled 1,000 times in an 11-dimensional parameter space constrained by the specified ranges using the Latin-hypercube method. These 1,000 parameter sets were fed into the fracture simulators, and the outputs were used to construct three designed objective functions, i.e. fracture density, opened fracture length and area density. Using PSUADE, three response surfaces (11-dimensional) of the objective functions were developed and global sensitivity was analyzed to identify the most sensitive parameters for the objective functions representing fracture connectivity, which are critical for sweep efficiency of the recovery process. The second-stage high resolution response surfaces were constructed with dimension reduced to the number of the most sensitive parameters. An additional response surface with respect to the objective function of the fractal dimension for fracture distributions was constructed in this stage. 
Based on these response surfaces, comprehensive uncertainty analyses were conducted among input parameters and objective functions. In addition, reduced-order emulation models resulting from this analysis can be used for optimal control of hydraulic fracturing. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
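A sketch of the sampling-and-emulation workflow described above: a Latin hypercube design over the parameter space (3-D here rather than the study's 11-D), a toy simulator, and a quadratic response surface fitted by least squares. PSUADE itself is not used; this is a numpy-only stand-in, and the objective function is invented.

```python
# Sketch: Latin hypercube sampling plus a quadratic response surface.
# The "simulator" is a synthetic stand-in for the fracture simulator.
import numpy as np

def latin_hypercube(n_samples: int, n_dims: int, rng) -> np.ndarray:
    """Stratified samples in [0, 1]^d: one point per stratum per dimension."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

def simulator(x: np.ndarray) -> np.ndarray:
    """Toy objective standing in for, e.g., fracture density."""
    return 1.0 + 2 * x[:, 0] - x[:, 1] + 0.5 * x[:, 0] * x[:, 2] + x[:, 2] ** 2

def quadratic_design(x: np.ndarray) -> np.ndarray:
    """Design matrix with constant, linear, and second-order terms."""
    n, d = x.shape
    cols = [np.ones(n)] + [x[:, i] for i in range(d)]
    cols += [x[:, i] * x[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    X = latin_hypercube(1000, 3, rng)
    y = simulator(X)
    beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
    resid = quadratic_design(X) @ beta - y
    print(f"max |residual| of quadratic surface: {np.abs(resid).max():.2e}")
```

Once fitted, the cheap response surface replaces the simulator for global sensitivity analysis and, as the abstract notes, for reduced-order optimal control.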
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large-magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by Earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product of the seismic source tensor and the medium Green function, and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Quantifying the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes.
Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error can therefore be a factor of 2, intrinsic to the MCS scale. We tested this hypothesis by analysing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration.
We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts of NEP, C storage, ET, and runoff, and thereby to identify a highly important source of DGVM uncertainty.
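A minimal sketch of the simulated annealing search described above, applied to a toy 2-D multimodal objective rather than BIOMAP's high-dimensional PFT parameter space. The objective, cooling schedule, and move size are illustrative choices, not the study's configuration.

```python
# Sketch: simulated annealing on a Rastrigin-like multimodal "model
# error" surface; Metropolis acceptance lets the search escape local
# minima while the temperature is high.
import math
import random

def objective(x: float, y: float) -> float:
    """Multimodal error surface to minimise (global minimum 0 at origin)."""
    return (x ** 2 + y ** 2) + 10 * (2 - math.cos(2 * math.pi * x)
                                       - math.cos(2 * math.pi * y))

def anneal(x0, y0, t0=50.0, cooling=0.999, n_steps=20000, seed=0):
    rng = random.Random(seed)
    x, y = x0, y0
    e = objective(x, y)
    best, t = e, t0
    for _ in range(n_steps):
        nx, ny = x + rng.gauss(0, 0.3), y + rng.gauss(0, 0.3)
        ne = objective(nx, ny)
        # Metropolis rule: always take downhill moves, sometimes uphill.
        if ne < e or rng.random() < math.exp((e - ne) / t):
            x, y, e = nx, ny, ne
            best = min(best, e)
        t *= cooling   # geometric cooling schedule
    return best

if __name__ == "__main__":
    start = objective(3.7, -2.9)
    print(f"initial error: {start:.2f}, best found: {anneal(3.7, -2.9):.4f}")
```

The early high-temperature phase corresponds to the global exploration the abstract emphasises; as the temperature drops, the search settles into one basin, which is why multiple acceptable parameter sets can be found from different runs.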
Research on the space-borne coherent wind lidar technique and the prototype experiment
NASA Astrophysics Data System (ADS)
Gao, Long; Tao, Yuliang; An, Chao; Yang, Jukui; Du, Guojun; Zheng, Yongchao
2016-10-01
The space-borne coherent wind lidar technique is considered one of the most promising remote sensing methods for measuring the global vector wind profile between the lower and middle atmosphere. Compared with traditional methods, space-borne coherent wind lidar has several advantages: all-day operation; the possibility of integrating many lidar systems into the same satellite because of their light weight and small size; an eye-safe wavelength; and insensitivity to background light. This coherent lidar could therefore be widely applied to earth climate research, disaster monitoring, numerical weather forecasting, and environmental protection. In this paper, a 2 μm space-borne coherent wind lidar system for measuring the vector wind profile is proposed. The technical parameters of the lidar sub-systems are simulated and schemes for all sub-systems are proposed. To validate the technical parameters of the space-borne coherent wind lidar system, the optical off-axis telescope, the weak-laser-signal detection technique, etc., a prototype coherent wind lidar was produced, and experiments checking its performance were carried out with hard and soft targets; the horizontal wind and the vertical wind profile were measured and calibrated, respectively. For this prototype, the wavelength is 1.54 μm, the pulse energy 80 μJ, the pulse width 300 ns, and the diameter of the off-axis telescope 120 mm; a single wedge is used for cone scanning at a 40° angle, and two dual-balanced InGaAs detector modules are used. The experimental results agree well with the simulations and show that the wind profile up to a vertical altitude of 4 km can be measured, with wind velocity and wind direction accuracies better than 1 m/s and +/-10°, respectively.
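A back-of-envelope sketch of coherent wind retrieval for the 1.54 μm prototype described above: the line-of-sight velocity follows from the round-trip Doppler shift, v = λ·f_d/2, and the cone-scan geometry (the 40° wedge angle) maps it to a horizontal component under the simplifying assumption of zero vertical wind.

```python
# Sketch: Doppler shift to line-of-sight wind, and a single-beam
# horizontal projection for the cone-scan geometry. Assumes zero
# vertical wind; a real retrieval combines multiple scan azimuths.
import math

WAVELENGTH = 1.54e-6  # m, the prototype's wavelength

def doppler_shift(v_los: float, wavelength: float = WAVELENGTH) -> float:
    """Round-trip Doppler shift (Hz) for a line-of-sight velocity (m/s)."""
    return 2.0 * v_los / wavelength

def los_to_horizontal(v_los: float, cone_angle_deg: float = 40.0) -> float:
    """Horizontal wind from one line of sight, assuming zero vertical wind."""
    return v_los / math.sin(math.radians(cone_angle_deg))

if __name__ == "__main__":
    f_d = doppler_shift(1.0)  # 1 m/s along the beam
    print(f"Doppler shift for 1 m/s: {f_d / 1e6:.3f} MHz")
    print(f"implied horizontal wind: {los_to_horizontal(1.0):.2f} m/s")
```

At 1.54 μm, a 1 m/s line-of-sight wind corresponds to roughly a 1.3 MHz shift, which sets the frequency resolution the heterodyne detection chain must deliver to meet the stated 1 m/s accuracy.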
Colorado River basin sensitivity to disturbance impacts
NASA Astrophysics Data System (ADS)
Bennett, K. E.; Urrego-Blanco, J. R.; Jonko, A. K.; Vano, J. A.; Newman, A. J.; Bohn, T. J.; Middleton, R. S.
2017-12-01
The Colorado River basin is an important river for the food-energy-water nexus in the United States and is projected to change under future scenarios of increased CO2 emissions and warming. Streamflow estimates that account for climate impacts of this warming are often provided using modeling tools that rely on uncertain inputs; to fully understand impacts on streamflow, sensitivity analysis can help determine how models respond to changing disturbances such as climate and vegetation. In this study, we conduct a global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the Variable Infiltration Capacity (VIC) hydrologic model to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in VIC. Additionally, we examine sensitivities of basin-wide model simulations using an approach that incorporates changes in temperature, precipitation and vegetation to consider impact responses for snow-dominated headwater catchments, low-elevation arid basins, and the upper and lower river basins. We find that for the Colorado River basin, snow-dominated regions are more sensitive to uncertainties. Newly identified parameter sensitivities include the sensitivity of runoff and evapotranspiration to albedo, and of snow water equivalent to canopy fraction and Leaf Area Index (LAI). Basin-wide streamflow sensitivities to precipitation, temperature and vegetation vary seasonally and between sub-basins, with the largest sensitivities for smaller, snow-driven headwater systems where forests are dense. For a major headwater basin, the streamflow impact of 1 ºC of warming was equivalent to that of a 30% loss of forest cover, while a 10% decrease in precipitation was equivalent to a 90% decline in forest cover.
Scenarios utilizing multiple disturbances led to unexpected results, where changes could either magnify or diminish extremes such as low and peak flows and streamflow timing, depending on the strength and direction of the forcing. These results indicate the importance of understanding model sensitivities under disturbance impacts in order to manage these shifts, plan for future water resource changes, and determine how the impacts will affect the sustainability and adaptability of food-energy-water systems.
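The global sampling step described above — a space-filling Latin Hypercube over the model parameter space, with outputs then related back to inputs — can be sketched in a few lines. This is only a hedged illustration, not the VIC/emulator workflow itself: `toy_runoff`, its two parameters, and their ranges are invented stand-ins.

```python
import numpy as np

def latin_hypercube(n, bounds, seed=0):
    """Space-filling sample: each parameter gets exactly one draw per stratum."""
    rng = np.random.default_rng(seed)
    samples = np.empty((n, len(bounds)))
    for j, (lo, hi) in enumerate(bounds):
        # one uniform draw in each of n equal strata, strata order shuffled
        strata = (rng.permutation(n) + rng.random(n)) / n
        samples[:, j] = lo + strata * (hi - lo)
    return samples

def toy_runoff(theta):
    """Invented stand-in for the hydrologic model's runoff response."""
    albedo, lai = theta
    return 100.0 * (1.0 - albedo) - 5.0 * lai

X = latin_hypercube(1000, [(0.1, 0.9), (0.5, 6.0)])   # assumed albedo, LAI ranges
y = np.array([toy_runoff(t) for t in X])

# crude sensitivity measure: correlation of each parameter with the output
for name, col in zip(["albedo", "LAI"], X.T):
    print(f"{name}: r = {np.corrcoef(col, y)[0, 1]:+.2f}")
```

In practice the regression step would be replaced by a statistical emulator and variance-based indices, but the stratified sampling above is the core of the design.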
Optimizing detection and analysis of slow waves in sleep EEG.
Mensen, Armand; Riedner, Brady; Tononi, Giulio
2016-12-01
Analysis of individual slow waves in EEG recordings during sleep provides greater sensitivity and specificity than spectral power measures. However, parameters for detection and analysis have not been widely explored and validated. We present a new, open-source, Matlab-based toolbox for the automatic detection and analysis of slow waves, with adjustable parameter settings as well as manual correction and exploration of the results using a multi-faceted visualization tool. We explore a large search space of parameter settings for slow wave detection and measure their effects on a selection of outcome parameters. Every choice of parameter setting had some effect on at least one outcome parameter. In general, the largest effect sizes were found when choosing the EEG reference, the type of canonical waveform, and the amplitude threshold. Previously published methods accurately detect large, global waves but are conservative and miss smaller-amplitude, local slow waves. The toolbox has additional benefits in terms of speed, user interface, and visualization options to compare and contrast slow waves. The exploration of parameter settings in the toolbox highlights the importance of careful selection of detection methods. The sensitivity and specificity of the automated detection can be improved by manually adding or deleting entire waves and/or specific channels using the toolbox visualization functions. The toolbox standardizes the detection procedure, sets the stage for reliable results and comparisons, and is easy to use without previous programming experience. Copyright © 2016 Elsevier B.V. All rights reserved.
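The kind of amplitude-and-duration detection whose parameters are explored above can be sketched minimally. This is a toy detector, not the toolbox's algorithm: the threshold values and the synthetic trace are illustrative assumptions.

```python
import numpy as np

def detect_slow_waves(eeg, fs, neg_thresh=-40.0, dur_range=(0.25, 1.0)):
    """Toy detector: a slow wave is a negative half-wave between two zero
    crossings whose trough amplitude and duration pass the thresholds.
    Threshold values are illustrative, not the toolbox defaults."""
    crossings = np.where(np.diff(np.signbit(eeg)))[0]
    waves = []
    for a, b in zip(crossings[:-1], crossings[1:]):
        seg = eeg[a + 1:b + 1]          # samples between two sign changes
        dur = (b - a) / fs              # half-wave duration in seconds
        if seg.min() <= neg_thresh and dur_range[0] <= dur <= dur_range[1]:
            waves.append((a, b, seg.min()))
    return waves

# synthetic trace: a 1 Hz, ~50 uV slow oscillation plus low-amplitude noise
fs = 100
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = -50.0 * np.sin(2.0 * np.pi * 1.0 * t) + 2.0 * rng.standard_normal(t.size)
n_waves = len(detect_slow_waves(eeg, fs))
print(n_waves)   # roughly one detected wave per second of signal
```

Changing `neg_thresh` or `dur_range` changes the detected count, which is exactly the parameter dependence the abstract quantifies.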
14 CFR § 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2014 CFR
2014-01-01
... flights: (1) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission...
NASA Astrophysics Data System (ADS)
Fox, Benjamin D.; Selby, Neil D.; Heyburn, Ross; Woodhouse, John H.
2012-09-01
Estimating reliable depths for shallow seismic sources is important in both seismo-tectonic studies and in seismic discrimination studies. Surface wave excitation is sensitive to source depth, especially at intermediate and short periods, owing to the approximately exponential decay of surface wave displacements with depth. A new method is presented here to retrieve earthquake source parameters from regional and teleseismic intermediate-period (100-15 s) fundamental-mode surface wave recordings. This method makes use of advances in mapping global dispersion to allow higher-frequency surface wave recordings at regional and teleseismic distances to be used with more confidence than in previous studies and hence improve the resolution of depth estimates. Synthetic amplitude spectra are generated using surface wave theory combined with a great circle path approximation, and a grid of double-couple sources is compared with the data. Source parameters producing the best-fitting amplitude spectra are identified by minimizing the least-squares misfit in logarithmic amplitude space. The F-test is used to search the solution space for statistically acceptable parameters and the ranges of these variables are used to place constraints on the best-fitting source. Estimates of focal mechanism, depth and scalar seismic moment are determined for 20 small-to-moderate-sized (4.3 ≤ Mw ≤ 6.4) earthquakes. These earthquakes are situated across a wide range of geographic and tectonic locations and describe a range of faulting styles over the depth range 4-29 km. For the larger earthquakes, comparisons with other studies are favourable; however, existing source determination procedures, such as the CMT technique, cannot be performed for the smaller events.
By reducing the magnitude threshold at which robust source parameters can be determined, this methodology can improve the accuracy of seismo-tectonic studies, seismic hazard assessments, and seismic discrimination investigations, especially at shallow depths.
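The grid-search step — minimizing a least-squares misfit in logarithmic amplitude space over candidate source parameters — can be illustrated with a toy one-parameter version. The forward model below is an invented exponential depth-decay kernel (the constant 0.4 is arbitrary), not real surface-wave excitation theory.

```python
import numpy as np

# toy forward model: log amplitude vs period for a source at a given depth;
# surface-wave displacements decay roughly exponentially with depth, so deeper
# sources excite the shorter periods less (kernel constant is invented)
periods = np.array([15.0, 25.0, 50.0, 100.0])   # seconds

def log_amp(depth_km):
    return -depth_km / (0.4 * periods)

# synthetic "observed" spectrum from a source at 12 km depth, plus small noise
observed = log_amp(12.0) + 0.01 * np.random.default_rng(5).standard_normal(4)

# grid search: minimize the least-squares misfit in logarithmic amplitude space
depths = np.arange(1.0, 30.0, 0.5)
misfit = np.array([np.sum((observed - log_amp(d)) ** 2) for d in depths])
best = depths[int(np.argmin(misfit))]
print(f"best-fitting depth ≈ {best:.1f} km")
```

The real method searches a grid of double-couple mechanisms and depths jointly and uses an F-test on the misfit surface to bound the acceptable parameter ranges.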
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
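The solution-space/null-space decomposition at the heart of NSMC can be sketched with plain linear algebra. The snippet below is a minimal sketch under stated assumptions: an invented 4-observation, 10-parameter linearized model stands in for the calibrated groundwater model, and the SVD supplies the two subspaces.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical linearized model: J maps 10 parameters to 4 observations
J = rng.standard_normal((4, 10))
theta_cal = rng.standard_normal(10)       # a single calibrated parameter set

# SVD splits parameter space: rows of Vt with s_i > tol span the solution space
U, s, Vt = np.linalg.svd(J)
k = int(np.sum(s > 1e-10))                # dimension of the solution space
V_null = Vt[k:].T                         # basis of the null space (10 x 6)

# NSMC: add random null-space perturbations to the calibrated set; to first
# order the simulated observations are unchanged
ensemble = [theta_cal + V_null @ rng.standard_normal(V_null.shape[1])
            for _ in range(200)]

# every ensemble member fits the data as well as the calibrated set
residuals = [np.linalg.norm(J @ (m - theta_cal)) for m in ensemble]
print(max(residuals))
```

The bias discussed in the abstract arises because all 200 members inherit the single calibrated solution-space component; M-NSMC breaks that by repeating the construction from several calibrated starting points.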
Chaikuad, Apirat; Knapp, Stefan; von Delft, Frank
2015-01-01
The quest for an optimal limited set of effective crystallization conditions remains a challenge in macromolecular crystallography, an issue that is complicated by the large number of chemicals which have been deemed to be suitable for promoting crystal growth. The lack of rational approaches towards the selection of successful chemical space and representative combinations has led to significant overlapping conditions, which are currently present in a multitude of commercially available crystallization screens. Here, an alternative approach to the sampling of widely used PEG precipitants is suggested through the use of PEG smears, which are mixtures of different PEGs with a requirement of either neutral or cooperatively positive effects of each component on crystal growth. Four newly defined smears were classified by molecular-weight groups and enabled the preservation of specific properties related to different polymer sizes. These smears not only allowed a wide coverage of properties of these polymers, but also reduced PEG variables, enabling greater sampling of other parameters such as buffers and additives. The efficiency of the smear-based screens was evaluated on more than 220 diverse recombinant human proteins, which overall revealed a good initial crystallization success rate of nearly 50%. In addition, in several cases successful crystallizations were only obtained using PEG smears, while various commercial screens failed to yield crystals. The defined smears therefore offer an alternative approach towards PEG sampling, which will benefit the design of crystallization screens sampling a wide chemical space of this key precipitant. PMID:26249344
Wide Field and Planetary Camera for Space Telescope
NASA Technical Reports Server (NTRS)
Lockhart, R. F.
1982-01-01
The Space Telescope's Wide Field and Planetary Camera instrument, presently under construction, will be used to map the observable universe and to study the outer planets. It will be able to see 1000 times farther than any previously employed instrument. The Wide Field system will be located in a radial bay, receiving its signals via a pick-off mirror centered on the optical axis of the telescope assembly. The external thermal radiator employed by the instrument for cooling will be part of the exterior surface of the Space Telescope. In addition to having a larger (1200-12,000 A) wavelength range than any of the other Space Telescope instruments, its data rate, at 1 Mb/sec, exceeds that of the other instruments. Attention is given to the operating modes and projected performance levels of the Wide Field Camera and Planetary Camera.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xingyuan; Miller, Gretchen R.; Rubin, Yoram
2012-09-13
The heat pulse method is widely used to measure water flux through plants; it works by inferring the velocity of water through a porous medium from the speed at which a heat pulse is propagated through the system. No systematic, non-destructive calibration procedure exists to determine the site-specific parameters necessary for calculating sap velocity, e.g., wood thermal diffusivity and probe spacing. Such parameter calibration is crucial to obtain the correct transpiration flux density from the sap flow measurements at the plant scale and, consequently, to up-scale tree-level water fluxes to canopy and landscape scales. The purpose of this study is to present a statistical framework for estimating the wood thermal diffusivity and probe spacing simultaneously from in-situ heat response curves collected by the implanted probes of a heat ratio apparatus. Conditioned on the time traces of wood temperature following a heat pulse, the parameters are inferred using a Bayesian inversion technique, based on the Markov chain Monte Carlo sampling method. The primary advantage of the proposed methodology is that it does not require known probe spacing or any further intrusive sampling of sapwood. The Bayesian framework also enables direct quantification of uncertainty in estimated sap flow velocity. Experiments using synthetic data show that repeated tests using the same apparatus are essential to obtain reliable and accurate solutions. When applied to field conditions, these tests are conducted during different seasons and automated using the existing data logging system. The seasonality of wood thermal diffusivity is obtained as a by-product of the parameter estimation process, and it is shown to be affected by both moisture content and temperature. Empirical factors are often introduced to account for the influence of non-ideal probe geometry on the estimation of heat pulse velocity, and they are estimated in this study as well.
The proposed methodology can be applied for the calibration of existing heat ratio sap flow systems at other sites. It is especially useful when an alternative transpiration calibration device, such as a lysimeter, is not available.
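The Bayesian inversion of a heat-response curve can be sketched with a basic Metropolis sampler. This is a minimal sketch under simplifying assumptions, not the authors' code: an idealized line-source response stands in for the full heat transport model, the pulse strength `A` is taken as known, and the parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def pulse(t, D, x, A=1.0):
    """Idealized line-source heat-pulse response at distance x (cm) from the
    heater: temperature rise vs time t (s) for thermal diffusivity D (cm^2/s).
    A lumps pulse strength and heat capacity and is assumed known here."""
    return (A / (D * t)) * np.exp(-x**2 / (4.0 * D * t))

# synthetic "probe" data: true diffusivity 0.0025 cm^2/s, probe spacing 0.6 cm
t = np.linspace(5.0, 60.0, 40)
sigma = 0.05
data = pulse(t, 0.0025, 0.6) + rng.normal(0.0, sigma, t.size)

def log_post(D, x):
    """Gaussian log-likelihood with flat priors on bounded intervals."""
    if not (1e-4 < D < 0.02 and 0.1 < x < 2.0):
        return -np.inf
    r = data - pulse(t, D, x)
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis random walk over (D, x), deliberately started away from the truth
D, x = 0.004, 0.8
lp = log_post(D, x)
chain = []
for _ in range(30000):
    Dp, xp = D + rng.normal(0.0, 5e-5), x + rng.normal(0.0, 0.01)
    lpp = log_post(Dp, xp)
    if np.log(rng.random()) < lpp - lp:      # accept/reject step
        D, x, lp = Dp, xp, lpp
    chain.append((D, x))

burn = np.array(chain[5000:])                # discard burn-in
D_mean, x_mean = burn[:, 0].mean(), burn[:, 1].mean()
print(f"posterior means: D ≈ {D_mean:.4f} cm^2/s, x ≈ {x_mean:.2f} cm")
```

The spread of the retained chain gives the direct uncertainty quantification on sap velocity that the abstract highlights; no separately measured probe spacing is needed.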
A new Predictive Model for Relativistic Electrons in Outer Radiation Belt
NASA Astrophysics Data System (ADS)
Chen, Y.
2017-12-01
Relativistic electrons trapped in the Earth's outer radiation belt present a highly hazardous radiation environment for spaceborne electronics. These energetic electrons, with kinetic energies up to several megaelectron-volts (MeV), exhibit highly dynamic and event-specific behavior due to the delicate interplay of competing transport, acceleration and loss processes. Therefore, developing a forecasting capability for outer belt MeV electrons has long been a critical and challenging task for the space weather community. Recently, the vital role of electron resonance with waves (such as chorus and electromagnetic ion cyclotron waves) has been widely recognized; however, it is still difficult for current diffusion radiation belt models to reproduce the behavior of MeV electrons during individual geomagnetic storms, mainly because of the large uncertainties in input parameters. In this work, we expanded our previous cross-energy, cross-pitch-angle coherence study and developed a new predictive model for MeV electrons over a wide range of L-shells inside the outer radiation belt. This new model uses NOAA POES observations from low-Earth orbit (LEO) as inputs to provide high-fidelity nowcasts (multiple-hour predictions) and forecasts (> 1 day predictions) of the energization of MeV electrons, as well as of the evolving MeV electron distributions afterwards during storms. Performance of the predictive model is quantified against long-term in situ data from the Van Allen Probes and LANL GEO satellites. This study adds new scientific significance to an existing LEO space infrastructure and provides reliable and powerful tools to the space community.
A Comparison of Space and Ground Based Facility Environmental Effects for FEP Teflon. Revised
NASA Technical Reports Server (NTRS)
Rutledge, Sharon K.; Banks, Bruce A.; Kitral, Michael
1998-01-01
Fluorinated Ethylene Propylene (FEP) Teflon is widely used as a thermal control material for spacecraft, however, it is susceptible to erosion, cracking, and subsequent mechanical failure in low Earth orbit. One of the difficulties in determining whether FEP Teflon will survive during a mission is the wide disparity of erosion rates observed for this material in space and in ground based facilities. Each environment contains different levels of atomic oxygen, ions, and vacuum ultraviolet (VUV) radiation in addition to parameters such as the energy of the arriving species and temperature. These variations make it difficult to determine what is causing the observed differences in erosion rates. This paper attempts to narrow down which factors affect the erosion rate of FEP Teflon through attempting to change only one environmental constituent at a time. This was attempted through the use of a single simulation facility (plasma asher) environment with a variety of Faraday cages and VUV transparent windows. Isolating one factor inside of a radio frequency (RF) plasma proved to be very difficult. Two observations could be made. First, it appears that the erosion yield of FEP Teflon with respect to that of polyimide Kapton is not greatly affected by the presence or lack of VUV radiation present in the RF plasma and the relative erosion yield for the FEP Teflon may decrease with increasing fluence. Second, shielding from charged particles appears to lower the relative erosion yield of the FEP to approximately that observed in space, however it is difficult to determine for sure whether ions, electrons, or some other components are causing the enhanced erosion.
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between matter density parameter and normalization of matter density fluctuations is reproduced for several cases, and the capabilities of breaking this degeneracy by weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
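The Gaussianization step underlying the Box-Cox-Fisher formalism can be demonstrated in a few lines with SciPy. This is only a sketch of the transformation idea, with lognormal samples standing in for a skewed posterior; the full Fisher-matrix forecast machinery of the paper is not reproduced.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)

# skewed "posterior" samples for one parameter (a lognormal stand-in)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=20000)

# find the Box-Cox lambda that best Gaussianizes the samples;
# for lognormal input the maximum-likelihood lambda is close to 0 (log transform)
transformed, lam = stats.boxcox(samples)

# in the transformed space a Gaussian (Fisher-matrix-like) summary is accurate
mu, sig = transformed.mean(), transformed.std()

# map the +/-1 sigma interval back to the original, skewed parameter space
lo, hi = inv_boxcox(mu - sig, lam), inv_boxcox(mu + sig, lam)
print(f"lambda = {lam:.2f}, 68% interval: [{lo:.2f}, {hi:.2f}]")
```

In the paper's setting the transformation is multivariate and the Fisher matrix is computed for the transformed parameters, but the one-dimensional case above shows why a Gaussian approximation becomes valid after the transform.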
Bubble Entropy: An Entropy Almost Free of Parameters.
Manis, George; Aktaruzzaman, Md; Sassi, Roberto
2017-11-01
Objective: A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy. Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and count instead the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m. In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least, widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
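The swap-counting idea is easy to sketch: embed the series into m-sample vectors, count the bubble-sort swaps each vector needs, and take the entropy of the resulting swap-count distribution. The snippet below is a simplified illustration of that core idea only; the published definition normalizes differently and compares embedding dimensions, so this is not the paper's exact measure.

```python
import numpy as np
from math import log

def bubble_swaps(v):
    """Number of swaps bubble sort needs to order v (its inversion count)."""
    v = list(v)
    swaps = 0
    for i in range(len(v) - 1):
        for j in range(len(v) - 1 - i):
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
                swaps += 1
    return swaps

def swap_entropy(x, m):
    """Shannon entropy of the swap-count distribution over m-sample embeddings."""
    counts = {}
    for i in range(len(x) - m + 1):
        s = bubble_swaps(x[i:i + m])
        counts[s] = counts.get(s, 0) + 1
    n = sum(counts.values())
    return -sum(c / n * log(c / n) for c in counts.values())

rng = np.random.default_rng(0)
h_noise = swap_entropy(rng.standard_normal(2000), m=4)     # irregular signal
h_sine = swap_entropy(np.sin(0.1 * np.arange(2000)), m=4)  # regular signal
print(h_noise, h_sine)   # the noise yields the higher entropy
```

Note there is no scale factor r anywhere in the computation, which is the parameter elimination the abstract emphasizes; only m remains, and only through the coarse swap counts.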
Sensitivity of Dynamical Systems to Banach Space Parameters
2005-02-13
We consider general nonlinear dynamical systems in a Banach space with dependence on parameters in a second Banach space. An abstract theoretical framework for sensitivity equations is developed. An application to measure-dependent delay differential systems arising in a class of HIV models is presented.
Tethered Satellites as Enabling Platforms for an Operational Space Weather Monitoring System
NASA Technical Reports Server (NTRS)
Krause, L. Habash; Gilchrist, B. E.; Bilen, S.; Owens, J.; Voronka, N.; Furhop, K.
2013-01-01
Space weather nowcasting and forecasting models require assimilation of near-real time (NRT) space environment data to improve the precision and accuracy of operational products. Typically, these models begin with a climatological model to provide "most probable distributions" of environmental parameters as a function of time and space. The process of NRT data assimilation gently pulls the climate model closer toward the observed state (e.g. via Kalman smoothing) for nowcasting, and forecasting is achieved through a set of iterative physics-based forward-prediction calculations. The issue of required space weather observatories to meet the spatial and temporal requirements of these models is a complex one, and we do not address that with this poster. Instead, we present some examples of how tethered satellites can be used to address the shortfalls in our ability to measure critical environmental parameters necessary to drive these space weather models. Examples include very long baseline electric field measurements, magnetized ionospheric conductivity measurements, and the ability to separate temporal from spatial irregularities in environmental parameters. Tethered satellite functional requirements will be presented for each space weather parameter considered in this study.
NASA Astrophysics Data System (ADS)
Kang, Jai Young
2005-12-01
The objectives of this study are to perform an extensive analysis of internal mass motion over a wider parameter space and to provide suitable design criteria with broader applicability for this class of spinning space vehicles. To examine the stability criterion determined by a perturbation method, numerical simulations are performed and compared at various parameter points. In this paper, the Ince-Strutt diagram for determining the stable and unstable regions of the internal mass motion of a spinning, thrusting space vehicle in terms of design parameters is obtained by an analytical method. Phase trajectories of the motion are also obtained for various parameter values and their characteristics compared.
2007-09-30
LONG-TERM GOALS: The goal of our research is to develop systems that use a widely spaced hydrophone array to contribute to the behavioral ecology of marine mammals by simultaneously tracking multiple vocalizing individuals in space and time.
Scaling for quantum tunneling current in nano- and subnano-scale plasmonic junctions.
Zhang, Peng
2015-05-19
When two conductors are separated by a sufficiently thin insulator, electrical current can flow between them by quantum tunneling. This paper presents a self-consistent model of tunneling current in a nano- and subnano-meter metal-insulator-metal plasmonic junction, by including the effects of space charge and exchange correlation potential. It is found that the J-V curve of the junction may be divided into three regimes: direct tunneling, field emission, and space-charge-limited regime. In general, the space charge inside the insulator reduces current transfer across the junction, whereas the exchange-correlation potential promotes current transfer. It is shown that these effects may modify the current density by orders of magnitude from the widely used Simmons' formula, which is only accurate for a limited parameter space (insulator thickness > 1 nm and barrier height > 3 eV) in the direct tunneling regime. The proposed self-consistent model may provide a more accurate evaluation of the tunneling current in the other regimes. The effects of anode emission and material properties (i.e. work function of the electrodes, electron affinity and permittivity of the insulator) are examined in detail in various regimes. Our simple model and the general scaling for tunneling current may provide insights to new regimes of quantum plasmonics.
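The Simmons formula the paper benchmarks against is easy to evaluate in its commonly quoted practical form (current density in A/cm², thickness in Å, barrier height in eV, intermediate-voltage regime V < φ0). The constants below are the usually tabulated ones, assumed rather than derived here; the paper's point is precisely that this expression is only accurate for thicknesses above about 1 nm and barriers above about 3 eV.

```python
import math

def simmons_current_density(V, d_ang, phi0):
    """Simmons' direct-tunneling formula for a rectangular barrier,
    practical form: J in A/cm^2, d_ang in Angstroms, phi0 and V in eV/volts."""
    phi_bar = phi0 - V / 2.0           # mean barrier height under bias
    c = 1.025                          # decay constant, eV^-1/2 Angstrom^-1
    term1 = phi_bar * math.exp(-c * d_ang * math.sqrt(phi_bar))
    term2 = (phi_bar + V) * math.exp(-c * d_ang * math.sqrt(phi_bar + V))
    return 6.2e10 / d_ang**2 * (term1 - term2)

# exponential sensitivity to thickness: the regime where the space-charge and
# exchange-correlation corrections discussed above become important
for d in (8.0, 10.0, 15.0):
    print(d, simmons_current_density(0.5, d, 3.0))
```

For sub-nanometer gaps or low barriers the self-consistent model of the paper, not this formula, is the appropriate tool.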
EnviroNET: On-line information for LDEF
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1993-01-01
EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day - every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government research facilities, industry, universities, and the European Space Agency. The models accept parameter input from the user, then calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, magnetic fields, and the ionosphere. A user-friendly, informative interface is standard for all the models and includes a pop-up help window with information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to do 'What if...' scenarios. A proposed plan for developing a repository of information from the Long Duration Exposure Facility (LDEF) for a user group is presented.
Amasya, Gulin; Badilli, Ulya; Aksu, Buket; Tarimci, Nilufer
2016-03-10
Quality by Design (QbD) is a systematic approach in which all production processes are designed and developed to achieve a final product of predetermined quality: work is carried out within a design space defined by the critical formulation and process parameters, so that verification of the quality of the final product is no longer necessary. In the current study, the QbD approach was used in the preparation of lipid nanoparticle formulations to improve skin penetration of 5-Fluorouracil, a widely used compound for treating non-melanoma skin cancer. 5-Fluorouracil-loaded lipid nanoparticles were prepared by the W/O/W double emulsion - solvent evaporation method. Artificial neural network software was used to evaluate the data obtained from the lipid nanoparticle formulations, to establish the design space, and to optimize the formulations. Two different artificial neural network models were developed. The limit values of the design space of the inputs and outputs obtained by both models were found to be within the knowledge space. The optimal formulations recommended by the models were prepared and the critical quality attributes belonging to those formulations were assigned. The experimental results remained within the design space limit values. Consequently, optimal formulations with the critical quality attributes determined to achieve the Quality Target Product Profile were successfully obtained within the design space by following the QbD steps. Copyright © 2016 Elsevier B.V. All rights reserved.
Power law analysis of the human microbiome.
Ma, Zhanshan Sam
2015-11-01
Taylor's (1961, Nature, 189:732) power law, a power function (V = a·m^b) describing the scaling relationship between the mean (m) and variance (V) of population abundances of organisms, has been found to govern the population abundance distributions of single species in both space and time in macroecology. It is regarded as one of the few generalities in ecology, and its parameter b has been widely applied to characterize spatial aggregation (i.e. heterogeneity) and temporal stability of single-species populations. Here, we test its applicability to bacterial populations in the human microbiome using extensive data sets generated by the US-NIH Human Microbiome Project (HMP). We further propose extending Taylor's power law from the population to the community level, and accordingly introduce four types of power-law extensions (PLEs): type I PLE for community spatial aggregation (heterogeneity), type II PLE for community temporal aggregation (stability), type III PLE for mixed-species population spatial aggregation (heterogeneity) and type IV PLE for mixed-species population temporal aggregation (stability). Our results show that fittings to the four PLEs with HMP data were statistically extremely significant and their parameters are ecologically sound, hence confirming the validity of the power law at both the population and community levels. These findings not only provide a powerful tool to characterize the aggregations of population and community in both time and space, offering important insights into community heterogeneity in space and/or stability in time, but also underscore the three general properties of power laws (scale invariance, no average and universality) and their specific manifestations in our four PLEs. © 2015 John Wiley & Sons Ltd.
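Because Taylor's law V = a·m^b is linear on log-log axes (log V = log a + b·log m), the exponent b is typically estimated by least squares on log-transformed data; a minimal sketch on synthetic abundance data (the values and the b = 2 exponent are made up for illustration):

```python
import numpy as np

# Synthetic mean abundances whose variances follow V = a * m^b exactly
mean_abundances = np.array([1.0, 5.0, 20.0, 100.0, 400.0])
a_true, b_true = 0.5, 2.0
variances = a_true * mean_abundances ** b_true

# Fit log V = log a + b log m by ordinary least squares;
# np.polyfit returns [slope, intercept] for degree 1.
b_hat, log_a_hat = np.polyfit(np.log(mean_abundances), np.log(variances), 1)
a_hat = np.exp(log_a_hat)
```

With noisy real abundance data the same regression gives the b used to characterize aggregation and stability.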
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005
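The idea of solving steady-state equations for kinetic parameters rather than for states can be seen in a minimal two-species chain A →(k1) B →(k2) ∅; this toy model is ours for illustration, not the paper's graph-based algorithm. At steady state k1·A = k2·B, which is linear in k2 and has a single solution.

```python
# Toy network: A --k1--> B --k2--> 0, with A held fixed by a constant supply.
# dB/dt = k1*A - k2*B, so the steady-state constraint is k1*A = k2*B.

def k2_from_steady_state(k1, A_ss, B_ss):
    """Kinetic parameter implied by the steady-state constraint k1*A = k2*B.
    Linear in k2: a single, non-negative solution when A_ss, B_ss > 0."""
    return k1 * A_ss / B_ss

k1, A_ss, B_ss = 0.8, 2.0, 4.0
k2 = k2_from_steady_state(k1, A_ss, B_ss)

# Check that B is indeed stationary with this k2
dB_dt = k1 * A_ss - k2 * B_ss
```

Solving for a state (B) instead would be equally easy here, but in nonlinear networks it leads to the higher-order polynomial equations and multiple roots that the abstract describes.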
NASA Astrophysics Data System (ADS)
Salvucci, Guido D.; Gentine, Pierre
2013-04-01
The ability to predict terrestrial evapotranspiration (E) is limited by the complexity of rate-limiting pathways as water moves through the soil, vegetation (roots, xylem, stomata), canopy air space, and the atmospheric boundary layer. The impossibility of specifying the numerous parameters required to model this process in full spatial detail has necessitated spatially upscaled models that depend on effective parameters such as the surface vapor conductance (Csurf). Csurf accounts for the biophysical and hydrological effects on diffusion through the soil and vegetation substrate. This approach, however, requires either site-specific calibration of Csurf to measured E, or further parameterization based on metrics such as leaf area, senescence state, stomatal conductance, soil texture, soil moisture, and water table depth. Here, we show that this key, rate-limiting, parameter can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and E. The relation is that the vertical variance of the relative humidity profile is less than would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. It is found to hold over a wide range of climate conditions (arid-humid) and limiting factors (soil moisture, leaf area, energy). With this relation, estimates of E and Csurf can be obtained globally from widely available meteorological measurements, many of which have been archived since the early 1900s. In conjunction with precipitation and stream flow, long-term E estimates provide insights and empirical constraints on projected accelerations of the hydrologic cycle.
On Markov parameters in system identification
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
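For a discrete-time state-space model x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k), the Markov parameters are the pulse-response samples h(0) = D and h(k) = C A^(k-1) B for k ≥ 1. A minimal sketch (the 2-state system below is arbitrary):

```python
import numpy as np

def markov_parameters(A, B, C, D, n):
    """First n Markov parameters h(0)=D, h(k)=C A^(k-1) B
    of a discrete-time state-space model (A, B, C, D)."""
    params = [D]
    Ak_B = B  # holds A^(k-1) B as k advances
    for _ in range(1, n):
        params.append(C @ Ak_B)
        Ak_B = A @ Ak_B
    return params

A = np.array([[0.5, 1.0], [0.0, 0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
h = markov_parameters(A, B, C, D, 4)
```

The same sequence is what one reads off directly from sampled pulse-response data, which is the interpretation the abstract refers to.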
21SSD: a public data base of simulated 21-cm signals from the epoch of reionization
NASA Astrophysics Data System (ADS)
Semelin, B.; Eames, E.; Bolgar, F.; Caillat, M.
2017-12-01
The 21-cm signal from the epoch of reionization (EoR) is expected to be detected in the next few years, either with existing instruments or by the upcoming SKA and HERA projects. In this context, there is a pressing need for publicly available high-quality templates covering a wide range of possible signals. These are needed both for end-to-end simulations of the upcoming instruments and to develop signal analysis methods. We present such a set of templates, publicly available for download at 21ssd.obspm.fr. The database contains 21-cm brightness temperature lightcones at high and low resolution, and several derived statistical quantities for 45 models spanning our choice of 3D parameter space. These data are the result of fully coupled radiative hydrodynamic high-resolution (1024³) simulations performed with the LICORICE code. Both X-ray and Lyman line transfer are performed to account for heating and Wouthuysen-Field coupling fluctuations. We also present a first exploitation of the data using the power spectrum and the pixel distribution function (PDF) computed from lightcone data. We analyse how these two quantities behave when varying the model parameters while taking into account the thermal noise expected of a typical SKA survey. Finally, we show that the noiseless power spectrum and PDF have different - and somewhat complementary - abilities to distinguish between different models. This preliminary result will have to be expanded to the case including thermal noise. This type of result opens the door to formulating an optimal sampling of the parameter space, dependent on the chosen diagnostics.
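A spherically averaged power spectrum of a brightness-temperature cube, as used in such analyses, can be sketched with NumPy FFTs; the box size, binning, and normalization conventions below are illustrative assumptions, not the 21SSD conventions.

```python
import numpy as np

def spherical_power_spectrum(cube, box_size, n_bins=8):
    """Spherically averaged power spectrum P(k) of a cubic field.

    cube: (N, N, N) array; box_size: physical side length.
    Returns bin-center wavenumbers and the mean power in each bin."""
    n = cube.shape[0]
    vol = box_size ** 3
    delta_k = np.fft.fftn(cube) * (box_size / n) ** 3  # FT with cell-volume weight
    power = np.abs(delta_k) ** 2 / vol

    # |k| on the FFT grid
    k_axis = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k_axis, k_axis, k_axis, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)

    # Linear |k| bins (slightly padded so the corner mode is included)
    bins = np.linspace(0.0, k_mag.max() * 1.001, n_bins + 1)
    which = np.digitize(k_mag.ravel(), bins)
    pk = np.array([power.ravel()[which == i].mean() for i in range(1, n_bins + 1)])
    centers = 0.5 * (bins[1:] + bins[:-1])
    return centers, pk

rng = np.random.default_rng(1)
cube = rng.standard_normal((16, 16, 16))  # stand-in for a lightcone chunk
k, pk = spherical_power_spectrum(cube, box_size=100.0)
```

For real lightcone data one would also correct for the light-travel evolution along the line of sight, which this sketch ignores.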
Influence of Constraint in Parameter Space on Quantum Games
NASA Astrophysics Data System (ADS)
Zhao, Hai-Jun; Fang, Xi-Ming
2004-04-01
We study the influence of the constraint in the parameter space on quantum games. Decomposing SU(2) operator into product of three rotation operators and controlling one kind of them, we impose a constraint on the parameter space of the players' operator. We find that the constraint can provide a tuner to make the bilateral payoffs equal, so that the mismatch of the players' action at multi-equilibrium could be avoided. We also find that the game exhibits an intriguing structure as a function of the parameter of the controlled operators, which is useful for making game models.
Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects
2017-02-22
AFRL-AFOSR-UK-TR-2017-0023: Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects. Marco Martorella. Grant number FA9550-14-1-0183.
NASA Astrophysics Data System (ADS)
Ai, Yuewei; Shao, Xinyu; Jiang, Ping; Li, Peigen; Liu, Yang; Yue, Chen
2015-11-01
The welded joints of dissimilar materials have been widely used in automotive, ship and space industries. The joint quality is often evaluated by weld seam geometry, microstructures and mechanical properties. To obtain the desired weld seam geometry and improve the quality of welded joints, this paper proposes a process modeling and parameter optimization method to obtain the weld seam with minimum width and desired depth of penetration for laser butt welding of dissimilar materials. During the process, Taguchi experiments are conducted on the laser welding of the low carbon steel (Q235) and stainless steel (SUS301L-HT). The experimental results are used to develop the radial basis function neural network model, and the process parameters are optimized by genetic algorithm. The proposed method is validated by a confirmation experiment. Simultaneously, the microstructures and mechanical properties of the weld seam generated from optimal process parameters are further studied by optical microscopy and tensile strength test. Compared with the unoptimized weld seam, the welding defects are eliminated in the optimized weld seam and the mechanical properties are improved. The results show that the proposed method is effective and reliable for improving the quality of welded joints in practical production.
NASA Astrophysics Data System (ADS)
Casadei, D.
2014-10-01
The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as ``signal'' and ``background'' and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameter space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function reproduces the reference prior extremely well for any background prior. Thus, it can be useful in applications requiring the reference prior to be evaluated a very large number of times.
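The flat-prior vs reference-prior contrast can be illustrated in the textbook background-free special case (not the paper's full signal-plus-background treatment): for n observed Poisson counts, a flat prior on the mean gives a Gamma(n+1, 1) posterior, while the Jeffreys/reference prior ∝ 1/√s gives Gamma(n+1/2, 1).

```python
import math

def gamma_pdf(x, shape):
    """Density of Gamma(shape, scale=1): x**(shape-1) * exp(-x) / Gamma(shape)."""
    return x ** (shape - 1.0) * math.exp(-x) / math.gamma(shape)

n_obs = 5  # observed counts in a background-free experiment (illustrative)

def flat_prior_posterior(s):
    # Flat prior on the Poisson mean -> Gamma(n+1, 1) posterior, mean n+1
    return gamma_pdf(s, n_obs + 1.0)

def reference_prior_posterior(s):
    # Jeffreys/reference prior -> Gamma(n+1/2, 1) posterior, mean n+1/2
    return gamma_pdf(s, n_obs + 0.5)
```

The two posteriors differ by half a count in their means, a shift that matters most at low counts, which is consistent with the abstract's point that the flat prior is only adequate when the expected background (and hence the typical count) is high.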
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Cucinotta, F. A.; Sachs, R. K.; Brenner, D. J.; Peterson, L. E.
2001-01-01
The patterns of DSBs induced in the genome are different for sparsely and densely ionizing radiations: In the former case, the patterns are well described by a random-breakage model; in the latter, a more sophisticated tool is needed. We used a Monte Carlo algorithm with a random-walk geometry of chromatin, and a track structure defined by the radial distribution of energy deposition from an incident ion, to fit the PFGE data for fragment-size distribution after high-dose irradiation. These fits determined the unknown parameters of the model, enabling the extrapolation of data for high-dose irradiation to the low doses that are relevant for NASA space radiation research. The randomly-located-clusters formalism was used to speed the simulations. It was shown that only one adjustable parameter, Q, the track efficiency parameter, was necessary to predict DNA fragment sizes for wide ranges of doses. This parameter was determined for a variety of radiations and LETs and was used to predict the DSB patterns at the HPRT locus of the human X chromosome after low-dose irradiation. It was found that high-LET radiation would be more likely than low-LET radiation to induce additional DSBs within the HPRT gene if this gene already contained one DSB.
Prediction of Geomagnetic Activity and Key Parameters in High-Latitude Ionosphere-Basic Elements
NASA Technical Reports Server (NTRS)
Lyatsky, W.; Khazanov, G. V.
2007-01-01
Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the Space Weather program. Prediction reliability is dependent on the prediction method and elements included in the prediction scheme. Two main elements are a suitable geomagnetic activity index and coupling function -- the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity. The appropriate choice of these two elements is imperative for any reliable prediction model. The purpose of this work was to elaborate on these two elements -- the appropriate geomagnetic activity index and the coupling function -- and investigate the opportunity to improve the reliability of the prediction of geomagnetic activity and other events in the Earth's magnetosphere. The new polar magnetic index of geomagnetic activity and the new version of the coupling function lead to a significant increase in the reliability of predicting the geomagnetic activity and some key parameters, such as cross-polar cap voltage and total Joule heating in the high-latitude ionosphere, which play a very important role in the development of geomagnetic and other activity in the Earth's magnetosphere, and are widely used as key input parameters in modeling magnetospheric, ionospheric, and thermospheric processes.
Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks
NASA Astrophysics Data System (ADS)
Zhu, Shijia; Wang, Yadong
2015-12-01
Dynamic Bayesian Networks (DBN) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Its standard assumption is ‘stationarity’, and therefore, several research efforts have been recently proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of a Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces the search space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which can help significantly improve the reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates consistently high prediction accuracy and significantly improved computational efficiency, even with no prior knowledge and parameter settings.
Growth and characterization of struvite-Na crystals
NASA Astrophysics Data System (ADS)
Chauhan, Chetan K.; Joshi, Mihirkumar J.
2014-09-01
Sodium magnesium phosphate heptahydrate [NaMgPO4·7H2O], also known as struvite-Na, is the sodium analog of struvite. Among phosphate-containing bio-minerals, struvite has attracted considerable attention because of its common occurrence in a wide variety of environments. Struvite-family crystals have been found as urinary calculi in humans and animals. Struvite-Na crystals were grown by a single diffusion gel growth technique in a silica hydro gel medium. Struvite-Na crystals with different morphologies having transparent to translucent diaphaneity were grown with different growth parameters. The phenomenon of Liesegang rings was also observed with some particular growth parameters. The powder XRD study confirmed the structural similarity of the grown struvite-Na crystals with struvite and found that struvite-Na crystallized in the orthorhombic Pmn21 space group with unit cell parameters a = 6.893 Å, b = 6.124 Å, c = 11.150 Å, and α = β = γ = 90°. FT-IR spectra of struvite-Na crystals revealed the presence of functional groups. TGA, DTA and DSC were carried out simultaneously. The kinetic and thermodynamic parameters of the dehydration/decomposition process were calculated. The variation of dielectric constant with frequency of applied field was studied in the range from 400 Hz to 100 kHz.
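For an orthorhombic cell (α = β = γ = 90°) the unit-cell volume is simply V = a·b·c; a quick check with the reported lattice parameters:

```python
# Reported struvite-Na lattice parameters (orthorhombic, all angles 90 deg)
a, b, c = 6.893, 6.124, 11.150  # in angstroms

# Orthorhombic cell volume: V = a * b * c, in cubic angstroms
volume = a * b * c  # ≈ 470.7 Å^3
```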
Dynamical structure of magnetized dissipative accretion flow around black holes
NASA Astrophysics Data System (ADS)
Sarkar, Biplob; Das, Santabrata
2016-09-01
We study the global structure of optically thin, advection dominated, magnetized accretion flow around black holes. We consider the magnetic field to be turbulent in nature and dominated by the toroidal component. With this, we obtain the complete set of accretion solutions for dissipative flows where the bremsstrahlung process is regarded as the dominant cooling mechanism. We show that rotating magnetized accretion flow experiences a virtual barrier around the black hole due to centrifugal repulsion that can trigger the discontinuous transition of the flow variables in the form of shock waves. We examine the properties of the shock waves and find that the dynamics of the post-shock corona (PSC) is controlled by the flow parameters, namely viscosity, cooling rate and strength of the magnetic field, respectively. We separate the effective region of the parameter space for standing shocks and observe that shocks can form for a wide range of flow parameters. We obtain the critical viscosity parameter that allows global accretion solutions including shocks. We estimate the energy dissipation at the PSC from where a part of the accreting matter can deflect as outflows and jets. We compare the maximum energy that could be extracted from the PSC and the observed radio luminosity values for several supermassive black hole sources, and the observational implications of our present analysis are discussed.
Rocket ascent G-limited moment-balanced optimization program (RAGMOP)
NASA Technical Reports Server (NTRS)
Lyons, J. T.; Woltosz, W. S.; Abercrombie, G. E.; Gottlieb, R. G.
1972-01-01
This document describes the RAGMOP (Rocket Ascent G-limited Moment-Balanced Optimization Program) computer program for parametric ascent trajectory optimization. RAGMOP computes optimum polynomial-form attitude control histories, launch azimuth, engine burn-time, and gross liftoff weight for space shuttle type vehicles using a search-accelerated, gradient projection parameter optimization technique. The trajectory model available in RAGMOP includes a rotating oblate earth model, the option of input wind tables, discrete and/or continuous throttling for the purposes of limiting the thrust acceleration and/or the maximum dynamic pressure, limitation of the structural load indicators (the product of dynamic pressure with angle-of-attack and sideslip angle), and a wide selection of intermediate and terminal equality constraints.
Quantum detection of wormholes.
Sabín, Carlos
2017-04-06
We show how to use quantum metrology to detect a wormhole. A coherent state of the electromagnetic field experiences a phase shift with a slight dependence on the throat radius of a possible distant wormhole. We show that this tiny correction is, in principle, detectable by homodyne measurements after long propagation lengths for a wide range of throat radii and distances to the wormhole, even if the detection takes place very far away from the throat, where the spacetime is very close to a flat geometry. We use realistic parameters from state-of-the-art long-baseline laser interferometry, both Earth-based and space-borne. The scheme is, in principle, robust to optical losses and initial mixedness.
High-energy tail distributions and resonant wave particle interaction
NASA Technical Reports Server (NTRS)
Leubner, M. P.
1983-01-01
High-energy tail distributions (k distributions) are used as an alternative to a bi-Lorentzian distribution to study the influence of energetic protons on the right- and left-hand cyclotron modes in a hot two-temperature plasma. Although the parameters are chosen to be in a range appropriate to solar wind or magnetospheric configurations, the results are not restricted to specific space plasmas. The presence of energetic particles significantly alters the behavior of the electromagnetic ion cyclotron modes, leading to a wide range of unstable frequencies and increased growth rates. From the strongly enhanced growth rates it can be concluded that high-energy tail distributions should not show major temperature anisotropies, which is consistent with observations.
Investigation of Vibrational Control of the Bridgman Crystal Growth Technique
NASA Technical Reports Server (NTRS)
Fedoseyev, Alexandre I.; Alexander, J. I. D.; Feigelson, R. S.; Zharikov, E. V.; Ostrogorsky, A. G.; Marin, C.; Volz, M. P.; Kansa, E. J.; Friedman, M. J.
2001-01-01
The character of natural buoyant convection in rigidly contained inhomogeneous fluids can be drastically altered by vibrating the container. Vibrations are expected to have a crucial influence on heat and mass transfer onboard the International Space Station (ISS). It is becoming evident that substantial vibrations will exist on the ISS across a wide frequency spectrum. In general, vibrational flows are very complex and governed by many parameters. In many terrestrial crystal growth situations, convective transport of heat and constituent components is dominated by buoyancy driven convection arising from compositional and thermal gradients. Thus, it may be concluded that vibro-convective flow can potentially be used to influence and even control transport in some crystal growth situations.
The spatial-temporal ambiguity in auroral modeling
NASA Technical Reports Server (NTRS)
Rees, M. H.; Roble, R. G.; Kopp, J.; Abreu, V. J.; Rusch, D. W.; Brace, L. H.; Brinton, H. C.; Hoffman, R. A.; Heelis, R. A.; Kayser, D. C.
1980-01-01
The paper examines the time-dependent models of the aurora which show that various ionospheric parameters respond to the onset of auroral ionization with different time histories. A pass of the Atmosphere Explorer C satellite over Poker Flat, Alaska, and ground based photometric and photographic observations have been used to resolve the time-space ambiguity of a specific auroral event. The density of the O(+), NO(+), O2(+), and N2(+) ions, the electron density, and the electron temperature observed at 280 km altitude in a 50 km wide segment of an auroral arc are predicted by the model if particle precipitation into the region commenced about 11 min prior to the overpass.
Time-resolved brightness measurements by streaking
NASA Astrophysics Data System (ADS)
Torrance, Joshua S.; Speirs, Rory W.; McCulloch, Andrew J.; Scholten, Robert E.
2018-03-01
Brightness is a key figure of merit for charged particle beams, and time-resolved brightness measurements can elucidate the processes involved in beam creation and manipulation. Here we report on a simple, robust, and widely applicable method for the measurement of beam brightness with temporal resolution by streaking one-dimensional pepperpots, and demonstrate the technique to characterize electron bunches produced from a cold-atom electron source. We demonstrate brightness measurements with 145 ps temporal resolution and a minimum resolvable emittance of 40 nm rad. This technique provides an efficient method of exploring source parameters and will prove useful for examining the efficacy of techniques to counter space-charge expansion, a critical hurdle to achieving single-shot imaging of atomic scale targets.
NASA Technical Reports Server (NTRS)
Sauer, Carl G., Jr.
1989-01-01
A patched conic trajectory optimization program MIDAS is described that was developed to investigate a wide variety of complex ballistic heliocentric transfer trajectories. MIDAS includes the capability of optimizing trajectory event times such as departure date, arrival date, and intermediate planetary flyby dates and is able to both add and delete deep space maneuvers when dictated by the optimization process. Both powered and unpowered flyby or gravity assist trajectories of intermediate bodies can be handled and capability is included to optimize trajectories having a rendezvous with an intermediate body such as for a sample return mission. Capability is included in the optimization process to constrain launch energy and launch vehicle parking orbit parameters.
NASA Technical Reports Server (NTRS)
Posner, Arik; Hesse, Michael; SaintCyr, Chris
2014-01-01
Space weather forecasting critically depends upon availability of timely and reliable observational data. It is therefore particularly important to understand how existing and newly planned observational assets perform during periods of severe space weather. Extreme space weather creates challenging conditions under which instrumentation and spacecraft may be impeded or in which parameters reach values that are outside the nominal observational range. This paper analyzes existing and upcoming observational capabilities for forecasting, and discusses how the findings may impact space weather research and its transition to operations. The single limitation of the assessment is the lack of information provided to us on radiation monitor performance, which prevented a full assessment of radiation storm forecasting (specifically, short-term forecasting). The assessment finds that at least two widely spaced coronagraphs including L4 would provide reliability for Earth-bound CMEs. Furthermore, all magnetic field measurements assessed fully meet requirements. However, with current or even with near term new assets in place, in the worst-case scenario there could be a near-complete lack of key near-real-time solar wind plasma data on severe disturbances heading toward and impacting Earth's magnetosphere. Models that attempt to simulate the effects of these disturbances in near real time or with archival data require solar wind plasma observations as input. Moreover, the study finds that near-future observational assets will be less capable of advancing the understanding of extreme geomagnetic disturbances at Earth, which might make the resulting space weather models unsuitable for transition to operations.
NASA Technical Reports Server (NTRS)
Fletcher, Lauren E.; Aldridge, Ann M.; Wheelwright, Charles; Maida, James
1997-01-01
Task illumination has a major impact on human performance: What a person can perceive in his environment significantly affects his ability to perform tasks, especially in space's harsh environment. Training for lighting conditions in space has long depended on physical models and simulations to emulate the effect of lighting, but such tests are expensive and time-consuming. To evaluate lighting conditions not easily simulated on Earth, personnel at NASA Johnson Space Center's (JSC) Graphics Research and Analysis Facility (GRAF) have been developing computerized simulations of various illumination conditions using the ray-tracing program Radiance, developed by Greg Ward at Lawrence Berkeley Laboratory. Because these computer simulations are only as accurate as the data used, accurate information about the reflectance properties of materials and light distributions is needed. JSC's Lighting Environment Test Facility (LETF) personnel gathered material reflectance properties for a large number of paints, metals, and cloths used in the Space Shuttle and Space Station programs, and processed these data into the reflectance parameters needed for the computer simulations. They also gathered lamp distribution data for most of the light sources used, and validated the ability to accurately simulate lighting levels by comparing predictions with measurements for several ground-based tests. The result of this study is a database of material reflectance properties for a wide variety of materials, and lighting information for most of the standard light sources used in the Shuttle/Station programs. The combination of the Radiance program and GRAF's graphics capability forms a validated computerized lighting simulation capability for NASA.
Reducing the Knowledge Tracing Space
ERIC Educational Resources Information Center
Ritter, Steven; Harris, Thomas K.; Nixon, Tristan; Dickison, Daniel; Murray, R. Charles; Towle, Brendon
2009-01-01
In Cognitive Tutors, student skill is represented by estimates of student knowledge on various knowledge components. The estimate for each knowledge component is based on a four-parameter model developed by Corbett and Anderson [Nb]. In this paper, we investigate the nature of the parameter space defined by these four parameters by modeling data…
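The four-parameter Corbett-Anderson model referred to is Bayesian Knowledge Tracing, whose parameters are the initial probability of mastery, the learn rate, and the guess and slip rates. A minimal sketch of one tracing step follows; the parameter values used below are hypothetical illustrations, not fits from the paper:

```python
def bkt_update(p_known, correct, p_learn, p_guess, p_slip):
    """One Bayesian Knowledge Tracing step with the four standard
    parameters: prior mastery p_known, learn rate, guess and slip rates."""
    if correct:
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # mastery may also be acquired on this practice opportunity
    return posterior + (1 - posterior) * p_learn

p = 0.2  # p-init: initial probability the knowledge component is mastered
for outcome in (True, True, False, True):
    p = bkt_update(p, outcome, p_learn=0.1, p_guess=0.25, p_slip=0.1)
print(round(p, 3))  # → 0.765
```

Each observed response shifts the mastery estimate via Bayes' rule before the learn-rate transition is applied, which is why the estimate recovers quickly after the single incorrect response.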
ERIC Educational Resources Information Center
Ramin, John E.
2011-01-01
The purpose of this study was to explore the effectiveness of implementing Life Space Crisis Intervention as a school-wide strategy for reducing school violence. Life Space Crisis Intervention (LSCI) is a strength-based verbal interaction strategy (Long, Fecser, Wood, 2001). LSCI utilizes naturally occurring crisis situations as teachable…
Hydroponic cultivation of soybean for Bioregenerative Life Support Systems (BLSSs)
NASA Astrophysics Data System (ADS)
De Pascale, Stefania; De Micco, Veronica; Aronne, Giovanna; Paradiso, Roberta
For a long time our research group has been involved in experiments aimed at evaluating the possibility of cultivating plants in space to regenerate resources and produce food. Apart from investigating the response of specific growth processes (at morpho-functional levels) to space factors (namely microgravity and ionising radiation), wide attention has been dedicated to agro-technologies applied to ecologically closed systems. Based on technical and human dietary requirements, soybean [Glycine max (L.) Merr.] is studied as one of the candidate species for hydroponic (soilless) cultivation in the research program MELiSSA (Micro-Ecological Life Support System Alternative) of the European Space Agency (ESA). Soybean seeds have high nutritional value, due to their relevant content of protein, lipids, dietary fiber and biologically active substances such as isoflavones. They can produce fresh sprouts or be transformed into several edible products (soymilk and okara, or soy pulp). Soybean is traditionally grown in the open field, where specific interactions with soil microorganisms occur. Most available information on plant growth, seed productivity and nutrient composition relates to cultivated varieties (cultivars) selected for soil cultivation. However, in a space outpost, plant cultivation would rely on soilless systems. Given that plant growth, seed yield and quality strictly depend on the environmental conditions, making the cultivation of soybean in space successful required screening all agronomic information against space constraints. Indeed, selected cultivars have to comply with the space growth environment while providing a nutritional quality suitable to fulfill the astronauts' needs. We proposed an objective criterion for the preliminary theoretical selection of the most suitable cultivars for seed production, which were subsequently evaluated in bench tests in hydroponics.
Several space-oriented experiments were carried out in a closed growth chamber to evaluate the adaptation of soybean plants to hydroponics under a controlled environment, as well as the plant response to changing cultural parameters, in order to identify the best cultivation protocol for BLSSs. The optimisation of growth conditions in hydroponics has been pursued in the awareness that environmental factors acting at sub-optimal levels may also increase the sensitivity of plants to space factors. The influence of the following parameters on plant growth and yield was also studied: the hydroponic system (liquid solution alone, i.e. the Nutrient Film Technique, NFT, vs a solid substrate, rockwool); the source of nitrogen in the nutrient solution (nitrate fertilizers vs urea); root symbiosis with atmospheric nitrogen-fixing bacteria (absence or presence of Bradyrhizobium japonicum); and the influence of microbes in the rhizosphere (inoculation with a mix containing mycorrhizal and Trichoderma species and beneficial bacteria vs a non-inoculated control). All the treatments were evaluated in terms of agronomic traits (e.g. plant size and seed production), physiological traits (gas exchange, nutrient uptake), chemical composition of seeds and their products, and technical parameters such as resource use efficiency and non-edible biomass production (waste).
Determination of the Parameter Sets for the Best Performance of IPS-driven ENLIL Model
NASA Astrophysics Data System (ADS)
Yun, Jongyeon; Choi, Kyu-Cheol; Yi, Jonghyuk; Kim, Jaehun; Odstrcil, Dusan
2016-12-01
The interplanetary scintillation-driven (IPS-driven) ENLIL model was jointly developed by the University of California, San Diego (UCSD) and the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The model has been operated by the Korean Space Weather Center (KSWC) since 2014. The IPS-driven ENLIL model has a variety of ambient solar wind parameters, and the results of the model depend on the combination of these parameters. We have conducted research to determine the best combination of parameters to improve the performance of the IPS-driven ENLIL model. The model results for 1,440 combinations of input parameters are compared with Advanced Composition Explorer (ACE) observation data. In this way, the top 10 parameter sets showing the best performance were determined. Finally, the characteristics of these parameter sets were analyzed, and the application of the results to the IPS-driven ENLIL model is discussed.
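The exhaustive comparison described, running the model over every parameter combination and ranking combinations by agreement with observations, can be sketched generically. The toy model, grid, and RMSE scoring below are stand-ins of our own, not the actual ENLIL parameters or the paper's validation metric:

```python
import itertools
import math

def rmse(pred, obs):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def rank_parameter_sets(model, observations, grid, top_n=10):
    """Run `model` for every combination in `grid` (a dict of
    name -> candidate values) and rank combinations by RMSE, best first."""
    scored = []
    for combo in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        scored.append((rmse(model(params), observations), params))
    scored.sort(key=lambda t: t[0])
    return scored[:top_n]

# Hypothetical stand-in: predicted solar wind speed as scaled/offset baseline
obs = [400.0, 420.0, 410.0]
grid = {"scale": [0.9, 1.0, 1.1], "offset": [-10.0, 0.0, 10.0]}
model = lambda p: [p["scale"] * 410.0 + p["offset"] for _ in obs]
best = rank_parameter_sets(model, obs, grid, top_n=3)
print(best[0][1])  # → {'scale': 1.0, 'offset': 0.0}
```

With 1,440 combinations the same pattern applies; only the model runs and the observation time series change.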
Unequal density effect on static structure factor of coupled electron layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saini, L. K., E-mail: lks@ashd.svnit.ac.in; Nayak, Mukesh G., E-mail: lks@ashd.svnit.ac.in
In order to understand the ordered phase, if any, in real coupled electron layers (CEL), there is a need to take into account the effect of unequal layer density. Such a phase is signaled by a strong peak in the static structure factor. With the aid of the quantum/dynamical version of the Singwi, Tosi, Land and Sjölander (so-called qSTLS) approximation, we have calculated the intra- and interlayer static structure factors, S_ll(q) and S_12(q), over a wide range of the density parameter r_s1 and interlayer spacing d. In our present study, a sharp peak in S_22(q) has been found at critical density with sufficiently small interlayer spacing. Further, to find the resultant effect of unequal density on the intra- and interlayer static structure factors, we have compared our results with those for the recent CEL system with equal layer density and an isolated single electron layer.
Adaptive servo control for umbilical mating
NASA Technical Reports Server (NTRS)
Zia, Omar
1988-01-01
Robotic applications at Kennedy Space Center are unique and in many cases require the fine positioning of heavy loads in dynamic environments. Performing such operations is beyond the capabilities of an off-the-shelf industrial robot. Therefore, the Robotics Applications Development Laboratory at Kennedy Space Center has put together an integrated system that coordinates a state-of-the-art robotic system, providing an excellent, easy-to-use testbed for NASA sensor integration experiments. This paper reviews ways of improving the dynamic response of the robot operating under force feedback with varying dynamic internal perturbations, in order to provide continuous stable operation under variable load conditions. The goal is to improve the stability of the system with force feedback, using the adaptive control feature of the existing system, over a wide range of random motions. The effect of load variations on the dynamics and the transfer function (the order or the values of its parameters) of the system has been investigated, and more accurate models of the system have been determined and analyzed.
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
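Power-law scaling of a second moment with averaging scale appears as a straight line on log-log axes, so the prefactor and exponent can be estimated by least squares on the logs. A sketch with synthetic data (the exponent -0.5 is a hypothetical choice, not the paper's fitted value):

```python
import math

def fit_power_law(scales, second_moments):
    """Least-squares fit of log(m2) = log(a) + b*log(L); returns (a, b).
    A power law m2 = a * L**b is linear in log-log coordinates."""
    xs = [math.log(L) for L in scales]
    ys = [math.log(m) for m in second_moments]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic second moments following m2 = 3 * L**-0.5 exactly
scales = [2.0, 4.0, 8.0, 16.0, 32.0]
m2 = [3.0 * L ** -0.5 for L in scales]
a, b = fit_power_law(scales, m2)
print(round(a, 3), round(b, 3))  # → 3.0 -0.5
```

Real radar-derived moments scatter about the line, so the fitted exponent carries an uncertainty that a full analysis would also report.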
Magneto-thermal reconnection of significance to space and astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coppi, B., E-mail: coppi@psfc.mit.edu
Magnetic reconnection processes that can be excited in collisionless plasma regimes are of interest to space and astrophysics to the extent that the layers in which reconnection takes place are not rendered unrealistically small by their unfavorable dependence on relevant macroscopic distances. The equations describing new modes producing magnetic reconnection over relatively small but significant distances, unlike tearing types of mode, even when dealing with large macroscopic scale lengths, are given. The considered modes are associated with a finite electron temperature gradient and have a phase velocity in the direction of the electron diamagnetic velocity that can reverse to the opposite direction as relevant parameters are varied over a relatively wide range. The electron temperature perturbation has a primary role in the relevant theory. In particular, when referring to regimes in which the longitudinal (to the magnetic field) electron thermal conductivity is relatively large, the electron temperature perturbation becomes singular if the ratio of the transverse to the longitudinal electron thermal conductivity becomes negligible.
Improvements in Space Surveillance Processing for Wide Field of View Optical Sensors
NASA Astrophysics Data System (ADS)
Sydney, P.; Wetterer, C.
2014-09-01
For more than a decade, an autonomous satellite tracking system at the Air Force Maui Optical and Supercomputing (AMOS) observatory has been generating routine astrometric measurements of Earth-orbiting Resident Space Objects (RSOs) using small commercial telescopes and sensors. Recent work has focused on developing an improved processing system, enhancing measurement performance and response while supporting other sensor systems and missions. This paper will outline improved techniques in scheduling, detection, astrometric and photometric measurements, and catalog maintenance. The processing system now integrates with Special Perturbation (SP) based astrodynamics algorithms, allowing covariance-based scheduling and more precise orbital estimates and object identification. A merit-based scheduling algorithm provides a global optimization framework to support diverse collection tasks and missions. The detection algorithms support a range of target tracking and camera acquisition rates. New comprehensive star catalogs allow for more precise astrometric and photometric calibrations including differential photometry for monitoring environmental changes. This paper will also examine measurement performance with varying tracking rates and acquisition parameters.
Ringe, Stefan; Oberhofer, Harald; Hille, Christoph; Matera, Sebastian; Reuter, Karsten
2016-08-09
The size-modified Poisson-Boltzmann (MPB) equation is an efficient implicit solvation model which also captures electrolytic solvent effects. It combines an account of the dielectric solvent response with a mean-field description of solvated finite-sized ions. We present a general solution scheme for the MPB equation based on a fast function-space-oriented Newton method and a Green's function preconditioned iterative linear solver. In contrast to popular multigrid solvers, this approach allows us to fully exploit specialized integration grids and optimized integration schemes. We describe a corresponding numerically efficient implementation for the full-potential density-functional theory (DFT) code FHI-aims. We show that together with an additional Stern layer correction the DFT+MPB approach can describe the mean activity coefficient of a KCl aqueous solution over a wide range of concentrations. The high sensitivity of the calculated activity coefficient to the employed ionic parameters thereby suggests using extensively tabulated experimental activity coefficients of salt solutions for a systematic parametrization protocol.
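The Newton approach can be illustrated on the much simpler dimensionless 1D Poisson-Boltzmann equation for a symmetric electrolyte, phi'' = sinh(phi). This toy sketch is our own stand-in: the paper's actual solver works in function space with a Green's-function preconditioner inside FHI-aims, not with the dense finite-difference Jacobian used here:

```python
import numpy as np

def solve_pb_1d(phi0, n=200, length=10.0, tol=1e-10, max_iter=50):
    """Newton's method for the dimensionless 1D Poisson-Boltzmann equation
    phi'' = sinh(phi), with phi(0) = phi0 and phi(length) = 0, discretized
    by central differences on a uniform grid of n interior points."""
    h = length / (n + 1)
    phi = np.linspace(phi0, 0.0, n + 2)   # initial guess incl. boundaries
    for _ in range(max_iter):
        inner = phi[1:-1]
        # residual of the finite-difference equations at interior nodes
        res = (phi[:-2] - 2 * inner + phi[2:]) / h**2 - np.sinh(inner)
        if np.max(np.abs(res)) < tol:
            break
        # tridiagonal Jacobian d(res_i)/d(phi_j), assembled densely here
        J = (np.diag(-2.0 / h**2 - np.cosh(inner))
             + np.diag(np.full(n - 1, 1.0 / h**2), 1)
             + np.diag(np.full(n - 1, 1.0 / h**2), -1))
        phi[1:-1] += np.linalg.solve(J, -res)
    return phi

phi = solve_pb_1d(phi0=2.0)
print(round(float(phi[1]), 3))  # potential decays away from the wall
```

The sinh nonlinearity makes the Jacobian diagonal grow with cosh(phi), which is what motivates good preconditioning in the full 3D, size-modified setting.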
Space station impact experiments
NASA Technical Reports Server (NTRS)
Schultz, P.; Ahrens, T.; Alexander, W. M.; Cintala, M.; Gault, D.; Greeley, R.; Hawke, B. R.; Housen, K.; Schmidt, R.
1986-01-01
Four processes serve to illustrate potential areas of study and their implications for general problems in planetary science. First, accretional processes reflect the success of collisional aggregation over collisional destruction during the early history of the solar system. Second, both catastrophic and less severe effects of impacts on planetary bodies surviving from the time of the early solar system may be expressed by asteroid/planetary spin rates, spin orientations, asteroid size distributions, and perhaps the origin of the Moon. Third, the surfaces of planetary bodies directly record the effects of impacts in the form of craters; these records have wide-ranging implications. Fourth, regolith evolution of asteroidal surfaces is a consequence of cumulative impacts, but the absence of a significant gravity term may profoundly affect the retention of shocked fractions and agglutinate build-up, thereby biasing interpretations of spectral reflectance data. An impact facility on the Space Station would provide the controlled conditions necessary to explore such processes either through direct simulation of conditions or indirect simulation of certain parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fabrycky, Daniel C.; Lissauer, Jack J.; Ragozzine, Darin
Having discovered 885 planet candidates in 361 multiple-planet systems, Kepler has made transits a powerful method for studying the statistics of planetary systems. The orbits of only two pairs of planets in these candidate systems are apparently unstable. This indicates that a high percentage of the candidate systems are truly planets orbiting the same star, motivating physical investigations of the population. Pairs of planets in this sample are typically not in orbital resonances. However, pairs with orbital period ratios within a few percent of a first-order resonance (e.g. 2:1, 3:2) prefer orbital spacings just wide of the resonance and avoid spacings just narrow of the resonance. Finally, we investigate mutual inclinations based on transit duration ratios. We infer that the inner planets of pairs tend to have a smaller impact parameter than their outer companions, suggesting these planetary systems are typically coplanar to within a few degrees.
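Proximity to a first-order resonance is conveniently expressed as the fractional offset of the observed period ratio from the nominal (j+1):j commensurability; positive offsets correspond to pairs "just wide" of the resonance. A minimal sketch (the example periods are hypothetical, not Kepler candidates):

```python
def resonance_offset(p_inner_days, p_outer_days, j):
    """Fractional offset of a planet pair's period ratio from the
    (j+1):j first-order mean-motion resonance. Positive values mean the
    pair sits just wide of the resonance, negative just narrow of it."""
    ratio = p_outer_days / p_inner_days
    nominal = (j + 1) / j
    return ratio / nominal - 1.0

# A hypothetical pair lying just wide of the 2:1 (j = 1) commensurability
print(round(resonance_offset(10.0, 20.3, j=1), 3))  # → 0.015
```

Histogramming this offset over all near-resonant candidate pairs is what reveals the asymmetry described above.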
NASA Technical Reports Server (NTRS)
Jones, L. D.
1979-01-01
The Space Environment Test Division Post-Test Data Reduction Program processes data from test history tapes generated on the Flexible Data System in the Space Environment Simulation Laboratory at the National Aeronautics and Space Administration/Lyndon B. Johnson Space Center. The program reads the tape's data base records to retrieve the item directory conversion file, the item capture file and the process link file to determine the active parameters. The desired parameter names are read in by lead cards after which the periodic data records are read to determine parameter data level changes. The data is considered to be compressed rather than full sample rate. Tabulations and/or a tape for generating plots may be output.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping, spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter.
However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as a MR compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall the PET subsystem exhibits excellent resolution characteristics mainly due to the fact that the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and due to basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.
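The generalized Kaiser-Bessel basis function ("blob") of order m has the standard radial profile b(r) proportional to (1 - (r/a)^2)^(m/2) * I_m(alpha * sqrt(1 - (r/a)^2)) inside its support radius a, where I_m is a modified Bessel function. A sketch normalized to 1 at the center; the radius, alpha, and m values below are illustrative choices, not the scanner's actual reconstruction settings:

```python
import math
from scipy.special import iv  # modified Bessel function of the first kind

def kaiser_bessel_blob(r, radius=2.0, alpha=10.4, m=2):
    """Generalized Kaiser-Bessel basis function of order m at radial
    distance r, normalized so that the value at r = 0 is 1. Returns 0
    outside the support radius."""
    if r > radius:
        return 0.0
    z = math.sqrt(1.0 - (r / radius) ** 2)
    return (z ** m) * iv(m, alpha * z) / iv(m, alpha)

print(round(kaiser_bessel_blob(0.0), 3))  # → 1.0 (peak at the center)
```

Increasing the shape parameter alpha concentrates the blob toward its center, which is the lever behind the resolution trade-offs the study maps out.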
Rosenbluth, J.; Schiff, R.
2008-01-01
Antiglycolipid IgM antibodies are known to induce formation of ‘wide-spaced' or ‘expanded' myelin, a distinctive form of dysmyelination characterized by a repeat period ~2× or 3× normal, seen also in diseases including multiple sclerosis. To determine whether an antibody directed against a myelin protein would cause equivalent pathology, we implanted O10 hybridoma cells into the spinal cord of adult or juvenile rats. O10 produces an IgM directed against PLP, the major protein of CNS myelin. Subsequent examination of the cords showed focal demyelination and remyelination. In addition, however, some juvenile cords, but none of the adults, displayed wide-spaced myelin with lamellae separated by an extracellular material composed of elements consistent in appearance with IgM molecules. Wide spacing tended to involve the outer layers of the sheath and in some cases alternated with normally spaced lamellae. A feature not seen previously consists of multiple expanded myelin lamellae in one sector of a sheath continuous with normally spaced lamellae in another, resulting in variation in sheath thickness around the axonal circumference. This uneven distribution of wide-spaced lamellae is most simply explained by incorporation of IgM molecules into immature sheaths during myelin formation and implies a model of CNS myelinogenesis more complex than simple spiraling. The periaxonal space never displays widening of this kind, but the interface with adjacent myelin sheaths or oligodendrocytes may. Thus, wide spacing appears to require that IgM molecules bridge between two PLP-containing membranes and does not reflect the mere presence of immunoglobulin within the extracellular space. PMID:18951490
Cloud GPU-based simulations for SQUAREMR.
Kantasis, George; Xanthis, Christos G; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H
2017-01-01
Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min, which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim of this study was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms.
The developed cloud-based cluster and the optimization of the parameter set reduced the execution time of the simulations involved in constructing the SQUAREMR multi-parametric database, thus bringing SQUAREMR's applicability within time frames that would likely be acceptable in the clinic. Copyright © 2016 Elsevier Inc. All rights reserved.
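The reported figures imply a parallel efficiency that is easy to check: a 13.5× speedup on 16 nodes corresponds to roughly 84% of ideal linear scaling. A one-liner sketch of the calculation:

```python
def parallel_efficiency(speedup, nodes):
    """Parallel efficiency: achieved speedup divided by the ideal
    (linear) speedup for the given node count."""
    return speedup / nodes

# Reported figures: up to 13.5x speedup on the 16-node cluster
print(round(parallel_efficiency(13.5, 16), 3))  # → 0.844
```

The gap to 1.0 reflects communication and load-balancing overhead in distributing the simulation database construction across nodes.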
Space Shuttle Pad Exposure Period Meteorological Parameters STS-1 Through STS-107
NASA Technical Reports Server (NTRS)
Overbey, B. G.; Roberts, B. C.
2005-01-01
During the 113 missions of the Space Transportation System (STS) to date, the Space Shuttle fleet has been exposed to the elements on the launch pad for approx. 4,195 days. The Natural Environments Branch at Marshall Space Flight Center archives atmospheric environments to which the Space Shuttle vehicles are exposed. This Technical Memorandum (TM) provides a summary of the historical record of the meteorological conditions encountered by the Space Shuttle fleet during the pad exposure period. Parameters included in this TM are temperature, relative humidity, wind speed, wind direction, sea level pressure, and precipitation. Extremes for each of these parameters for each mission are also summarized. Sources for the data include meteorological towers and hourly surface observations. Data are provided from the first launch of the STS in 1981 through the launch of STS-107 in 2003.
Optimal Constellation Design for Maximum Continuous Coverage of Targets Against a Space Background
2012-05-31
A constellation is considered with the properties shown in Table 13; the parameter hres refers to the number of equally spaced offset planes. [Recoverable table entries: maximum spread in mean anomaly, 180°; M0i, mean anomaly of the lead satellite at epoch, 0°; R, omni-directional sensor range, 5000 km; m, initial polygon resolution, 50.] An idealized Walker Star version of the Iridium constellation is also considered, with parameters shown in Table 14; there too, hres refers to the number of equally spaced offset planes.
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.
2013-01-01
Nondimensional linear-bifurcation buckling equations for balanced, symmetrically laminated cylinders with negligible shell-wall anisotropies and subjected to uniform axial compression loads are presented. These equations are solved exactly for the practical case of simply supported ends. The buckling behavior is characterized by nondimensional quantities consisting of a stiffness-weighted length-to-radius parameter, a stiffness-weighted shell-thinness parameter, a shell-wall nonhomogeneity parameter, two orthotropy parameters, and a nondimensional buckling load. Ranges for the nondimensional parameters are established that encompass a wide range of laminated-wall constructions, and numerous generic plots of nondimensional buckling load versus the stiffness-weighted length-to-radius ratio are presented for various combinations of the other parameters. These plots are expected to include many practical cases of interest to designers. Additionally, these plots show how the parameter values affect the distribution and size of the festoons forming each response curve and how they affect the attenuation of each response curve to the corresponding solution for an infinitely long cylinder. To aid in preliminary design studies, approximate formulas for the nondimensional buckling load are derived and validated against the corresponding exact solution; these formulas give the attenuated buckling response of an infinitely long cylinder in terms of the nondimensional parameters presented herein. A relatively small number of "master curves" are identified that give a nondimensional measure of the buckling load of an infinitely long cylinder as a function of the orthotropy and wall inhomogeneity parameters. These curves reduce greatly the complexity of the design-variable space as compared to representations that use dimensional quantities as design variables.
As a result of their inherent simplicity, these master curves are anticipated to be useful in the ongoing development of buckling-design technology.
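For orientation, the isotropic limit of this problem has the classical closed-form critical stress sigma_cr = E*t / (R*sqrt(3*(1 - nu^2))). The sketch below evaluates that special case only, with hypothetical material values; it is background context, not the paper's laminated, orthotropic formulation:

```python
import math

def classical_axial_buckling_stress(E, t, R, nu=0.3):
    """Classical critical axial compressive stress of a thin isotropic
    cylinder: sigma_cr = E*t / (R*sqrt(3*(1 - nu**2))). The laminated,
    orthotropic cases treated in the paper generalize this isotropic limit
    through the nondimensional parameters described above."""
    return E * t / (R * math.sqrt(3.0 * (1.0 - nu ** 2)))

# Hypothetical aluminum-like shell: E = 70 GPa, t = 1 mm, R = 0.5 m
sigma = classical_axial_buckling_stress(E=70e9, t=1e-3, R=0.5)
print(round(sigma / 1e6, 1))  # critical stress in MPa, ≈ 84.7
```

Real shells buckle well below this classical value because of imperfection sensitivity, which is one reason design curves of the kind described above are valuable.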
Novel jet observables from machine learning
NASA Astrophysics Data System (ADS)
Datta, Kaustuv; Larkoski, Andrew J.
2018-03-01
Previous studies have demonstrated the utility and applicability of machine learning techniques to jet physics. In this paper, we construct new observables for the discrimination of jets from different originating particles exclusively from information identified by the machine. The approach we propose is to first organize information in the jet by resolved phase space and determine the effective N-body phase space at which discrimination power saturates. This then allows for the construction of a discrimination observable from the N-body phase space coordinates. A general form of this observable can be expressed with numerous parameters that are chosen so that the observable maximizes the signal vs. background likelihood. Here, we illustrate this technique applied to the discrimination of H → bb̄ decays from massive g → bb̄ splittings. We show that for a simple parametrization, we can construct an observable that has discrimination power comparable to, or better than, widely used observables motivated from theory considerations. For the case of jets on which modified mass-drop tagger grooming is applied, the observable that the machine learns is essentially the angle of the dominant gluon emission off of the bb̄ pair.
EnviroNET: An on-line environment data base for LDEF data
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1992-01-01
EnviroNET is an on-line, free-form database intended to provide a centralized depository for a wide range of technical information on environmentally induced interactions, of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on globally connected networks and is available twenty-four hours a day, every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, the USAF, other government facilities, industry, universities, and ESA. The models accept parameter input from the user and calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, the magnetic field, and the ionosphere. A user-friendly, informative interface is standard for all the models, with a pop-up help window giving information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to run 'what if' scenarios. A proposed plan for developing a repository of LDEF information for a user group concludes the presentation.
NASA Technical Reports Server (NTRS)
Mardirossian, H.; Beri, A. C.; Doll, C. E.
1990-01-01
The Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC) provides spacecraft trajectory determination for a wide variety of National Aeronautics and Space Administration (NASA)-supported satellite missions, using the Tracking Data Relay Satellite System (TDRSS) and Ground Spaceflight and Tracking Data Network (GSTDN). To take advantage of computerized decision making processes that can be used in spacecraft navigation, the Orbit Determination Automation System (ODAS) was designed, developed, and implemented as a prototype system to automate orbit determination (OD) and orbit quality assurance (QA) functions performed by orbit operations. Based on a machine-resident generic schedule and predetermined mission-dependent QA criteria, ODAS autonomously activates an interface with the existing trajectory determination system using a batch least-squares differential correction algorithm to perform the basic OD functions. The computational parameters determined during the OD are processed to make computerized decisions regarding QA, and a controlled recovery process is activated when the criteria are not satisfied. The complete cycle is autonomous and continuous. ODAS was extensively tested for performance under conditions resembling actual operational conditions and found to be effective and reliable for extended autonomous OD. Details of the system structure and function are discussed, and test results are presented.
NASA Technical Reports Server (NTRS)
Mardirossian, H.; Heuerman, K.; Beri, A.; Samii, M. V.; Doll, C. E.
1989-01-01
The Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC) provides spacecraft trajectory determination for a wide variety of National Aeronautics and Space Administration (NASA)-supported satellite missions, using the Tracking Data Relay Satellite System (TDRSS) and Ground Spaceflight and Tracking Data Network (GSTDN). To take advantage of computerized decision making processes that can be used in spacecraft navigation, the Orbit Determination Automation System (ODAS) was designed, developed, and implemented as a prototype system to automate orbit determination (OD) and orbit quality assurance (QA) functions performed by orbit operations. Based on a machine-resident generic schedule and predetermined mission-dependent QA criteria, ODAS autonomously activates an interface with the existing trajectory determination system using a batch least-squares differential correction algorithm to perform the basic OD functions. The computational parameters determined during the OD are processed to make computerized decisions regarding QA, and a controlled recovery process is activated when the criteria are not satisfied. The complete cycle is autonomous and continuous. ODAS was extensively tested for performance under conditions resembling actual operational conditions and found to be effective and reliable for extended autonomous OD. Details of the system structure and function are discussed, and test results are presented.
Lunar Swirls: Plasma Magnetic Field Interaction and Dust Transport
NASA Astrophysics Data System (ADS)
Dropmann, Michael; Laufer, Rene; Herdrich, Georg; Matthews, Lorin; Hyde, Truell
2013-10-01
In close collaboration between the Center for Astrophysics, Space Physics and Engineering Research (CASPER) at Baylor University, Texas, and the Institute of Space Systems (IRS) at the University of Stuttgart, Germany, two plasma facilities have been established using the Inductively heated Plasma Generator 6 (IPG6), based on proven IRS designs. A wide range of applications is currently under consideration for both test and research facilities. Basic investigations in the areas of plasma radiation and catalysis, simulation of certain parameters of fusion divertors, and space applications are planned. In this paper, the facility at Baylor University (IPG6-B) will be used for simulation of mini-magnetospheres on the Moon. The interaction of the solar wind with magnetic fields leads to the formation of electric fields, which can influence the incoming solar wind ion flux and affect dust transport processes on the lunar surface. Both effects may be partially responsible for the occurrence of lunar swirls. Interactions of the solar wind with such mini-magnetospheres will be simulated in the IPG6-B by observing the interaction between a plasma jet and a permanent magnet. The resulting data should lead to better models of dust transport processes and solar wind deflection on the Moon.
HST PanCET Program: A Cloudy Atmosphere for the Promising JWST Target WASP-101b
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wakeford, H. R.; Mandell, A.; Stevenson, K. B.
We present results from the first observations of the Hubble Space Telescope (HST) Panchromatic Comparative Exoplanet Treasury program for WASP-101b, a highly inflated hot Jupiter and one of the community targets proposed for the James Webb Space Telescope (JWST) Early Release Science (ERS) program. From a single HST Wide Field Camera 3 observation, we find that the near-infrared transmission spectrum of WASP-101b contains no significant H{sub 2}O absorption features and we rule out a clear atmosphere at 13 σ. Therefore, WASP-101b is not an optimum target for a JWST ERS program aimed at observing strong molecular transmission features. We compare WASP-101b to the well-studied and nearly identical hot Jupiter WASP-31b. These twin planets show similar temperature–pressure profiles and atmospheric features in the near-infrared. We suggest exoplanets in the same parameter space as WASP-101b and WASP-31b will also exhibit cloudy transmission spectral features. For future HST exoplanet studies, our analysis also suggests that a lower count limit needs to be exceeded per pixel on the detector in order to avoid unwanted instrumental systematics.
Estimability of geodetic parameters from space VLBI observables
NASA Technical Reports Server (NTRS)
Adam, Jozsef
1990-01-01
The feasibility of space very long baseline interferometry (VLBI) observables for geodesy and geodynamics is investigated. A brief review of space VLBI systems from the point of view of potential geodetic application is given. A selected notational convention is used to jointly treat the VLBI observables of different types of baselines within a combined ground/space VLBI network. The basic equations of the space VLBI observables appropriate for covariance analysis are derived and included. The corresponding equations for the ground-to-ground baseline VLBI observables are also given for comparison. The simplified expressions of the mathematical models for both space VLBI observables (time delay and delay rate) include the ground station coordinates, the satellite orbital elements, the Earth rotation parameters, the radio source coordinates, and clock parameters. The observation equations with these parameters were examined in order to determine which of them are separable or nonseparable. Singularity problems arising from the coordinate system definition and critical configurations are studied. Linear dependencies between partials are analytically derived. The mathematical models for ground-space baseline VLBI observables were tested with simulated data in a series of numerical experiments. Singularity due to datum defect is confirmed.
Global tectonics and space geodesy.
Gordon, R G; Stein, S
1992-04-17
Much of the success of plate tectonics can be attributed to the near rigidity of tectonic plates and the availability of data that describe the rates and directions of motion across narrow plate boundaries ≈1 to 60 kilometers wide. Nonetheless, many plate boundaries in both continental and oceanic lithosphere are not narrow but are hundreds to thousands of kilometers wide. Wide plate boundary zones cover ≈15 percent of Earth's surface area. Space geodesy, which includes very long baseline radio interferometry, satellite laser ranging, and the global positioning system, is providing the accurate long-distance measurements needed to estimate the present motion across and within wide plate boundary zones. Space geodetic data show that plate velocities averaged over years are remarkably similar to velocities averaged over millions of years.
Dynamics in the Parameter Space of a Neuron Model
NASA Astrophysics Data System (ADS)
Paulo, C. Rech
2012-06-01
Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh–Rose neuron model. Several different parameter planes are considered, and it is shown that, depending on the combination of parameters, a typical scenario can be preserved: for some choices of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.
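The parameter-plane construction described above can be sketched for any low-dimensional map. As a minimal stand-in for the four-dimensional Hindmarsh–Rose model (a substitution made purely to keep the example short), the sketch below scans a two-parameter plane of the Hénon map and assigns each point its largest Lyapunov exponent, estimated with the standard Benettin tangent-vector method:

```python
import numpy as np

def largest_lyapunov(a, b, n_iter=5000, n_skip=500):
    """Largest Lyapunov exponent of the Henon map
    x' = 1 - a*x**2 + y,  y' = b*x,
    estimated by the Benettin method: push a tangent vector through
    the Jacobian at every step and average the log of its growth."""
    x, y = 0.0, 0.0
    v = np.array([1.0, 0.0])                 # tangent vector
    total = 0.0
    for k in range(n_iter):
        J = np.array([[-2.0 * a * x, 1.0],   # Jacobian at the current point
                      [b, 0.0]])
        x, y = 1.0 - a * x * x + y, b * x
        if not np.isfinite(x) or abs(x) > 1e6:
            return np.inf                    # orbit escaped to infinity
        v = J @ v
        growth = np.linalg.norm(v)
        v /= growth
        if k >= n_skip:                      # discard the transient
            total += np.log(growth)
    return total / (n_iter - n_skip)

def lyapunov_plane(a_values, b_values):
    """Scan a 2D parameter plane, as in the abstract's diagrams."""
    return np.array([[largest_lyapunov(a, b) for a in a_values]
                     for b in b_values])
```

Positive exponents mark chaotic regions (the "comb teeth" of the abstract) and negative exponents periodic ones; for the Hindmarsh–Rose system the same loop would integrate the ODEs together with their variational equations instead of iterating a map.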
Kantowski-Sachs Einstein-æther perfect fluid models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latta, Joey; Leon, Genly; Paliathanasis, Andronikos, E-mail: lattaj@mathstat.dal.ca, E-mail: genly.leon@pucv.cl, E-mail: anpaliat@phys.uoa.gr
We investigate Kantowski-Sachs models in Einstein-æther theory with a perfect fluid source, using singularity analysis to prove the integrability of the field equations and dynamical system tools to study the evolution. We find an inflationary source at early times, and an inflationary sink at late times, for a wide region in the parameter space. The results by A.A. Coley, G. Leon, P. Sandin and J. Latta (JCAP 12 (2015) 010) are then re-obtained as particular cases. Additionally, we select other values for the non-GR parameters which are consistent with current constraints, getting a very rich phenomenology. In particular, we find solutions with infinite shear, zero curvature, and infinite matter energy density in comparison with the Hubble scalar. We also have stiff-like future attractors, anisotropic late-time attractors, or both, in some special cases. Such results are developed analytically, and then verified by numerics. Finally, the physical interpretation of the new critical points is discussed.
Survey Of CO{sub 2} Laser Ablation Propulsion With Polyoxymethylene Propellant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinko, John E.; Sasoh, Akihiro
Polyoxymethylene (POM) has been widely studied as a laser propulsion propellant paired to CO{sub 2} laser radiation. POM is a good test case for studying ablation properties of polymer materials and, within limits, for study of general trends in laser ablation-induced impulse. Despite many studies, there is no general understanding of POM ablation that takes into account the ambient pressure, spot area, fluence, and effects from confinement and combustion. This paper reviews and synthesizes CO{sub 2} laser ablation propulsion research using POM targets. Necessary directions for future study are indicated to address incomplete regions of the various parameter spaces. Literature data are compared in terms of propulsion parameters such as momentum coupling coefficient and specific impulse, within a range of fluences from about 1-500 J/cm{sup 2}, ambient pressures from about 10{sup -2}-10{sup 5} Pa, and laser spot areas from about 0.01-10 cm{sup 2}.
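The two figures of merit used for the literature comparison follow from simple definitions; the numbers in the usage note below are illustrative, not measured POM values:

```python
def coupling_coefficient(impulse, energy):
    """Momentum coupling coefficient C_m = delivered impulse / incident
    laser energy, in N*s/J (equivalently N/W)."""
    return impulse / energy

def specific_impulse(impulse, ablated_mass, g0=9.80665):
    """Specific impulse I_sp = impulse / (ablated mass * g0), in seconds."""
    return impulse / (ablated_mass * g0)
```

For instance, 100 µN·s of impulse delivered by a 1 J pulse that ablates 10 µg of propellant gives C_m = 100 µN/W and I_sp of roughly 1020 s; comparing such numbers across fluence and pressure regimes is what the survey does.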
Exo-Milankovitch Cycles. I. Orbits and Rotation States
NASA Astrophysics Data System (ADS)
Deitrick, Russell; Barnes, Rory; Quinn, Thomas R.; Armstrong, John; Charnay, Benjamin; Wilhelm, Caitlyn
2018-02-01
The obliquity of the Earth, which controls our seasons, varies by only ∼2.5° over ∼40,000 years, and its eccentricity varies by only ∼0.05 over 100,000 years. Nonetheless, these small variations influence Earth's ice ages. For exoplanets, however, variations can be significantly larger. Previous studies of the habitability of moonless Earth-like exoplanets have found that high obliquities, high eccentricities, and dynamical variations can extend the outer edge of the habitable zone by preventing runaway glaciation (snowball states). We expand upon these studies by exploring the orbital dynamics with a semianalytic model that allows us to map broad regions of parameter space. We find that, in general, the largest drivers of obliquity variations are secular spin–orbit resonances. We show how the obliquity varies in several test cases, including Kepler-62 f, across a wide range of orbital and spin parameters. These obliquity variations, alongside orbital variations, will have a dramatic impact on the climates of such planets.
Multiclustered chimeras in large semiconductor laser arrays with nonlocal interactions
NASA Astrophysics Data System (ADS)
Shena, J.; Hizanidis, J.; Hövel, P.; Tsironis, G. P.
2017-09-01
The dynamics of a large array of coupled semiconductor lasers is studied numerically for a nonlocal coupling scheme. Our focus is on chimera states, a self-organized spatiotemporal pattern of coexisting coherence and incoherence. In laser systems, such states have been previously found for global and nearest-neighbor coupling, mainly in small networks. The technological advantage of large arrays has motivated us to study a system of 200 nonlocally coupled lasers with respect to the emerging collective dynamics. Moreover, the nonlocal nature of the coupling allows us to obtain robust chimera states with multiple (in)coherent domains. The crucial parameters are the coupling strength, the coupling phase and the range of the nonlocal interaction. We find that multiclustered chimera states exist in a wide region of the parameter space and we provide quantitative characterization for the obtained spatiotemporal patterns. By proposing two different experimental setups for the realization of the nonlocal coupling scheme, we are confident that our results can be confirmed in the laboratory.
Performance of a parallel code for the Euler equations on hypercube computers
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.
1990-01-01
The performance of hypercube computers was evaluated on a computational fluid dynamics problem, and the parallel-environment issues that must be addressed were considered, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and the code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting yet physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
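The abstract does not reproduce the paper's execution-time model, but a generic model of this kind (with hypothetical cost coefficients, not the paper's fitted values) has the familiar compute-plus-communication form:

```python
import math

def predicted_time(n_cells, p, t_calc, t_startup, t_word, words_per_msg):
    """Sketch of an execution-time model for a data-parallel solver on a
    hypercube with p nodes: per-step compute scales as n/p, while
    exchanges along the hypercube dimensions add a log2(p) communication
    term with per-message startup and per-word transfer costs."""
    compute = t_calc * n_cells / p
    comm = (t_startup + t_word * words_per_msg) * math.log2(p)
    return compute + comm
```

Such a model makes the crossover explicit: once the startup term dominates, adding nodes increases rather than decreases the predicted run time.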
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiao; Wang, Yanhui, E-mail: wangyh@dlut.edu.cn; Wang, Dezhen, E-mail: wangdez@dlut.edu.cn
A two-dimensional fluid model is developed to study the filaments (or discharge channels) in an atmospheric-pressure discharge with one plate electrode covered by a dielectric layer. Under certain discharge parameters, one or more stable filaments with wide radii can be regularly arranged in the discharge space. Different from the short-lived, randomly distributed microdischarges, these stable and thick filaments can carry more current and have longer lifetimes. Because only one electrode is covered by a dielectric layer in the simulation, the formed discharge channel extends outwards near the dielectric layer and shrinks inwards near the naked electrode, agreeing with experimental results. In this paper, the evolution of the channel is studied; its behavior is like a streamer or an ionization wave, but the propagation distance is short. Discharge parameters such as the voltage amplitude, electrode width, and N{sub 2} impurity content can significantly influence the number of discharge channels, which is discussed in the paper.
NASA Astrophysics Data System (ADS)
Selouani, Sid-Ahmed; O'Shaughnessy, Douglas
2003-12-01
Limiting the decrease in performance due to acoustic environment changes remains a major challenge for continuous speech recognition (CSR) systems. We propose a novel approach which combines the Karhunen-Loève transform (KLT) in the mel-frequency domain with a genetic algorithm (GA) to enhance the data representing corrupted speech. The idea consists of projecting noisy speech parameters onto the space generated by the genetically optimized principal axes issued from the KLT. The enhanced parameters increase the recognition rate for highly interfering noise environments. The proposed hybrid technique, when included in the front-end of an HTK-based CSR system, outperforms the conventional recognition process in severe interfering car noise environments for a wide range of signal-to-noise ratios (SNRs) from 16 dB down to the lowest SNR tested (the lower bound is given only in the full text). We also show the effectiveness of the KLT-GA method in recognizing speech subject to telephone channel degradations.
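The KLT stage amounts to a principal-axis projection of the feature vectors; a minimal sketch of that step (the genetic-algorithm optimization of the axes is omitted here, and plain truncation to k axes is an assumption) is:

```python
import numpy as np

def klt_basis(features):
    """Eigenvectors of the feature covariance matrix, sorted by
    decreasing eigenvalue: the Karhunen-Loeve (PCA) basis.
    features: array of shape (n_frames, n_dims)."""
    cov = np.cov(features, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order]

def klt_project(features, basis, k):
    """Project onto the leading k principal axes and reconstruct;
    discarding the trailing axes is the noise-suppression step."""
    mean = features.mean(axis=0)
    coeffs = (features - mean) @ basis[:, :k]
    return coeffs @ basis[:, :k].T + mean
```

In the paper's setting the GA searches for the axis combination that maximizes recognition rate rather than simply truncating at a fixed k.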
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
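A minimal sketch of the bound-narrowing idea (not the authors' exact LOBARE algorithm; the sampling sizes and shrink schedule here are arbitrary assumptions) is:

```python
import numpy as np

def narrow_bounds(loss, lo, hi, n_samples=200, keep=20, n_iter=8, seed=0):
    """Iteratively shrink a parameter hypercube around the fittest samples:
    sample uniformly inside the current bounds, keep the best `keep`
    parameter sets (the 'parents'), and update the bounds to enclose them."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(lo, float)
    hi = np.asarray(hi, float)
    for _ in range(n_iter):
        pts = rng.uniform(lo, hi, size=(n_samples, lo.size))
        fitness = np.array([loss(p) for p in pts])
        best = pts[np.argsort(fitness)[:keep]]   # fittest parameter sets
        lo, hi = best.min(axis=0), best.max(axis=0)
    return lo, hi
```

Each iteration discards poorly performing regions of the sampling space, which is why far fewer model runs are needed than with a single dense sampling of the initial parameter space.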
Sun, Xiaodian; Jin, Li; Xiong, Momiao
2008-01-01
It is system dynamics that determines the function of cells, tissues and organisms. Developing mathematical models and estimating their parameters is an essential issue in studying the dynamic behaviors of biological systems, which include metabolic networks, genetic regulatory networks and signal transduction pathways, under perturbation by external stimuli. In general, biological dynamic systems are partially observed. Therefore, a natural way to model dynamic biological systems is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models in biological dynamic systems have been developed intensively in recent years, the estimation of both states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate the performance of the EKF for parameter estimation, we apply the EKF to a simulation dataset and two real datasets: JAK-STAT signal transduction pathway and Ras/Raf/MEK/ERK signal transduction pathway datasets. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks. PMID:19018286
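A minimal sketch of joint state/parameter estimation with an EKF, on a toy logistic-growth system rather than a signaling pathway (the noise levels and rate values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def ekf_joint(ys, dt=0.1, r_guess=0.1):
    """EKF over the augmented state z = [x, r] for the nonlinear system
    x_{k+1} = x_k + r*x_k*(1 - x_k)*dt, observing y_k = x_k + noise.
    Treating the unknown rate r as a random-walk state lets the filter
    estimate states and parameters simultaneously."""
    z = np.array([ys[0], r_guess])
    P = np.diag([0.01, 1.0])          # initial uncertainty (r poorly known)
    Q = np.diag([1e-7, 1e-7])         # process noise
    R = 1e-4                          # observation noise variance
    H = np.array([[1.0, 0.0]])        # we observe x only
    for y in ys[1:]:
        x, r = z
        F = np.array([[1 + r * (1 - 2 * x) * dt, x * (1 - x) * dt],
                      [0.0, 1.0]])    # Jacobian of the transition
        z = np.array([x + r * x * (1 - x) * dt, r])
        P = F @ P @ F.T + Q
        S = (H @ P @ H.T)[0, 0] + R   # innovation variance
        K = (P @ H.T) / S             # Kalman gain, shape (2, 1)
        z = z + K[:, 0] * (y - z[0])
        P = (np.eye(2) - K @ H) @ P
    return z                          # [x_estimate, r_estimate]
```

For biochemical networks the same pattern applies with the ODE right-hand side and its Jacobian in place of the logistic map.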
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
NASA Astrophysics Data System (ADS)
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper aims at the estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations, however, these parameters may change with the observation conditions and with the difficulty of predicting a person's position. In this paper we therefore formulate an adaptive parameter estimation using a general state space model. We first explain how to formulate human tracking in a general state space model and describe its components. Then, following previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. Results show that the sequential parameter estimation succeeded and that the estimates were consistent with observation conditions such as occlusions.
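The observation model described above can be written down directly; the exponential likelihood shape and the value of its single parameter are hedged assumptions here (the paper estimates that parameter sequentially rather than fixing it):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms (1 = identical
    appearance, 0 = non-overlapping support); inputs are normalized first."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    return float(np.sum(np.sqrt((p / p.sum()) * (q / q.sum()))))

def observation_likelihood(candidate_hist, target_hist, lam=20.0):
    """Person-recognition likelihood as a function of the Bhattacharyya
    coefficient, L proportional to exp(-lam * (1 - BC)); lam plays the
    role of the one unknown parameter a sequential filter would adapt."""
    return float(np.exp(-lam * (1.0 - bhattacharyya(candidate_hist, target_hist))))
```

A candidate region whose color histogram matches the target gets likelihood near 1, while mismatched regions are suppressed exponentially fast, with lam controlling how sharply.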
Posner, A; Hesse, M; St Cyr, O C
2014-04-01
Space weather forecasting critically depends upon availability of timely and reliable observational data. It is therefore particularly important to understand how existing and newly planned observational assets perform during periods of severe space weather. Extreme space weather creates challenging conditions under which instrumentation and spacecraft may be impeded or in which parameters reach values that are outside the nominal observational range. This paper analyzes existing and upcoming observational capabilities for forecasting, and discusses how the findings may impact space weather research and its transition to operations. A single limitation to the assessment is lack of information provided to us on radiation monitor performance, which caused us not to fully assess (i.e., not assess short term) radiation storm forecasting. The assessment finds that at least two widely spaced coronagraphs including L4 would provide reliability for Earth-bound CMEs. Furthermore, all magnetic field measurements assessed fully meet requirements. However, with current or even with near term new assets in place, in the worst-case scenario there could be a near-complete lack of key near-real-time solar wind plasma data of severe disturbances heading toward and impacting Earth's magnetosphere. Models that attempt to simulate the effects of these disturbances in near real time or with archival data require solar wind plasma observations as input. Moreover, the study finds that near-future observational assets will be less capable of advancing the understanding of extreme geomagnetic disturbances at Earth, which might make the resulting space weather models unsuitable for transition to operations. Key points: the manuscript assesses current and near-future space weather assets; current assets are unreliable for forecasting of severe geomagnetic storms; near-future assets will not improve the situation.
Carrillo, José Antonio; Colombi, Annachiara; Scianna, Marco
2018-05-14
The description of the cell spatial pattern and characteristic distances is fundamental in a wide range of physio-pathological biological phenomena, from morphogenesis to cancer growth. Discrete particle models are widely used in this field, since they focus on the cell level of abstraction and are able to preserve the identity of single individuals, reproducing their behavior. In particular, a fundamental role in determining the usefulness and realism of a particle mathematical approach is played by the choice of the intercellular pairwise interaction kernel and by the estimate of its parameters. The aim of this paper is to demonstrate how the concept of H-stability, deriving from statistical mechanics, can have important implications in this respect. For any given interaction kernel, it in fact allows one to predict a priori the regions of the free parameter space that result in stable configurations of the system, characterized by a finite and strictly positive minimal interparticle distance, which is fundamental when dealing with biological phenomena. The proposed analytical arguments are indeed able to restrict the range of possible variations of selected model coefficients, whose exact estimate however requires further investigation (e.g., fitting with empirical data), as illustrated in this paper by a series of representative simulations dealing with cell colony reorganization, sorting phenomena and zebrafish embryonic development. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
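For one widely used attraction-repulsion kernel, the Morse potential, the H-stability region of parameter space has a closed form. The criterion below follows the statistical-mechanics literature on swarming models and should be read as a sketch for that specific kernel, not as the paper's general result:

```python
def is_h_stable_morse(C, l, d=2):
    """H-stability check for the rescaled Morse kernel
    u(r) = -exp(-r/l_A) + C*exp(-r/l_R), with C = C_R/C_A and
    l = l_R/l_A < 1 (short-range repulsion). In d spatial dimensions
    the configuration energy is bounded below by -B*N, i.e. the system
    is H-stable and keeps a strictly positive minimal interparticle
    distance, precisely when C * l**d > 1."""
    if not 0 < l < 1:
        raise ValueError("criterion stated for short-range repulsion, 0 < l < 1")
    return C * l ** d > 1
```

A parameter fit for a cell-colony model using this kernel could then be restricted a priori to the region where the check returns True, exactly in the spirit of the abstract.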
Hamid, Ahmed M.; Ibrahim, Yehia M.; Garimella, Venkata BS; ...
2015-10-28
We report on the development and characterization of a new traveling wave-based Structure for Lossless Ion Manipulations (TW-SLIM) for ion mobility separations (IMS). The TW-SLIM module uses parallel arrays of rf electrodes on two closely spaced surfaces for ion confinement, where the rf electrodes are separated by arrays of short electrodes; using these, TWs can be created to drive ion motion. In this initial work, TWs are created by the dynamic application of dc potentials. The capabilities of the TW-SLIM module for efficient ion confinement, lossless ion transport, and ion mobility separations at different rf and TW parameters are reported. The TW-SLIM module is shown to transmit a wide mass range of ions (m/z 200–2500) utilizing a confining rf waveform (~1 MHz and ~300 V p-p) and low TW amplitudes (<20 V). Additionally, the short TW-SLIM module achieved resolutions comparable to existing commercially available low-pressure IMS platforms and an ion mobility peak capacity of ~32 for TW speeds of <210 m/s. TW-SLIM performance was characterized over a wide range of rf and TW parameters and demonstrated robust performance. In conclusion, the combined attributes of the flexible design and low voltage requirements of the TW-SLIM module provide a basis for devices capable of much higher resolution and more complex ion manipulations.
Experimental Study of Buoyant-Thermocapillary Convection in a Rectangular Cavity
NASA Technical Reports Server (NTRS)
Braunsfurth, Manfred G.; Homsy, George M.
1996-01-01
The problem of buoyant-thermocapillary convection in cavities is governed by a relatively large number of nondimensional parameters, and there is consequently a large number of different types of flow that can be found in this system. Previous results give disjoint glimpses of a wide variety of qualitatively and quantitatively different behaviors in widely different parts of parameter space. In this study, we report experiments on the primary and secondary instabilities in geometries with aspect ratios in the range from 1 to 8 in both the directions along and perpendicular to the applied temperature gradient. We thus complement previous work, which mostly involved either fluid layers of large extent in both directions or investigations of strictly two-dimensional disturbances. We observe the primary transition from an essentially two-dimensional flow to steady three-dimensional longitudinal rolls. The critical Marangoni number is found to depend on the aspect ratios of the system, varying from 4.6 x 10(exp 5) at aspect ratio 2.0 to 5.5 x 10(exp 4) at aspect ratio 3.5. Further, we have investigated the stability of the three-dimensional flow at larger Marangoni numbers and find a novel oscillatory flow at critical Marangoni numbers of the order of 6 x 10(exp 5). We suggest possible mechanisms which give rise to the oscillation, and find that it is expected to be a relaxation-type oscillation.
2010-01-01
Background Signal transduction networks represent the information processing systems that dictate which dynamical regimes of biochemical activity are accessible to a cell under certain circumstances. One of the major concerns in molecular systems biology is the elucidation of the robustness properties and information processing capabilities of signal transduction networks. Achieving this goal requires the establishment of causal relations between the design principles of biochemical reaction systems and their emergent dynamical behaviors. Methods In this study, efforts were focused on the construction of a relatively well-informed, deterministic, non-linear dynamic model, accounting for reaction mechanisms grounded on standard mass action and Hill saturation kinetics, of the canonical reaction topology underlying Toll-like receptor 4 (TLR4)-mediated signaling events. This signaling mechanism has been shown to be deployed in macrophages during a relatively short time window in response to lipopolysaccharide (LPS) stimulation, which leads to a rapidly mounted innate immune response. An extensive computational exploration of the biochemical reaction space inhabited by this signal transduction network was performed via local and global perturbation strategies. Importantly, a broad spectrum of biologically plausible dynamical regimes accessible to the network in widely scattered regions of parameter space was reconstructed computationally. Additionally, experimentally reported transcriptional readouts of target pro-inflammatory genes, which are actively modulated by the network in response to LPS stimulation, were also simulated. This was done with the main goal of carrying out an unbiased statistical assessment of the intrinsic robustness properties of this canonical reaction topology.
Results Our simulation results provide convincing numerical evidence supporting the idea that a canonical reaction mechanism of the TLR4 signaling network is capable of performing information processing in a robust manner, a functional property that is independent of the signaling task required to be executed. Nevertheless, it was found that the robust performance of the network is not solely determined by its design principle (topology), but may depend heavily on the network's current position in biochemical reaction space. Ultimately, our results enabled us to identify key rate-limiting steps which most effectively control the performance of the system under diverse dynamical regimes. Conclusions Overall, our in silico study suggests that biologically relevant and non-intuitive aspects of the general behavior of a complex biomolecular network can be elucidated only when taking into account a wide spectrum of dynamical regimes attainable by the system. Most importantly, this strategy provides the means for a suitable assessment of the inherent variational constraints imposed by the structure of the system when systematically probing its parameter space. PMID:20230643
Strong washout approximation to resonant leptogenesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garbrecht, Björn; Gautier, Florian; Klaric, Juraj, E-mail: garbrecht@tum.de, E-mail: florian.gautier@tum.de, E-mail: juraj.klaric@tum.de
We show that the effective decay asymmetry for resonant leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y_1|² + |Y_2|²), Δ = 4(M_1 − M_2)/(M_1 + M_2), φ = arg(Y_2/Y_1), and M_1, M_2 and Y_1, Y_2 are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y_1,2|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
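A short Python sketch of the late-time asymmetry formula quoted above, ε = X sin(2φ)/(X² + sin²φ); the example mass and coupling values are arbitrary illustrations, not a phenomenological benchmark.

```python
import math

def decay_asymmetry(M1, M2, Y1, Y2):
    """Late-time effective decay asymmetry eps = X sin(2 phi) / (X^2 + sin^2 phi),
    with X = 8 pi Delta / (|Y1|^2 + |Y2|^2), Delta = 4 (M1 - M2) / (M1 + M2),
    and phi = arg(Y2 / Y1).  Y1, Y2 may be complex Yukawa couplings."""
    delta = 4.0 * (M1 - M2) / (M1 + M2)
    X = 8.0 * math.pi * delta / (abs(Y1) ** 2 + abs(Y2) ** 2)
    ratio = Y2 / Y1
    phi = math.atan2(ratio.imag, ratio.real)   # arg(Y2 / Y1)
    return X * math.sin(2.0 * phi) / (X ** 2 + math.sin(phi) ** 2)

# Illustrative (assumed) values: nearly degenerate masses, complex couplings.
eps = decay_asymmetry(1.00002, 0.99998, 1e-4 + 0j, 8e-5 + 6e-5j)
```

Note that |ε| is maximized when X is comparable to |sin φ|, which is the resonant-enhancement condition in this parametrization.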
Sequential Bayesian geoacoustic inversion for mobile and compact source-receiver configuration.
Carrière, Olivier; Hermand, Jean-Pierre
2012-04-01
Geoacoustic characterization of wide areas through inversion requires easily deployable configurations, including free-drifting platforms, underwater gliders and autonomous vehicles, typically performing repeated transmissions during their course. In this paper, the inverse problem is formulated as sequential Bayesian filtering to take advantage of repeated transmission measurements. Nonlinear Kalman filters implement a random-walk model for geometry and environment, and an acoustic propagation code in the measurement model. Data from the MREA/BP07 sea trials are tested, consisting of multitone and frequency-modulated signals (bands: 0.25-0.8 and 0.8-1.6 kHz) received on a shallow vertical array of four hydrophones spaced 5 m apart, drifting over 0.7-1.6 km in range. Space- and time-coherent processing are applied to the respective signal types. Kalman filter outputs are compared to a sequence of global optimizations performed independently on each received signal. For both signal types, the sequential approach is not only more accurate but also more efficient. Owing to frequency diversity, the processing of modulated signals produces a more stable tracking. Although an extended Kalman filter provides comparable estimates of the tracked parameters, the ensemble Kalman filter is necessary to properly assess uncertainty. In spite of mild range dependence and a simplified bottom model, all tracked geoacoustic parameters are consistent with high-resolution seismic profiling, core-log P-wave velocity, and previous inversion results with fixed geometries.
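The random-walk state model with repeated measurements can be illustrated with a minimal scalar Kalman filter. This is only a sketch of the filtering idea: the paper's filters are nonlinear (extended/ensemble) with an acoustic propagation code as the measurement model, whereas here the measurement is simplified to be linear and the parameter values are invented.

```python
import numpy as np

# Minimal random-walk Kalman filter tracking one slowly drifting parameter
# from repeated noisy measurements.  q is the random-walk process variance,
# r the measurement variance; both are illustrative choices.

def kalman_track(measurements, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    x, p, estimates = x0, p0, []
    for z in measurements:
        p = p + q                 # predict: random-walk state model
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
true_value = 1500.0               # e.g., a sound-speed-like quantity
z = true_value + rng.normal(0.0, 5.0, size=200)
est = kalman_track(z, q=1e-4, r=25.0, x0=1490.0, p0=100.0)
```

With a small process variance the filter behaves like a recursive average, so the estimate converges toward the true value as transmissions accumulate.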
NASA Technical Reports Server (NTRS)
Srinivasan, Supramaniam; Mukerjee, Sanjeev; Parthasarathy, A.; CesarFerreira, A.; Wakizoe, Masanobu; Rho, Yong Woo; Kim, Junbom; Mosdale, Renaut A.; Paetzold, Ronald F.; Lee, James
1994-01-01
The proton exchange membrane fuel cell (PEMFC) is one of the most promising electrochemical power sources for space and electric vehicle applications. The wide spectrum of R&D activities on PEMFCs carried out in our Center from 1988 to date is as follows: (1) Electrode Kinetics and Electrocatalysis of Oxygen Reduction; (2) Optimization of Structures of Electrodes and of Membrane and Electrode Assemblies; (3) Selection and Evaluation of Advanced Proton Conducting Membranes and of Operating Conditions to Attain High Energy Efficiency; (4) Modeling Analysis of Fuel Cell Performance and of Thermal and Water Management; and (5) Engineering Design and Development of Multicell Stacks. The accomplishments on these tasks may be summarized as follows: (1) A microelectrode technique was developed to determine the electrode kinetic parameters for the fuel cell reactions and mass transport parameters for the H2 and O2 reactants in the proton conducting membrane. (2) High energy efficiencies and high power densities were demonstrated in PEMFCs with low platinum loading electrodes (0.4 mg/cm^2 or less), advanced membranes, and optimized structures of membrane and electrode assemblies, as well as operating conditions. (3) The modeling analyses revealed methods to minimize mass transport limitations, particularly with air as the cathodic reactant, and for efficient thermal and water management. (4) Work is in progress to develop multi-kilowatt stacks with electrodes containing low platinum loadings.
Li, Hongyin; Bai, Yanzheng; Hu, Ming; Luo, Yingxin; Zhou, Zebing
2016-12-23
The state-of-the-art accelerometer technology has been widely applied in space missions. The performance of the next generation accelerometer in future geodesic satellites is pushed to 8 × 10^-13 m/s²/Hz^(1/2), which is close to the hardware fundamental limit. According to the instrument noise budget, the geodesic test mass must be kept in the center of the accelerometer within the bounds of 56 pm/Hz^(1/2) by the feedback controller. The unprecedented control requirements and the necessity of integrating calibration functions call for a new type of control scheme with more flexibility and robustness. A novel digital controller design for the next generation of electrostatic accelerometers, based on disturbance observation and rejection with the well-studied Embedded Model Control (EMC) methodology, is presented. The parameters are optimized automatically using a non-smooth optimization toolbox with a weighted H-infinity norm as the target. The precise frequency performance requirement of the accelerometer is well met during the batch auto-tuning, and a series of controllers for multiple working modes is generated. Simulation results show that the novel controller obtains not only better disturbance rejection performance than traditional Proportional Integral Derivative (PID) controllers, but also new instrument functions, including an easier tuning procedure, separation of measurement and control bandwidth, and smooth control parameter switching.
Impact of the IMF conditions on the high latitude geomagnetic field fluctuations at Swarm altitude
NASA Astrophysics Data System (ADS)
De Michelis, Paola; Consolini, Giuseppe; Tozzi, Roberta
2016-04-01
Several space-plasma media are characterized by turbulent fluctuations covering a wide range of temporal and spatial scales, from the MHD domain down to the kinetic region, which substantially affect the overall dynamics of these media. In the framework of ionosphere-magnetosphere coupling, magnetic field and plasma disturbances are driven by the different current systems responsible for the coupling. These disturbances manifest as plasma parameter inhomogeneities and magnetic field fluctuations, which are capable of affecting ionospheric conditions. The present work focuses on the analysis of the statistical features of high-latitude magnetic field fluctuations at Swarm altitude. The multi-satellite Swarm mission is equipped with several instruments which observe electric and magnetic fields as well as ionospheric parameters of the near-Earth space environment. Using these data, we investigate the scaling properties of the magnetic field fluctuations at ionospheric altitude and high latitudes in the Northern and Southern hemispheres according to different interplanetary magnetic field conditions and Earth's seasons. The aim of this work is to characterize the different features of ionospheric turbulence in order to better understand the nature and possible drivers of magnetic field variability, and to discuss the results in the framework of the Sun-Earth relationship and ionospheric polar convection. This work is supported by the Italian National Program for Antarctic Research (PNRA) Research Project 2013/AC3.08.
Automated system for generation of soil moisture products for agricultural drought assessment
NASA Astrophysics Data System (ADS)
Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Drought is a frequently occurring disaster affecting the lives of millions of people across the world every year. Several parameters, indices and models are being used globally to forecast / give early warning of drought and to monitor its prevalence, persistence and severity. Since drought is a complex phenomenon, a large number of parameters/indices need to be evaluated to sufficiently address the problem. It is a challenge to generate input parameters from different sources like space based data, ground data and collateral data in short intervals of time, where there may be limitations in terms of processing power, availability of domain expertise, and specialized models & tools. In this study, an effort has been made to automate the derivation of one of the important parameters in drought studies, viz. soil moisture. The soil water balance bucket model, widely popular for its sensitivity to soil conditions and rainfall parameters, is used to arrive at soil moisture products. This model has been encoded into a "Fish-Bone" architecture using COM technologies and open source libraries for the best possible automation, fulfilling the need for a standard procedure of preparing input parameters and processing routines. The main aim of the system is to provide an operational environment for generation of soil moisture products, facilitating users to concentrate on further enhancements and implementation of these parameters in related areas of research without re-discovering the established models. The architecture relies mainly on available open source libraries for GIS and raster IO operations for different file formats, to ensure that the products can be widely distributed without the burden of any commercial dependencies. Further, the system is automated to the extent of user-free operation if required, with inbuilt chain processing for every-day generation of products at specified intervals.
The operational software has inbuilt capabilities to automatically download requisite input parameters like rainfall and Potential Evapotranspiration (PET) from the respective servers. It can import file formats like .grd, .hdf, .img, generic binary etc., perform geometric correction, and re-project the files to the native projection system. The software takes into account weather, crop and soil parameters to run the designed soil water balance model. It also has additional features like time compositing of outputs to generate weekly and fortnightly profiles for further analysis. A tool to generate "Area Favorable for Crop Sowing" from the daily soil moisture, with a highly customizable parameter interface, has been provided. A whole-of-India analysis now takes a mere 20 seconds to generate soil moisture products, compared with roughly one hour per day using commercial software.
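The bucket-type soil water balance described above can be sketched in a few lines. This is a generic single-layer toy (daily time step, actual evapotranspiration scaled by relative wetness); the capacity and initial-moisture values are illustrative, not the operational model's parameters.

```python
# Minimal single-layer "bucket" soil water balance (daily time step).
# capacity and sm0 are illustrative values in mm, not operational settings.

def bucket_model(rain, pet, capacity=100.0, sm0=50.0):
    """rain, pet: daily rainfall and potential evapotranspiration (mm).
    Returns the daily soil moisture series (mm); water above capacity
    is treated as runoff, and soil moisture never drops below zero."""
    sm, out = sm0, []
    for p, e in zip(rain, pet):
        aet = e * (sm / capacity)          # actual ET scales with wetness
        sm = sm + p - aet
        sm = min(max(sm, 0.0), capacity)   # dryness / runoff bounds
        out.append(sm)
    return out

series = bucket_model(rain=[0, 20, 0, 5, 0], pet=[4, 4, 4, 4, 4])
```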
A Validation Study of Merging and Spacing Techniques in a NAS-Wide Simulation
NASA Technical Reports Server (NTRS)
Glaab, Patricia C.
2011-01-01
In November 2010, Intelligent Automation, Inc. (IAI) delivered an M&S software tool that allows system-level studies of the complex terminal airspace with the ACES simulation. The software was evaluated against current-day arrivals in the Atlanta TRACON using Atlanta's Hartsfield-Jackson International Airport (KATL) arrival schedules. Results of this validation effort are presented describing data sets, traffic flow assumptions and techniques, and arrival rate comparisons between reported landings at Atlanta versus simulated arrivals using the same traffic sets in ACES equipped with M&S. Initial results showed the simulated system capacity to be significantly below the arrival capacity seen at KATL. Data were gathered for Atlanta using commercial airport and flight tracking websites (like FlightAware.com) and analyzed to ensure compatible techniques were used for result reporting and comparison. TFM operators for Atlanta were consulted for tuning final simulation parameters and for guidance in flow management techniques during high-volume operations. Using these modified parameters and incorporating TFM guidance for efficiencies in flowing aircraft, the simulation matched the arrival capacity of KATL. Following this validation effort, a sensitivity study was conducted to measure the impact of variations in system parameters on the Atlanta airport arrival capacity.
Koski, Antti; Tossavainen, Timo; Juhola, Martti
2004-01-01
Electrocardiogram (ECG) signals are the most prominent biomedical signal type used in clinical medicine. Their compression is important and widely researched in the medical informatics community. In the previous literature, compression efficacy has been investigated only in terms of how much known or newly developed methods reduced the storage required by the compressed forms of the original ECG signals. Sometimes statistical signal evaluations based on, for example, root mean square error were studied. In previous research we developed a refined method for signal compression and tested it jointly with several known techniques on other biomedical signals. Our method of so-called successive approximation quantization, used with wavelets, was one of the most successful in those tests. In this paper, we studied to what extent these lossy compression methods alter the values of medical parameters (medical information) computed from the signals. Since the methods are lossy, some information is lost when a high enough compression ratio is reached. We found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space while the values of their medical parameters changed by less than 5%, which indicates reliable results.
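The trade-off between compression ratio and signal distortion can be illustrated with a crude wavelet-style coder: keep only the largest one-level Haar coefficients, then measure distortion with the percent RMS difference (PRD) commonly used in ECG compression studies. This is a toy sketch, not the paper's successive approximation quantization scheme, and the test signal is synthetic.

```python
import numpy as np

# Toy lossy coder: zero all but the largest one-level Haar coefficients,
# then measure distortion with the percent RMS difference (PRD).

def haar_compress(x, keep=0.25):
    """Keep the largest `keep` fraction of Haar coefficients (x length even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coefficients
    coeffs = np.concatenate([a, d])
    k = int(keep * coeffs.size)
    small = np.argsort(np.abs(coeffs))[:-k] # indices of the smallest coeffs
    coeffs[small] = 0.0                     # discard them
    a2, d2 = coeffs[: a.size], coeffs[a.size :]
    y = np.empty_like(x)                    # inverse Haar transform
    y[0::2] = (a2 + d2) / np.sqrt(2)
    y[1::2] = (a2 - d2) / np.sqrt(2)
    return y

def prd(x, y):
    """Percent root-mean-square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

t = np.linspace(0.0, 1.0, 400, endpoint=False)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 25 * t)
rec = haar_compress(ecg_like, keep=0.25)
```

A real coder would cascade more decomposition levels and entropy-code the retained coefficients; the point here is only that distortion metrics like PRD quantify what the compression ratio alone hides.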
Interactive design optimization of magnetorheological-brake actuators using the Taguchi method
NASA Astrophysics Data System (ADS)
Erol, Ozan; Gurocak, Hakan
2011-10-01
This research explored an optimization method that automates the process of designing a magnetorheological (MR) brake while still keeping the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time-consuming due to the many parameters involved. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and choose to investigate only their interactions with the design output. The new method was applied to re-designing MR-brakes. It reduced the design time from a week or two down to a few minutes. Also, usability experiments indicated significantly better brake designs by novice users.
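The core Taguchi idea of ranking design parameters by their main effects can be sketched with a small orthogonal array. The array and response values below are invented for illustration and have nothing to do with an actual MR-brake design.

```python
import numpy as np

# Taguchi-style main-effects ranking on an L4(2^3) orthogonal array:
# 3 two-level factors in 4 runs.  Responses are illustrative only.

L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
response = np.array([10.0, 12.0, 20.0, 26.0])   # e.g., braking torque

def main_effects(array, y):
    """Mean response at level 1 minus mean response at level 0, per factor."""
    effects = {}
    for j in range(array.shape[1]):
        lo = y[array[:, j] == 0].mean()
        hi = y[array[:, j] == 1].mean()
        effects[j] = hi - lo
    return effects

effects = main_effects(L4, response)
dominant = max(effects, key=lambda j: abs(effects[j]))   # factor to study further
```

Because the array is orthogonal, each factor's effect is averaged over the levels of the others, which is what lets a handful of runs expose the dominant parameters and shrink the search space.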
Drifting oscillations in axion monodromy
Flauger, Raphael; McAllister, Liam; Silverstein, Eva; ...
2017-10-31
In this paper, we study the pattern of oscillations in the primordial power spectrum in axion monodromy inflation, accounting for drifts in the oscillation period that can be important for comparing to cosmological data. In these models the potential energy has a monomial form over a super-Planckian field range, with superimposed modulations whose size is model-dependent. The amplitude and frequency of the modulations are set by the expectation values of moduli fields. We show that during the course of inflation, the diminishing energy density can induce slow adjustments of the moduli, changing the modulations. We provide templates capturing the effects of drifting moduli, as well as drifts arising in effective field theory models based on softly broken discrete shift symmetries, and we estimate the precision required to detect a drifting period. A non-drifting template suffices over a wide range of parameters, but for the highest frequencies of interest, or for sufficiently strong drift, it is necessary to include parameters characterizing the change in frequency over the e-folds visible in the CMB. Finally, we use these templates to perform a preliminary search for drifting oscillations in a part of the parameter space in the Planck nominal mission data.
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
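The contrast between coarse validation (percent error of a total) and a fine-grained statistical comparison of traces can be sketched with a two-sample Kolmogorov-Smirnov distance, computed here directly from empirical CDFs. This is a generic illustration of the idea, not the paper's toolset or metrics; the latency data are synthetic.

```python
import numpy as np

# Coarse validation: percent error of totals.  Fine-grained validation:
# a distributional distance between per-event traces (two-sample KS).

def percent_error(t_sim, t_real):
    return 100.0 * abs(t_sim - t_real) / t_real

def ks_distance(a, b):
    """Maximum difference between the two empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(1)
real = rng.exponential(1.0, 1000)            # measured per-event times (synthetic)
sim = 0.5 * rng.exponential(1.0, 1000) + 0.5 # same mean, different shape

# Totals can roughly agree while the per-event distributions differ,
# which is exactly what the coarse percent-error check cannot see.
coarse = percent_error(sim.sum(), real.sum())
fine = ks_distance(sim, real)
```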
Large-Scale Alfvenic Impulses on the Sun: How They Are Generated and What We Learn From Them
NASA Technical Reports Server (NTRS)
Thompson, Barbara
2004-01-01
The Sun's atmosphere hosts a wide variety of magnetosonic disturbances. These wave modes are detected, almost exclusively, by examining images of the Sun's magnetic atmosphere and looking for propagating distortions. Although none of the Sun's plasma parameters are measured directly, we derive a great deal of information from these observations. In fact, by modeling these propagating disturbances, we may be able to derive the most accurate estimates of plasma parameters. From observations of the absorption, refraction, reflection, and coupling of numerous wave modes, we advance our knowledge of the Sun's magnetic field, temperature, density, and current. The Sun's continuous oscillation, coronal mass ejections, flares, and other dynamic phenomena can produce wave disturbances which are observable from near-Earth space. Several of these disturbances have been traced from the inner corona out into the heliosphere. From the generation of these disturbances, we are able to learn about the phenomena which create them as well as the media through which they are propagating. The presentation will include a discussion of the generation of Alfvenic disturbances on the Sun, the ways we observe these disturbances, and how recent advances in modeling and analysis have brought us closer to determining solar in situ parameters.
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such losses: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
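The functional gradient update with a windowed robust loss can be sketched as follows. This is a minimal illustration, not the paper's estimator: the Welsch-type loss (with derivative l'(r) = r·exp(−r²/2σ²)) stands in for one instance of the windowed loss family, and the Gaussian kernel, step size, and data are invented.

```python
import numpy as np

# Kernel gradient descent with a robust Welsch-type loss,
# l(r) = sigma^2 (1 - exp(-r^2 / (2 sigma^2))), whose derivative
# l'(r) = r exp(-r^2 / (2 sigma^2)) downweights large residuals.
# Kernel width, step size, and stopping time are illustrative choices.

def gaussian_kernel(X1, X2, width=0.5):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def kernel_gd(X, y, sigma=1.0, eta=0.5, steps=200):
    K = gaussian_kernel(X, X)
    alpha = np.zeros_like(y)          # f = sum_i alpha_i k(x_i, .)
    for _ in range(steps):            # early stopping = fixed step count
        r = K @ alpha - y             # residuals f(x_i) - y_i
        g = r * np.exp(-r ** 2 / (2.0 * sigma ** 2))  # robust l'(r)
        alpha -= eta * g / len(y)     # functional gradient step
    return alpha, K

rng = np.random.default_rng(2)
X = np.linspace(-1.0, 1.0, 40)
y = np.sin(np.pi * X) + rng.normal(0.0, 0.05, X.size)
y[5] += 5.0                           # one gross outlier
alpha, K = kernel_gd(X, y)
fit = K @ alpha
```

Because l'(r) vanishes for large residuals, the outlier contributes almost nothing to the updates and the fit tracks the underlying sine rather than the corrupted point.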
NASA Astrophysics Data System (ADS)
Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.
2017-04-01
We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which is not observed for the original sandpile. For this critical range of probability values, the model statistics compare remarkably well with long-period empirical data from earthquakes in different seismogenic regions. The proposed model has key advantages, foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes while introducing just a single parameter into the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
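The targeted-triggering rule can be sketched on a one-dimensional toy sandpile: with probability p the next grain goes to the most susceptible (fullest) site, otherwise to a random site. This is only an illustration of the rule class described above (the lattice dimension, size, and threshold are simplifications, not the authors' model).

```python
import random

# 1-D toy sandpile with a single "targeted triggering" probability p.
# zc is the toppling threshold; a toppling site sheds one grain to each
# neighbor, and grains fall off at the open boundaries.

def sandpile(n=20, p=0.005, grains=5000, zc=2, seed=42):
    rng = random.Random(seed)
    z = [0] * n
    sizes = []                         # avalanche size per dropped grain
    for _ in range(grains):
        if rng.random() < p:
            i = max(range(n), key=z.__getitem__)   # most susceptible site
        else:
            i = rng.randrange(n)
        z[i] += 1
        size, stack = 0, ([i] if z[i] >= zc else [])
        while stack:                   # relaxation (avalanche)
            j = stack.pop()
            if z[j] < zc:
                continue
            z[j] -= zc                 # topple
            size += 1
            if z[j] >= zc:
                stack.append(j)        # may need to topple again
            for k in (j - 1, j + 1):
                if 0 <= k < n:
                    z[k] += 1
                    if z[k] >= zc:
                        stack.append(k)
        sizes.append(size)
    return sizes, z

sizes, heights = sandpile()
```

Collecting `sizes` over many grains and binning them is what yields the avalanche (energy) distribution whose power-law tail is compared against the GR law.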
Mapping the Pressure-radius Relationship of Exoplanets
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Fossati, Luca; Kubyshkina, Darya
2017-10-01
The radius of a planet is one of the most physically meaningful and readily accessible parameters of extra-solar planets. This parameter is extensively used in the literature to compare planets or to study trends in the known population of exoplanets. However, in an atmosphere, the concept of a planetary radius is inherently fuzzy. The atmospheric pressures probed by transmission (transit) or emission (eclipse) spectra are not directly constrained by the observations; they vary as a function of the atmospheric properties and observing wavelengths, and further correlate with the atmospheric properties, producing degenerate solutions. Here, we characterize the properties of exoplanet radii using a radiative-transfer model to compute clear-atmosphere transmission and emission spectra of gas-dominated planets. We explore a wide range of planetary temperatures, masses, and radii, sampling from 300 to 3000 K and from Jupiter- to Earth-like values. We will discuss how transit and photospheric radii vary over the parameter space, and map the global trends in the atmospheric pressures associated with these radii. We will also highlight the biases introduced by the choice of an observing band or the assumption of a clear/cloudy atmosphere, and the relevance these biases take on as better instrumentation improves the precision of photometric observations.
Concept for an International Standard related to Space Weather Effects on Space Systems
NASA Astrophysics Data System (ADS)
Tobiska, W. Kent; Tomky, Alyssa
There is great interest in developing an international standard related to space weather in order to specify the tools and parameters needed for space systems operations. In particular, a standard is important for satellite operators who may not be familiar with space weather. In addition, there are others who participate in space systems operations that would also benefit from such a document. For example, the developers of software systems that provide LEO satellite orbit determination, radio communication availability for scintillation events (GEO-to-ground L and UHF bands), GPS uncertainties, and the radiation environment from ground-to-space for commercial space tourism. These groups require recent historical data, current epoch specification, and forecast of space weather events into their automated or manual systems. Other examples are national government agencies that rely on space weather data provided by their organizations such as those represented in the International Space Environment Service (ISES) group of 14 national agencies. Designers, manufacturers, and launchers of space systems require real-time, operational space weather parameters that can be measured, monitored, or built into automated systems. Thus, a broad scope for the document will provide a useful international standard product to a variety of engineering and science domains. The structure of the document should contain a well-defined scope, consensus space weather terms and definitions, and internationally accepted descriptions of the main elements of space weather, its sources, and its effects upon space systems. 
Appendices will be useful for describing expanded material such as guidelines on how to use the standard, how to obtain specific space weather parameters, and short but detailed descriptions such as when best to use some parameters and not others; appendices provide a path for easily updating the standard since the domain of space weather is rapidly changing with new advances in scientific and engineering understanding. We present a draft outline that can be used as the basis for such a standard.
[Optimize dropping process of Ginkgo biloba dropping pills by using design space approach].
Shen, Ji-Chen; Wang, Qing-Qing; Chen, An; Pan, Fang-Lai; Gong, Xing-Chu; Qu, Hai-Bin
2017-07-01
In this paper, a design space approach was applied to optimize the dropping process of Ginkgo biloba dropping pills. Firstly, potential critical process parameters and potential critical quality attributes were determined through literature research and pre-experiments. Secondly, experiments were carried out according to a Box-Behnken design, and the critical process parameters and critical quality attributes were determined based on the experimental results. Thirdly, second-order polynomial models were used to describe the quantitative relationships between critical process parameters and critical quality attributes. Finally, a probability-based design space was calculated and verified. The verification results showed that efficient production of Ginkgo biloba dropping pills can be guaranteed by operating within the design space. The recommended operating ranges for the critical dropping process parameters were as follows: dropping distance of 5.5-6.7 cm and dropping speed of 59-60 drops per minute, providing a reference for industrial production of Ginkgo biloba dropping pills. Copyright © the Chinese Pharmaceutical Association.
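The second-order polynomial model used in this kind of design-space work, y = b0 + b1·x1 + b2·x2 + b11·x1² + b22·x2² + b12·x1·x2, can be fit by ordinary least squares on coded factor settings. The design points and coefficients below are synthetic stand-ins for the two dropping-process factors, not the paper's data.

```python
import numpy as np

# Fit a two-factor second-order (quadratic) response surface by least squares.
# x1, x2 are coded factor levels (e.g., dropping distance and dropping speed);
# all numbers here are synthetic.

def quadratic_design_matrix(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 ** 2, x2 ** 2, x1 * x2])

x1 = np.array([-1, -1, 1, 1, 0, 0, -1, 1, 0], dtype=float)
x2 = np.array([-1, 1, -1, 1, -1, 1, 0, 0, 0], dtype=float)
true_b = np.array([90.0, 2.0, -3.0, -1.5, -0.5, 1.0])   # assumed coefficients
y = quadratic_design_matrix(x1, x2) @ true_b             # noise-free sketch

b, *_ = np.linalg.lstsq(quadratic_design_matrix(x1, x2), y, rcond=None)
```

A probability-based design space would then sweep (x1, x2) over the operating region and keep the settings where the predicted quality attributes meet specification with high probability.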
Parameter space of experimental chaotic circuits with high-precision control parameters.
de Sousa, Francisco F G; Rubinger, Rero M; Sartorelli, José C; Albuquerque, Holokx A; Baptista, Murilo S
2016-08-01
We report high-resolution measurements that experimentally confirm the spiral cascade structure and the scaling relationship of shrimps in Chua's circuit. Circuits constructed with high-precision control of their parameters allow for a comprehensive characterization of the circuit behaviors through high-resolution parameter spaces. To illustrate the power of this technological development for the creation and study of chaotic circuits, we constructed a Chua circuit and studied its high-resolution parameter space. The reliability and stability of the designed components allowed us to obtain data over long periods of time (∼21 weeks), a data set from which an accurate estimation of Lyapunov exponents for the circuit characterization was possible. Moreover, these data, rigorously characterized by the Lyapunov exponents, allow us to confirm experimentally that the shrimps, stable islands embedded in a domain of chaos in the parameter spaces, can be observed in the laboratory. Finally, we confirm that their sizes decay exponentially with the period of the attractor, a result expected to be found in maps of the quadratic family.
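Stable islands embedded in chaos, analogous to the shrimps discussed above, can be located numerically by scanning a control parameter and estimating the Lyapunov exponent; a minimal sketch for the 1-D logistic map (a member of the quadratic family, not the Chua circuit itself):

```python
import numpy as np

def lyapunov_logistic(r, n_transient=500, n_iter=2000):
    """Finite-time largest Lyapunov exponent of x -> r*x*(1-x)."""
    x = 0.4
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n_iter):               # average log of the local stretching
        x = r * x * (1 - x)
        s += np.log(abs(r * (1 - 2 * x)) + 1e-300)
    return s / n_iter

rs = np.linspace(3.5, 4.0, 501)
lam = np.array([lyapunov_logistic(r) for r in rs])
# Periodic windows (the 1-D analogue of stable islands) show up as lam < 0
# embedded in the chaotic (lam > 0) part of the parameter axis.
print("fraction of scanned parameters that are chaotic:", np.mean(lam > 0))
```

A 2-D version of this scan over two circuit parameters, colored by the sign of the exponent, is exactly the kind of map on which shrimps appear.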
Changes in arctic and boreal ecosystem productivity in response to changes in growing season length
NASA Astrophysics Data System (ADS)
Hmimina, G.; Yu, R.; Billesbach, D. P.; Huemmrich, K. F.; Gamon, J. A.
2017-12-01
Large-scale greening and browning trends have been reported in northern terrestrial ecosystems over the last two decades. The greening is interpreted as increased productivity in response to rising temperatures. Boreal and arctic ecosystem productivity is expected to increase as growing seasons lengthen, resulting in a larger northern carbon sink. While the evidence for such greening based on remotely sensed vegetation indices is compelling, analysis over the sparse network of flux tower sites available in northern latitudes paints a more complex picture, and raises the question of whether vegetation indices based on NIR reflectance at large spatial scales are suited to the analysis of very fragmented landscapes that exhibit strong patterns in snow and standing water cover. In a broader sense, whether "greenness" is a sufficiently good proxy of ecosystem productivity in northern latitudes is unclear. The current work focused on deriving continuous estimates of ecosystem potential productivity and photosynthesis limitation over a network of flux towers, and on analyzing the relationships between potential yearly productivity and the length of the growing season over time and space. A novel partitioning method was used to derive ecophysiological parameters from sparse carbon flux measurements, and those parameters were then used to delimit the growing season and to estimate potential yearly productivity over a wide range of ecosystems. The relationship between these two metrics was then computed for each of the 23 studied sites, which exhibited a wide range of responses to changes in growing season length.
While an overall significant increasing trend in productivity was found (R² = 0.12), the more northern sites exhibited a consistent decreasing trend (0.11 …). The attribution of these trends to either changes in potential productivity or to productivity limitation by abiotic factors will be discussed, as well as the potential of extending this analysis over space by using remote-sensing data along with flux tower data.
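A minimal sketch of the kind of building block used in flux partitioning: fitting a rectangular-hyperbola light-response curve to synthetic daytime NEE to recover ecophysiological parameters (quantum yield, potential maximum uptake, respiration). The functional form, parameter values, and noise level are illustrative assumptions, not the study's actual partitioning method:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

def light_response(par, alpha, pmax, resp):
    """Rectangular hyperbola: NEE = -GPP(PAR) + respiration."""
    return -(alpha * par * pmax) / (alpha * par + pmax) + resp

# Synthetic half-hourly daytime data: PPFD in umol m-2 s-1, NEE with noise.
par = rng.uniform(0, 1500, 300)
nee = light_response(par, 0.05, 20.0, 3.0) + rng.normal(0, 0.8, par.size)

popt, _ = curve_fit(light_response, par, nee, p0=(0.01, 10.0, 1.0))
alpha, pmax, resp = popt
print(f"alpha={alpha:.3f}, Pmax={pmax:.1f}, R={resp:.1f}")
```

Fitting such curves in moving windows through the year yields the continuous parameter estimates from which growing-season limits and potential productivity can be derived.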
NASA Astrophysics Data System (ADS)
Barber, Caitline A.; Gleason, Colin J.
2018-01-01
Hydraulic geometry (HG) has long enabled daily discharge estimates, flood risk monitoring, and water resource and habitat assessments, among other applications. At-many-stations HG (AMHG) is a newly discovered form of HG with an evolving understanding. AMHG holds that there are temporally and spatially invariant ('congruent') depth, width, velocity, and discharge values that are shared by all stations of a river. Furthermore, these river-wide congruent hydraulics have been shown to link at-a-station HG (AHG) in space, contrary to the previous expectation of AHG as spatially unpredictable. To date, AMHG has only been thoroughly examined on six rivers, and its congruent hydraulics are not well understood. To address this limited understanding, we calculated AMHG for 191 rivers in the United States using USGS field-measured data from over 1900 gauging stations. These rivers represent nearly all geologic and climatic settings found in the continental U.S. and allow for a robust assessment of AMHG across scales. Over 60% of rivers were found to have AMHG with strong explanatory power to predict AHG across space (defined as r² > 0.6; 118/191 rivers). We also found that derived congruent hydraulics bear little relation to their observed time-varying counterparts, and the strength of AMHG did not correlate with any available observed or congruent hydraulic parameters. We also found that AMHG is expressed at all fluvial scales in this study. Some statistically significant spatial clusters of rivers with strong and weak AMHG were identified, but further research is needed to identify why these clusters exist. Thus, this first widespread empirical investigation of AMHG leads us to conclude that AMHG is indeed a widely prevalent natural fluvial phenomenon, and we have identified linkages between known fluvial parameters and AMHG. Our work should give confidence to future researchers seeking to perform the necessary detailed hydraulic analysis of AMHG.
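The AMHG idea that every station's log-log width rating passes through one congruent point can be illustrated with synthetic data; the congruent discharge and width values below are hypothetical, chosen only to show how the congruent point is recovered from per-station AHG fits:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stations obeying AMHG exactly: every station's width rating
# w = a * Q**b passes through a shared congruent point (Qc, wc).
Qc, wc = 300.0, 80.0                         # hypothetical congruent values
stations = []
for _ in range(12):
    b = rng.uniform(0.05, 0.35)              # AHG width exponent
    log_a = np.log(wc) - b * np.log(Qc)      # AMHG constraint on the coefficient
    Q = rng.uniform(10, 1000, 40)            # field-measured discharges
    w = np.exp(log_a) * Q**b * np.exp(rng.normal(0, 0.02, 40))
    stations.append((Q, w))

# Recover per-station AHG by log-log regression, then exploit the AMHG
# linkage: log(a) is a linear function of b across stations, with slope
# -log(Qc) and intercept log(wc).
bs, log_as = [], []
for Q, w in stations:
    b_hat, log_a_hat = np.polyfit(np.log(Q), np.log(w), 1)
    bs.append(b_hat)
    log_as.append(log_a_hat)

slope, intercept = np.polyfit(bs, log_as, 1)
Qc_hat, wc_hat = np.exp(-slope), np.exp(intercept)
print(f"recovered congruent point: Qc = {Qc_hat:.0f}, wc = {wc_hat:.0f}")
```

On real gauging data, the scatter of the (b, log a) pairs about this line is what the r² > 0.6 criterion above measures.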
The quantum measurement of time
NASA Technical Reports Server (NTRS)
Shepard, Scott R.
1994-01-01
Traditionally, in non-relativistic Quantum Mechanics, time is considered to be a parameter, rather than an observable quantity like space. In relativistic Quantum Field Theory, space and time are treated equally by reducing space to also be a parameter. Herein, after a brief review of other measurements, we describe a third possibility, which is to treat time as a directly observable quantity.
Galaxy power spectrum in redshift space: Combining perturbation theory with the halo model
NASA Astrophysics Data System (ADS)
Okumura, Teppei; Hand, Nick; Seljak, Uroš; Vlah, Zvonimir; Desjacques, Vincent
2015-11-01
Theoretical modeling of the redshift-space power spectrum of galaxies is crucially important to correctly extract cosmological information from galaxy redshift surveys. The task is complicated by the nonlinear biasing and redshift-space distortion (RSD) effects, which change with halo mass, and by the wide distribution of halo masses and their occupations by galaxies. One of the main modeling challenges is the existence of satellite galaxies, which have both an extended radial distribution and large virial velocities inside the halos, a phenomenon known as the Finger-of-God (FoG) effect. We present a model for the redshift-space power spectrum of galaxies in which we decompose a given galaxy sample into central and satellite galaxies and relate the different contributions to the power spectrum to 1-halo and 2-halo terms in a halo model. Our primary goal is to ensure that any parameters that we introduce have physically meaningful values, and are not just fitting parameters. For the lowest order 2-halo terms we use the previously developed RSD modeling of halos in the context of the distribution function and perturbation theory approach. This term needs to be multiplied by the effect of the radial distances and velocities of satellites inside the halo. To this one needs to add the 1-halo terms, which are nonperturbative. We show that the real-space 1-halo terms can be modeled as almost constant, with the finite extent of the satellites inside the halo inducing a small k²R² term over the range of scales of interest, where R is related to the size of the halo given by its halo mass. We adopt a similar model for FoG in redshift space, ensuring that the FoG velocity dispersion is related to the halo mass. For FoG, k²-type expansions do not work over the range of scales of interest and FoG resummation must be used instead. We test several simple damping functions to model the velocity-dispersion FoG effect.
Applying the formalism to mock galaxies modeled after the "CMASS" sample of the BOSS survey, we find that our predictions for the redshift-space power spectra are accurate up to k ≃ 0.4 h Mpc⁻¹ within 1% if the halo power spectrum is measured using N-body simulations and within 3% if it is modeled using perturbation theory.
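A minimal sketch of a Kaiser-like redshift-space spectrum with the kind of simple FoG damping functions tested above; the bias, growth rate, velocity-dispersion scale, and toy real-space spectrum are all assumptions for illustration, and the paper's full central/satellite halo-model decomposition is not reproduced:

```python
import numpy as np

def pk_redshift(k, mu, pk_real, b=2.0, f=0.5, sigma_v=4.0, damping="lorentzian"):
    """Kaiser-like anisotropic spectrum times a FoG damping factor.
    b: linear bias, f: growth rate, sigma_v: velocity-dispersion scale [Mpc/h]."""
    kaiser = (b + f * mu**2) ** 2 * pk_real
    x = k * mu * sigma_v
    if damping == "lorentzian":
        fog = 1.0 / (1.0 + 0.5 * x**2) ** 2
    elif damping == "gaussian":
        fog = np.exp(-x**2)
    else:
        raise ValueError(damping)
    return kaiser * fog

k = np.logspace(-2, np.log10(0.4), 50)
pk_real = 1e4 * k / (1 + (k / 0.02) ** 2)          # toy real-space spectrum
p_los = pk_redshift(k, mu=1.0, pk_real=pk_real)    # along the line of sight
p_perp = pk_redshift(k, mu=0.0, pk_real=pk_real)   # transverse: no RSD or FoG
print("FoG suppression at k=0.4, mu=1:", p_los[-1] / ((2.0 + 0.5) ** 2 * pk_real[-1]))
```

Comparing the "lorentzian" and "gaussian" options at fixed sigma_v shows why the choice of damping function matters on quasi-linear scales.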
A new technique to expose the hypopharyngeal space: The modified Killian's method.
Sakai, Akihiro; Okami, Kenji; Sugimoto, Ryousuke; Ebisumoto, Koji; Yamamoto, Hikaru; Maki, Daisuke; Saito, Kosuke; Iida, Masahiro
2014-04-01
Recent remarkable progress in endoscopic technology has enabled the detection of superficial cancers that were undetectable in the past. However, even though advanced endoscopic technology can detect early lesions, it is useless unless it can provide wide exposure of an area. By modifying the Killian position, it is possible to observe a wider range of the hypopharyngeal space than is possible with conventional head positions. We report a revolutionary method that uses a new head position to widely open the hypopharynx. The technique is named "the Modified Killian's method." The patient is initially placed in the Killian position and then bent further forward from the original position (i.e., the modified Killian position). While in this position, the patient's head is turned and the Valsalva maneuver is applied. These additional maneuvers constitute the Modified Killian's method and widely expand the hypopharyngeal space. The conventional head position cannot open the hypopharyngeal space sufficiently; however, the Modified Killian's method opens the hypopharyngeal space very widely, enabling observation of the entire circumference of the hypopharyngeal space and the cervical esophageal entry. The Modified Killian's method may become an indispensable technique for observing the hypopharynx and detecting hypopharyngeal cancers. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Multifractal analysis of different hydrological products of X-band radar
NASA Astrophysics Data System (ADS)
Skouri-Plakali, Ilektra; Da Silva Rocha Paz, Igor; Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel
2017-04-01
Rainfall is widely considered as the hydrological process that triggers all the others. Its accurate measurement is crucial, especially when the data are used afterwards for the hydrological modeling of urban and peri-urban catchments for decision-making. Rainfall is a complex process and is scale dependent in space and time; hence, high spatial and temporal resolution of the data is more appropriate for urban modeling, and there is great interest in high-resolution measurements of precipitation in space and time. Radar technologies have not stopped evolving since their first appearance around the mid-twentieth century. Indeed, the turning-point work by Marshall and Palmer (1948) established the Z-R power-law relation that has been widely used since, with major scientific efforts devoted to finding "the best choice" of the two associated parameters. Nowadays, X-band radars, equipped with dual-polarization and Doppler capabilities, offer more accurate data at higher resolution. The fact that drops are oblate induces a differential phase shift between the two polarizations. The quantity most commonly used for the rainfall rate computation is actually the specific differential phase shift, which is the gradient of the differential phase shift along the radial beam direction. It is even more strongly correlated with the rain rate R than the reflectivity Z is, so the rain rate can be computed with a different power-law relation, which again depends on only two parameters. Furthermore, an attenuation correction is needed to compensate for the loss of radar energy due to absorption and scattering as the beam passes through the atmosphere. Due to natural variations of reflectivity with altitude, the vertical profile of reflectivity should be corrected as well. There are some other typical radar data filtering procedures, all resulting in various hydrological products.
In this work, we use the Universal Multifractal framework to analyze and inter-compare different products of the X-band radar operated by Ecole des Ponts ParisTech. Several rainfall events selected from the recent period (2015-2016) were studied over two different embedded grids (64 km × 64 km and 32 km × 32 km, with a resolution of 250 m) covering the test site, using a variety of hydrological products. The obtained results demonstrate that some of these products are much more compatible with scaling ideas than others. Indeed, the choice of data filters and/or data conversion procedures, with their associated parameters, does affect the scaling behavior. In turn, scaling principles help to revisit and further optimize the radar technologies, including the choice of the associated parameters.
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, and the moment, stress, and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length), and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function and then explore the joint a-posteriori probability density function associated with the cost function around that minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with a random exploration of the space (the basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance, and correlation matrix associated with the set of best-fit parameters describing the model.
Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data-space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
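The global exploration strategy described above (deterministic local minimization combined with random jumps) can be sketched with SciPy's basin-hopping on a generalized Brune-type spectrum; the spectral parameterization, true parameter values, and noise level below are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.optimize import basinhopping

rng = np.random.default_rng(2)

# Generalized Brune-type log-amplitude spectrum: plateau log_omega0, corner
# frequency fc, high-frequency fall-off exponent gamma, and an attenuation
# term exp(-pi*f*t_star) from a frequency-independent Q.
def model(f, log_omega0, fc, gamma, t_star):
    return log_omega0 - np.log(1 + (f / fc) ** gamma) - np.pi * f * t_star

f = np.logspace(-1, 1.5, 80)
true = (2.0, 3.0, 2.0, 0.02)
data = model(f, *true) + rng.normal(0, 0.05, f.size)

def misfit(p):  # L2-norm cost function in log-amplitude
    log_omega0, fc, gamma, t_star = p
    if fc <= 0 or gamma <= 0 or t_star < 0:
        return 1e6  # soft bound: reject unphysical parameters
    return np.sum((model(f, log_omega0, fc, gamma, t_star) - data) ** 2)

# Basin hopping: repeated Nelder-Mead local minimizations + random jumps.
result = basinhopping(misfit, x0=[0.0, 1.0, 1.5, 0.0], niter=50,
                      minimizer_kwargs={"method": "Nelder-Mead"}, seed=3)
print("best-fit (log_omega0, fc, gamma, t_star):", result.x)
```

Evaluating the misfit on a grid around `result.x` then gives the joint pdf from which the correlation matrix of the parameters is extracted, as in the abstract.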
An open-source job management framework for parameter-space exploration: OACIS
NASA Astrophysics Data System (ADS)
Murase, Y.; Uchitane, T.; Ito, N.
2017-11-01
We present an open-source software framework for parameter-space exploration, named OACIS, which is useful for managing vast amounts of simulation jobs and results in a systematic way. Recent development of high-performance computers has enabled us to explore parameter spaces comprehensively; in such cases, however, manual management of the workflow is practically impossible. OACIS was developed to reduce the cost of these repetitive tasks by automating job submission and data management. In this article, an overview of OACIS as well as a getting-started guide are presented.
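What such a framework automates can be illustrated with a toy sweep that enumerates a parameter space, runs each job, and archives parameters together with results; this is a stand-in sketch only, not the OACIS API (OACIS is a separate application with its own interface), and `run_simulation` is a hypothetical model:

```python
import itertools
import json
import pathlib
import tempfile

def run_simulation(params):
    # Hypothetical simulator: any function of the parameters.
    return {"energy": params["a"] ** 2 + params["b"]}

def sweep(space, out_dir):
    """Run every point of the Cartesian parameter space and archive results."""
    records = []
    keys = sorted(space)
    for values in itertools.product(*(space[k] for k in keys)):
        params = dict(zip(keys, values))
        records.append({"params": params, "result": run_simulation(params)})
    # Persist parameters alongside results so runs are reproducible/queryable.
    path = pathlib.Path(out_dir) / "results.json"
    path.write_text(json.dumps(records, indent=2))
    return records

space = {"a": [0, 1, 2], "b": [0.0, 0.5]}
records = sweep(space, tempfile.mkdtemp())
print(len(records), "runs archived")
```

A real framework adds what this sketch omits: scheduler submission, retries, provenance of code versions, and a queryable results database.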
Thermal Analysis of Low Layer Density Multilayer Insulation Test Results
NASA Technical Reports Server (NTRS)
Johnson, Wesley L.
2011-01-01
Investigation of the thermal performance of low-layer-density multilayer insulation is important for designing long-duration space exploration missions involving the storage of cryogenic propellants. Theoretical calculations show an analytical optimal layer density, as widely reported in the literature. However, the appropriate test data by which to evaluate these calculations have only recently been obtained. As part of a recent research project, NASA procured several multilayer insulation test coupons for calorimeter testing. These coupons were configured to allow the layer density to be varied from 0.5 to 2.6 layers/mm. The coupon testing was completed using the cylindrical Cryostat-100 apparatus by the Cryogenics Test Laboratory at Kennedy Space Center. The results show the properties of the insulation as a function of layer density at multiple points. Overlaying these new results with data from the literature reveals a minimum layer density; however, the value is higher than predicted. Additionally, the data show that the transition region between high vacuum and no vacuum depends on the spacing of the reflective layers. Historically this spacing has not been taken into account, as thermal performance was calculated as a function of pressure and temperature only; the recent testing, however, shows that the data depend on the Knudsen number, which takes into account pressure, temperature, and layer spacing. These results aid in the understanding of the performance parameters of MLI and help to complete the body of literature on the topic.
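The Knudsen number mentioned above combines pressure, temperature, and layer spacing into a single dimensionless group; a minimal sketch, with a nominal molecular diameter for air taken as an assumption:

```python
import math

def knudsen_number(pressure_pa, temperature_k, gap_m, molecule_diameter_m=3.7e-10):
    """Kn = mean free path / characteristic length.
    Ideal-gas mean free path: lambda = kB*T / (sqrt(2)*pi*d^2*P).
    The default molecular diameter is a nominal value for air (assumption)."""
    kb = 1.380649e-23  # Boltzmann constant, J/K
    mfp = kb * temperature_k / (math.sqrt(2) * math.pi
                                * molecule_diameter_m**2 * pressure_pa)
    return mfp / gap_m

# Layer spacing for an MLI blanket at 1.0 layer/mm is about 1 mm.
for p in (1e-3, 1e-1, 10.0):  # Pa: high vacuum through soft vacuum
    print(f"P = {p:g} Pa -> Kn = {knudsen_number(p, 200.0, 1e-3):.3g}")
```

Large Kn (free-molecular regime) means gas conduction between layers is negligible; as Kn drops toward and below 1, gas conduction grows, which is why the high-vacuum/no-vacuum transition shifts with layer spacing.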
Discovery of wide low and very low-mass binary systems using Virtual Observatory tools
NASA Astrophysics Data System (ADS)
Gálvez-Ortiz, M. C.; Solano, E.; Lodieu, N.; Aberasturi, M.
2017-04-01
The frequency of multiple systems and their properties are key constraints on stellar formation and evolution. Formation mechanisms of very low-mass (VLM) objects are still under considerable debate, and an accurate assessment of their multiplicity and orbital properties is essential for constraining current theoretical models. Taking advantage of Virtual Observatory capabilities, we looked for comoving low-mass and VLM binary (or multiple) systems using the UKIDSS Large Area Survey (LAS) DR10, SDSS DR9, and 2MASS catalogues. Other catalogues (WISE, GLIMPSE, SuperCosmos, etc.) were used to derive the physical parameters of the systems. We report the identification of 36 low-mass and VLM (˜M0-L0 spectral types) candidate binary/multiple systems (separations between 200 and 92 000 au), whose physical association is confirmed through common proper motion, distance, and a low probability of chance alignment. This new list of systems notably increases the previous sampling of the mass-separation parameter space (˜100). We have also found 50 low-mass objects that we can classify as ˜L0-T2 according to their photometric information; only one of these objects presents a common-proper-motion high-mass companion. Although we could not constrain the age of the majority of the candidates, most of them are probably still bound, except for four that may be undergoing disruption. We suggest that our sample could be divided into two populations: one of tightly bound wide VLM systems that are expected to last more than 10 Gyr, and another of weakly bound wide VLM systems that will dissipate within a few Gyr.
NASA Astrophysics Data System (ADS)
Pakhanov, N. A.; Andreev, V. M.; Shvarts, M. Z.; Pchelyakov, O. P.
2018-03-01
Multi-junction solar cells based on III-V compounds are the most efficient converters of solar energy to electricity and are widely used in space solar arrays and terrestrial photovoltaic modules with sunlight concentrators. All modern high-efficiency III-V solar cells are based on the long-developed triple-junction GaInP/GaInAs/Ge heterostructure and have an almost limiting efficiency for this architecture: 30% and 41.6% for space and concentrated terrestrial radiation, respectively. Currently, an increase in efficiency is being pursued by moving from the 3-junction to the more efficient 4-, 5-, and even 6-junction III-V architectures: growth technologies and methods of post-growth treatment of structures have been developed, new materials with optimal bandgaps have been designed, and crystallographic parameters have been improved. In this review, we consider recent achievements and prospects for the main directions of research and improvement of the architectures, technologies, and materials used in laboratories to develop solar cells with the best conversion efficiencies: 35.8% for space radiation, 38.8% for terrestrial radiation, and 46.1% for concentrated sunlight. It is expected that by 2020 the efficiency will approach 40% for direct space radiation and 50% for concentrated terrestrial solar radiation. This review considers the architecture and technologies of solar cells with record-breaking efficiency for terrestrial and space applications. It should be noted that in terrestrial power plants, the use of III-V solar cells is economically advantageous in systems with sunlight concentrators.
Development and Evaluation of Titanium Spacesuit Bearings
NASA Technical Reports Server (NTRS)
Rhodes, Richard; Battisti, Brian; Ytuarte, Raymond, Jr.; Schultz, Bradley
2016-01-01
The Z-2 Prototype Planetary Extravehicular Space Suit Assembly is a continuation of NASA's Z-series of spacesuits, designed with the intent of meeting a wide variety of exploration mission objectives, including human exploration of the Martian surface. Incorporating titanium bearings into the Z-series space suit architecture allows us to reduce mass by an estimated 23 lbs per suit system compared to the previously used stainless steel bearing race designs, without compromising suit functionality. There are two obstacles to overcome when using titanium for a bearing race: (1) titanium is flammable when exposed to the oxygen-wetted environment inside the space suit, and (2) titanium's poor wear properties are often challenging to overcome in tribology applications. In order to evaluate the ignitability of a titanium space suit bearing, a series of tests was conducted at White Sands Test Facility (WSTF) that subjected the bearings to an extreme test profile, with multiple failures embedded in the test bearings. The testing showed no signs of ignition in the most extreme test cases; however, substantial wear of the bearing races was observed. In order to design a bearing that can last an entire exploration mission (approx. 3 years), design parameters for maximum contact stress need to be identified. To identify these design parameters, bearing test rigs were developed that allow for the quick evaluation of various bearing ball loads, ball diameters, lubricants, and surface treatments. These test data will allow designers to minimize the titanium bearing mass for a specific material and lubricant combination and to design around a cycle-life requirement for an exploration mission.
This paper reviews the current research and testing that has been performed on titanium bearing races to evaluate the use of such materials in an enriched oxygen environment and to optimize the bearing assembly mass and tribological properties to accommodate for the high bearing cycle life for an exploration mission.
Effects of housing density and cage floor space on C57BL/6J mice
Smith, A.L.; Mabus, S.L.; Stockwell, J.D.; Muir, C.
2004-01-01
The Guide for the Care and Use of Laboratory Animals (the Guide) is widely accepted as the housing standard by most Institutional Animal Care and Use Committees. Its recommendations are based on best professional judgment rather than experimental data, and current efforts are directed toward replacing these guidelines with data-driven, species-appropriate standards. Our studies were undertaken to determine the optimum housing density for C57BL/6J mice, the most commonly used inbred mouse strain. Four-week-old mice were housed for 8 weeks at four densities (from the recommended ≥12 in² [ca. 77.4 cm²]/mouse down to 5.6 in² [ca. 36.1 cm²]/mouse) in three cage types with various amounts of floor space. Housing density did not affect a variety of physiologic parameters but did affect certain micro-environmental parameters, although these remained within accepted ranges. A second study was undertaken housing C57BL/6J mice with as little as 3.2 in² (ca. 20.6 cm²) of floor space per mouse. The major effect was elevated ammonia concentrations, which at the increased housing densities exceeded limits acceptable in the workplace; however, the nasal passages and eyeballs of the mice remained microscopically normal. On the basis of these results, we conclude that C57BL/6J mice as large as 29 g may be housed with 5.6 in² of floor space per mouse, approximately half the floor space recommended in the Guide. The role of the Guide is to ensure that laboratory animals are well treated and housed in a species-appropriate manner. Our data suggest that current policies could be altered to provide optimal habitation conditions matched to this species' social needs. Copyright 2004 by the American Association for Laboratory Animal Science.
Method of measuring the dc electric field and other tokamak parameters
Fisch, Nathaniel J.; Kirtz, Arnold H.
1992-01-01
A method including externally imposing an impulsive momentum-space flux to perturb hot tokamak electrons, thereby producing a transient synchrotron radiation signal in frequency-time space, and the inference, using very fast algorithms, of plasma parameters including the effective ion charge state Z_eff, the direction of the magnetic field, and the position and width in velocity space of the impulsive momentum-space flux, and, in particular, the dc toroidal electric field.
The Galaxy Count Correlation Function in Redshift Space Revisited
NASA Astrophysics Data System (ADS)
Campagne, J.-E.; Plaszczynski, S.; Neveu, J.
2017-08-01
In the near future, cosmology will enter the wide and deep galaxy survey era, enabling high-precision studies of the large-scale structure of the universe in three dimensions. To test cosmological models and determine their parameters accurately, it is necessary to use data with exact theoretical expectations expressed in observational parameter space (angles and redshift). The data-driven galaxy number count fluctuations on redshift shells can be used to build correlation functions ξ(θ, z₁, z₂) on and between shells to probe the baryonic acoustic oscillations and distance-redshift distortions, as well as gravitational lensing and other relativistic effects. To obtain a numerical estimation of ξ(θ, z₁, z₂) from a cosmological model, it is typical to use either a closed form derived from a tripolar spherical expansion or to compute the power spectrum C_ℓ(z₁, z₂) and perform a Legendre polynomial P_ℓ(cos θ) expansion. Here, we present a new derivation of a ξ(θ, z₁, z₂) closed form using the spherical harmonic expansion and proceeding to an infinite sum over multipoles thanks to an addition theorem. We demonstrate that this new expression is perfectly compatible with the existing closed forms but is simpler to establish and manipulate. We provide formulas for the leading density and redshift-space contributions, but also show how Doppler-like and lensing terms can be easily included in this formalism. We have implemented and made publicly available software for computing those correlations efficiently, without any Limber approximation, and validated this software with the CLASSgal code. It is available at https://gitlab.in2p3.fr/campagne/AngPow.
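The multipole route mentioned above, summing C_ℓ against Legendre polynomials, can be sketched for a single shell as ξ(θ) = Σ_ℓ (2ℓ+1)/(4π) C_ℓ P_ℓ(cos θ); the toy spectrum below is an assumption, and the paper's closed form is precisely a way to avoid truncating this sum:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def xi_from_cl(theta_rad, cl):
    """Angular correlation function from a power spectrum:
    xi(theta) = sum_l (2l+1)/(4*pi) * C_l * P_l(cos(theta))."""
    ell = np.arange(len(cl))
    coeffs = (2 * ell + 1) / (4 * np.pi) * cl
    return legval(np.cos(theta_rad), coeffs)  # evaluates the Legendre series

# Toy spectrum with power only at l=2, so xi(theta) = 5*C2/(4*pi)*P2(cos theta).
cl = np.zeros(10)
cl[2] = 1.0
theta = np.array([0.0, np.pi / 2])
print(xi_from_cl(theta, cl))
```

With realistic spectra the sum converges slowly at small θ, which motivates closed forms and careful control of the effective multipole range.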
Shotorban, Babak
2010-04-01
The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work, the basis functions are set to be Gaussian, for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs), depending on which phase-space variables are approximated by Gaussian functions. Three sample problems are studied: a univariate double-well potential, a bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas. The LSQKD model is verified for these problems by comparing its results against those of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem, it is observed that for low to moderate diffusivity the dynamic LSQKD model predicts the stationary PDF well, for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both of these problems the least-squares approximation is made on all phase-space variables, resulting in a set of ODEs, with time as the independent variable, for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable, leading to a set of PDEs with time and particle position as independent variables. Upon solving these PDEs, very good performance by LSQKD is observed for a wide range of diffusivities.
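For a linear-drift (Ornstein-Uhlenbeck) Fokker-Planck equation, a single Gaussian basis function is exact, and the Gaussian-parameter equations reduce to two ODEs for the mean and variance; this toy case illustrates the parameter-ODE structure described above, and is not one of the three problems studied in the paper:

```python
# For the Ornstein-Uhlenbeck Fokker-Planck equation
#   dp/dt = d/dx(gamma*x*p) + D*d2p/dx2,
# a single Gaussian with mean m and variance v is an exact solution, and the
# parameter equations reduce to the ODEs
#   dm/dt = -gamma*m,   dv/dt = -2*gamma*v + 2*D.
gamma, D = 1.0, 0.5
m, v = 2.0, 0.1           # initial Gaussian parameters
dt = 1e-3
for _ in range(10_000):   # forward-Euler integration to t = 10
    m += dt * (-gamma * m)
    v += dt * (-2.0 * gamma * v + 2.0 * D)

print(f"mean -> {m:.4f}, variance -> {v:.4f}; stationary variance D/gamma = {D / gamma}")
```

In the general (nonlinear-drift) case the least-squares projection produces coupled equations for the parameters of several Gaussians instead of this single closed pair.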
DOE Office of Scientific and Technical Information (OSTI.GOV)
N'Diaye, Mamadou; Pueyo, Laurent; Soummer, Rémi, E-mail: mamadou@stsci.edu
The Apodized Pupil Lyot Coronagraph (APLC) is a diffraction suppression system installed in the recently deployed instruments Palomar/P1640, Gemini/GPI, and VLT/SPHERE to allow direct imaging and spectroscopy of circumstellar environments. Using a prolate apodization, the current implementations offer raw contrasts down to 10⁻⁷ at 0.2 arcsec from a star over a wide bandpass (20%), in the presence of central obstruction and struts, enabling the study of young or massive gaseous planets. Observations of older or lighter companions at smaller separations would require improvements in terms of the inner working angle (IWA) and contrast, but the methods originally used for these designs were not able to fully explore the parameter space. We propose a novel approach to improve the APLC performance. Our method relies on the linear properties of the coronagraphic electric field with the apodization at any wavelength to develop numerical solutions producing coronagraphic star images with a high-contrast region in broadband light. We explore the parameter space by considering different aperture geometries, contrast levels, dark-zone sizes, bandpasses, and focal plane mask sizes. We present an application of these solutions to the case of Gemini/GPI with a design delivering a 10⁻⁸ raw contrast at 0.19 arcsec and offering a significantly reduced sensitivity to low-order aberrations compared to the current implementation. Optimal solutions have also been found to reach 10⁻¹⁰ contrast in broadband light regardless of the aperture shape, with effective IWA in the 2-3.5 λ/D range, therefore making the APLC a suitable option for the future exoplanet direct imagers on the ground or in space.
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, T. L.; Lau, William K. M. (Technical Monitor)
2002-01-01
A characteristic feature of rainfall statistics is that, in general, they depend on the space and time scales over which rain data are averaged. As part of an earlier effort to determine the sampling error of satellite rain averages, a space-time model of rainfall statistics was developed to describe the statistics of gridded rain observed in GATE. The model allows one to compute the second moment statistics of space- and time-averaged rain rate, which can be fitted to satellite or rain gauge data to determine the four model parameters appearing in the precipitation spectrum: an overall strength parameter, a characteristic length separating the long and short wavelength regimes, a characteristic relaxation time for decay of the autocorrelation of the instantaneous local rain rate, and a certain 'fractal' power law exponent. For area-averaged instantaneous rain rate, this exponent governs the power law dependence of these statistics on the averaging length scale $L$ predicted by the model in the limit of small $L$. In particular, the variance of rain rate averaged over an $L \times L$ area exhibits a power law singularity as $L \rightarrow 0$. In the present work the model is used to investigate how the statistics of area-averaged rain rate over the tropical Western Pacific, measured with shipborne radar during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean Atmospheric Response Experiment) and gridded on a 2 km grid, depend on the size of the spatial averaging scale. Good agreement is found between the data and predictions from the model over a wide range of averaging length scales.
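A short sketch of how such a scale-dependence exponent can be recovered in practice (the exponent and amplitude below are made-up values, not fitted GATE or TOGA COARE parameters): synthetic variances following $\mathrm{Var}[R_L] \sim c\,L^{-\gamma}$ are generated and the exponent is estimated by log-log regression.

```python
import numpy as np

# Hedged sketch: estimate the power-law exponent of variance vs. averaging
# scale L from noisy synthetic data Var[R_L] = c * L^(-gamma).
# gamma_true and c are arbitrary illustrative values.
rng = np.random.default_rng(0)
gamma_true, c = 0.45, 2.0
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])   # averaging scales (km)
var_L = c * L**(-gamma_true) * np.exp(rng.normal(0, 0.02, L.size))

slope, intercept = np.polyfit(np.log(L), np.log(var_L), 1)
gamma_hat = -slope
print(f"estimated exponent gamma = {gamma_hat:.3f} (true {gamma_true})")
```

The same log-log fit applied to observed gridded variances is one simple way to read off the 'fractal' exponent the model predicts.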
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.; Tubbs, J. D.
1982-01-01
A five-parameter bivariate gamma distribution (BGD) having two shape parameters, two location parameters, and a correlation parameter is investigated. This general BGD is expressed as a double series and as a single series of the modified Bessel function. It reduces to the known special case for equal shape parameters. Practical functions for computer evaluation of the general BGD and of special cases are presented. Applications to wind gust modeling for the ascent flight of the Space Shuttle are illustrated.
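For the equal-shape special case, the density can be written with a single modified Bessel function (the Kibble-type form; the shape and correlation values below are arbitrary choices for illustration, not the paper's wind-gust parameters). A hedged numerical sketch, with a sanity check that the density integrates to one and has the expected marginal mean:

```python
import numpy as np
from scipy.special import iv, gamma as Gamma

# Hedged sketch: Kibble-type bivariate gamma density with equal shape
# parameters alpha, unit scale, and correlation rho, written with the
# modified Bessel function I_{alpha-1}.
def kibble_bgd(x, y, alpha, rho):
    z = 2.0 * np.sqrt(rho * x * y) / (1.0 - rho)
    return ((x * y) ** ((alpha - 1) / 2.0) * rho ** (-(alpha - 1) / 2.0)
            / (Gamma(alpha) * (1.0 - rho))
            * np.exp(-(x + y) / (1.0 - rho)) * iv(alpha - 1, z))

# numerical sanity check: density integrates to ~1 and E[X] ~ alpha
alpha, rho = 2.0, 0.5
t = np.linspace(1e-6, 30.0, 600)
X, Y = np.meshgrid(t, t, indexing="ij")
f = kibble_bgd(X, Y, alpha, rho)
total = np.trapz(np.trapz(f, t, axis=1), t)
mean_x = np.trapz(np.trapz(f * X, t, axis=1), t)
print(f"integral = {total:.4f}, E[X] = {mean_x:.4f}")
```

The marginals of this form are standard Gamma(alpha) densities, which is what the mean check exercises.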
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
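A minimal sketch of the underlying projection idea (not the paper's adaptive scheme; the system, snapshot parameters, and basis size are all invented for illustration): a parameter-affine system is solved at a few snapshot parameters, an orthonormal basis is extracted by SVD, and the reduced system is solved at an unseen parameter value.

```python
import numpy as np

# Hedged sketch of projection-based parametric model order reduction:
# K(p) = K0 + p*K1 is solved at snapshot parameters, a reduced basis V is
# built from the snapshots, and V^T K(p) V q = V^T f is solved for new p.
rng = np.random.default_rng(1)
n = 200
K0 = np.diag(np.arange(1.0, n + 1.0)) + 0.01 * rng.standard_normal((n, n))
K0 = 0.5 * (K0 + K0.T)                       # symmetrize
K1 = np.diag(np.linspace(0.5, 2.0, n))
f = rng.standard_normal(n)

snapshots = [np.linalg.solve(K0 + p * K1, f) for p in (0.0, 0.5, 1.0)]
U, s, _ = np.linalg.svd(np.column_stack(snapshots), full_matrices=False)
V = U[:, :3]                                  # reduced basis (3 modes)

p_test = 0.7                                  # unseen parameter value
x_full = np.linalg.solve(K0 + p_test * K1, f)
q = np.linalg.solve(V.T @ (K0 + p_test * K1) @ V, V.T @ f)
err = np.linalg.norm(V @ q - x_full) / np.linalg.norm(x_full)
print(f"relative reduction error at p={p_test}: {err:.2e}")
```

The reduced solve involves a 3x3 system instead of a 200x200 one; the adaptive method in the paper addresses how to enrich such a basis during optimization rather than building it offline.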
New SECAA/NSSDC Capabilities for Accessing ITM Data
NASA Astrophysics Data System (ADS)
Bilitza, D.; Papitashvili, N.; McGuire, R.
NASA's National Space Science Data Center (NSSDC) archives a large volume of data and models that are of relevance to the International Living with a Star (ILWS) project. Working with NSSDC, its sister organization, the Sun Earth Connection Active Archive (SECAA), has developed a number of data access and browse tools to facilitate user access to this important data source. For the most widely used empirical models (IRI, IGRF, MSIS/CIRA, AE/AP-8), Java-based web interfaces let users compute, list, plot, and download model parameters. We will report on recent enhancements and extensions of these data and model services in the area of ionospheric-thermospheric-mesospheric (ITM) physics. The ATMOWeb system (http://nssdc.gsfc.nasa.gov/atmoweb/) includes data from many of the ITM satellite missions of the sixties, seventies, and eighties (BE-B, DME-A, Alouette 2, AE-B, OGO-6, ISIS-1, ISIS-2, AEROS-A, AE-C, AE-D, AE-E, DE-2, and Hinotori). In addition to time series plots and data retrievals, ATMOWeb now lets users generate scatter plots and linear regression fits for any pair of parameters. Optional upper and lower boundaries let users filter out specific segments of the data and/or certain ranges of orbit parameters (altitude, longitude, local time, etc.). Data from TIMED is being added to the CDAWeb system, including new web service capabilities, to be available jointly with the broad scope of space physics data already served by CDAWeb. We will also present the newest version of the NSSDC/SECAA models web pages. The layout and sequence of these entry pages to the models catalog, archive, and web interfaces were greatly simplified and brought up to date.
NASA Astrophysics Data System (ADS)
Diamond, Miriam
The dark photon (A'), the gauge boson carrier of a hypothetical new force, has been proposed in a wide range of Beyond the Standard Model (BSM) theories, and could serve as our window to an entire dark sector. A massive A' could decay back to the Standard Model (SM) with a significant branching fraction, through kinetic mixing with the SM photon. If this A' can be produced from decays of a dark scalar that mixes with the SM Higgs boson, collider searches involving leptonic final states provide promising discovery prospects with rich phenomenology. This work presents the results of a search for dark photons in the mass range 0.2 ≤ mA' ≤ 10 GeV decaying into collimated jets of light leptons and mesons, so-called "lepton-jets". It employs 3.57 fb-1 of data from proton--proton collisions at a centre-of-mass energy of √s = 13 TeV, collected during 2015 with the ATLAS detector at the LHC. No deviations from SM expectations are observed. Limits on benchmark models predicting Higgs boson decays to A's are derived as a function of the A' lifetime; limits are also established in the parameter space of mA' vs. the kinetic mixing parameter ε. These extend the limits obtained in a similar search previously performed during Run 1 of the LHC, to include dark photon masses 2 ≤ mA' ≤ 10 GeV and to cover higher ε values for 0.2 ≤ mA' ≤ 2 GeV, and are complementary to various other ATLAS A' searches. As data-taking continues at the LHC, the reach of lepton-jet analyses will continue to expand in model coverage and in parameter space.
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600, when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets, and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
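The core kernel being batched here is the solution of Kepler's equation, M = E - e sin E, for the eccentric anomaly E. A hedged scalar sketch of that computation via Newton's method (the GPU batching and CUDA launch details are omitted; the starting-guess rule is a common convention, not necessarily the paper's):

```python
import math

# Hedged sketch: solve Kepler's equation M = E - e*sin(E) for E with
# Newton's method. This is the per-system kernel a GPU version would
# evaluate in parallel across many (M, e) pairs.
def solve_kepler(M, e, tol=1e-12, max_iter=50):
    M = math.fmod(M, 2.0 * math.pi)          # reduce the mean anomaly
    E = M if e < 0.8 else math.pi            # common starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(2.0, 0.3)
print(E, E - 0.3 * math.sin(E))              # second value recovers M = 2.0
```

The precision issue the abstract describes arises upstream of this solver, when M itself is computed from the observation time and orbital period in single precision.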
Development of adaptive control applied to chaotic systems
NASA Astrophysics Data System (ADS)
Rhode, Martin Andreas
1997-12-01
Continuous-time derivative control and adaptive map-based recursive feedback control techniques are used to control chaos in a variety of systems and in situations that are of practical interest. The theoretical part of the research includes the review of fundamental concepts of control theory in the context of its applications to deterministic chaotic systems, the development of a new adaptive algorithm to identify the linear system properties necessary for control, and the extension of the recursive proportional feedback control technique, RPF, to high-dimensional systems. Chaos control was applied to models of a thermal pulsed combustor, electrochemical dissolution, and the hyperchaotic Rössler system. Important implications for combustion engineering were suggested by successful control of the model of the thermal pulsed combustor. The system was automatically tracked while maintaining control into regions of parameter and state space where no stable attractors exist. In a simulation of the electrochemical dissolution system, application of derivative control to stabilize a steady state, and of adaptive RPF to stabilize a period-one orbit, was demonstrated. The high-dimensional adaptive control algorithm was applied in a simulation using the hyperchaotic Rössler system, where a period-two orbit with two unstable directions was stabilized and tracked over a wide range of a system parameter. In the experimental part, the electrochemical system was studied in parameter space by scanning the applied potential and the frequency of the rotating copper disk. The automated control algorithm is demonstrated to be effective when applied to stabilize a period-one orbit in the experiment. We show the necessity of small random perturbations applied to the system in order to both learn the dynamics and control the system at the same time. The simultaneous learning and control capability is shown to be an important part of the active feedback control.
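A minimal sketch of proportional feedback control of chaos, in the spirit of RPF but not the dissertation's algorithm (the logistic map, gain formula, and capture window are textbook-style choices for illustration): small parameter perturbations, applied only near the unstable fixed point, stabilize a chaotic orbit.

```python
# Hedged sketch: proportional feedback control of the chaotic logistic map
# x' = r*x*(1-x). The unstable fixed point is stabilized by perturbing r
# only when the orbit enters a small capture window around it.
r0 = 3.8
x_star = 1.0 - 1.0 / r0                       # unstable fixed point
f_x = 2.0 - r0                                # df/dx at the fixed point
f_r = x_star * (1.0 - x_star)                 # df/dr at the fixed point
gain = -f_x / f_r                             # place the linear multiplier at zero

x = 0.3
controlled = False
for _ in range(2000):
    if abs(x - x_star) < 0.05:                # capture window: control is on
        r = r0 + gain * (x - x_star)
        controlled = True
    else:
        r = r0                                # otherwise run the chaotic map
    x = r * x * (1.0 - x)

print(controlled, abs(x - x_star))            # orbit pinned to the fixed point
```

Because the chaotic orbit is dense on the attractor, it eventually visits the window, after which the linearized closed-loop multiplier of zero gives very fast convergence; the adaptive versions discussed above additionally estimate f_x and f_r online.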
Hou, Chen; Gheorghiu, Stefan; Huxley, Virginia H.; Pfeifer, Peter
2010-01-01
The space-filling fractal network in the human lung creates a remarkable distribution system for gas exchange. Landmark studies have illuminated how the fractal network guarantees minimum energy dissipation, slows air down with minimum hardware, maximizes the gas-exchange surface area, and creates respiratory flexibility between rest and exercise. In this paper, we investigate how the fractal architecture affects oxygen transport and exchange under varying physiological conditions, with respect to performance metrics not previously studied. We present a renormalization treatment of the diffusion-reaction equation which describes how oxygen concentrations drop in the airways as oxygen crosses the alveolar membrane system. The treatment predicts oxygen currents across the lung at different levels of exercise which agree with measured values within a few percent. The results exhibit wide-ranging adaptation to changing process parameters, including maximum oxygen uptake rate at minimum alveolar membrane permeability, the ability to rapidly switch from a low oxygen uptake rate at rest to high rates at exercise, and the ability to maintain a constant oxygen uptake rate in the event of a change in permeability or surface area. We show that alternative, less than space-filling architectures perform sub-optimally and that optimal performance of the space-filling architecture results from a competition between underexploration and overexploration of the surface by oxygen molecules. PMID:20865052
Implementation of a Space Weather VOEvent service at IRAP in the frame of Europlanet H2020 PSWS
NASA Astrophysics Data System (ADS)
Gangloff, M.; André, N.; Génot, V.; Cecconi, B.; Le Sidaner, P.; Bouchemit, M.; Budnik, E.; Jourdane, N.
2017-09-01
Under Horizon 2020, the Europlanet Research Infrastructure includes PSWS (Planetary Space Weather Services), a set of new services that extend the concepts of space weather and space situational awareness to other planets of our solar system. One of these services is an Alert service associated in particular with a heliospheric propagator tool for solar wind predictions at planets, a meteor shower prediction tool, and a cometary tail crossing prediction tool. This Alert service is based on VOEvent, an international standard proposed by the IVOA and widely used by the astronomy community. The VOEvent standard provides a means of describing transient celestial events in a machine-readable format. VOEvent is associated with VTP, the VOEvent Transfer Protocol, which defines the system by which VOEvents may be disseminated to the community. This presentation will focus on the enhancements of the VOEvent standard necessary to take into account the needs of the solar system community, and on Comet, a freely available and open-source implementation of VTP used by PSWS for its Alert service. Comet is deployed by several partners of PSWS, including IRAP and Observatoire de Paris. A use case will be presented for the heliospheric propagator tool, based on extreme solar wind pressure pulses predicted at planets and probes from a 1D MHD model and real-time observations of solar wind parameters.
High sensitivity microchannel plate detectors for space extreme ultraviolet missions.
Yoshioka, K; Homma, T; Murakami, G; Yoshikawa, I
2012-08-01
Microchannel plate (MCP) detectors have been widely used as two-dimensional photon counting devices on numerous space EUV (extreme ultraviolet) missions. Although there are other choices for EUV photon detectors, the characteristic features of MCP detectors such as their light weight, low dark current, and high spatial resolution make them more desirable for space applications than any other detector. In addition, it is known that the photocathode can be tailored to increase the quantum detection efficiency (QDE), especially for longer UV wavelengths (100-150 nm). There are many types of photocathode materials available, typically alkali halides. In this study, we report on the EUV (50-150 nm) QDE evaluations for MCPs that were coated with Au, MgF2, CsI, and KBr. We confirmed that CsI and KBr show 2-100 times higher QDEs than bare MCPs, while Au and MgF2 show reduced QDEs. In addition, the optimal geometrical parameters for the CsI deposition were also studied experimentally. The best CsI thickness was found to be 150 nm, and it should be deposited on the inner wall of the channels only where the EUV photons initially impinge. We will also discuss the techniques and procedures for reducing the degradation of the photocathode while it is being prepared on the ground before being deployed in space, as adopted by JAXA's EXCEED mission which will be launched in 2013.
Li, Yang; Zhang, Zhenjun; Liao, Zhenhua; Mo, Zhongjun; Liu, Weiqiang
2017-10-01
Finite element models have been widely used to predict biomechanical parameters of the cervical spine. Previous studies investigated the influence of position of rotational centers of prostheses on cervical biomechanical parameters after 1-level total disc replacement. The purpose of this study was to explore the effects of axial position of rotational centers of prostheses on cervical biomechanics after 2-level total disc replacement. A validated finite element model of C3-C7 segments and 2 prostheses, including the rotational center located at the superior endplate (SE) and inferior endplate (IE), was developed. Four total disc replacement models were used: 1) IE inserted at C4-C5 disc space and IE inserted at C5-C6 disc space (IE-IE), 2) IE-SE, 3) SE-IE, and 4) SE-SE. All models were subjected to displacement control combined with a 50 N follower load to simulate flexion and extension motions in the sagittal plane. For each case, biomechanical parameters, including predicted moments, range of rotation at each level, facet joint stress, and von Mises stress on the ultra-high-molecular-weight polyethylene core of the prostheses, were calculated. The SE-IE model resulted in significantly lower stress at the cartilage level during extension and at the ultra-high-molecular-weight polyethylene cores when compared with the SE-SE construct and did not generate hypermotion at the C4-C5 level compared with the IE-SE and IE-IE constructs. Based on the present analysis, the SE-IE construct is recommended for treating cervical disease at the C4-C6 level. This study may provide a useful model to inform clinical operations.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function of each prediction is then approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
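The two-step idea (a cheap polynomial surrogate of the forward model, then quasi-Monte Carlo sampling of the surrogate posterior) can be sketched in one dimension; here a Chebyshev interpolant stands in for the sparse grid, and the forward model, noise level, and point counts are invented for illustration:

```python
import numpy as np
from scipy.stats import qmc

# Hedged sketch: surrogate + quasi-Monte Carlo posterior estimation.
def forward(theta):                         # stand-in for an expensive model
    return np.exp(theta)

data, sigma = forward(0.4), 0.05            # synthetic observation at theta=0.4

# step 1: polynomial surrogate of the forward model on [0, 1]
# (11 "expensive" runs at Chebyshev points)
nodes = 0.5 * (1.0 + np.cos(np.pi * np.arange(11) / 10))
cheb = np.polynomial.chebyshev.Chebyshev.fit(nodes, forward(nodes), 10,
                                             domain=[0.0, 1.0])

# step 2: quasi-Monte Carlo sampling of the surrogate posterior (flat prior)
theta = qmc.Sobol(d=1, scramble=True, seed=0).random(2**12).ravel()
w = np.exp(-0.5 * ((cheb(theta) - data) / sigma) ** 2)   # posterior weights
post_mean = np.sum(w * theta) / np.sum(w)
print(f"posterior mean estimate: {post_mean:.3f}")
```

Every posterior evaluation hits the surrogate rather than the forward model, which is where the claimed speedup comes from; the sparse grid generalizes the interpolation step to many dimensions.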
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost of estimating thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. Regions of poor configuration space overlap are detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, since neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating, with high precision, the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water. We also compute the entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states, and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
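The linear basis function trick is easy to illustrate in isolation (this is a hedged toy, not the paper's code: one pair-distance sample, unit sigma, and a single epsilon-linear term): when the potential is linear in a parameter, per-configuration basis energies are computed once and the energy at any unsampled parameter value is a cheap linear combination.

```python
import numpy as np

# Hedged sketch: the Lennard-Jones energy is linear in epsilon, so energies
# at 1000 unsampled epsilon values follow from one stored basis evaluation.
rng = np.random.default_rng(2)
r = rng.uniform(0.9, 2.5, size=2000)          # pair distances from "sampled" configs

def lj(r, eps, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

basis = lj(r, eps=1.0)                        # basis energies, computed once
eps_grid = np.linspace(0.1, 5.0, 1000)        # 1000 unsampled parameter values
U_linear = np.outer(eps_grid, basis)          # linear combination, instant
U_direct = np.array([lj(r, e) for e in eps_grid])  # brute-force recomputation
print(np.max(np.abs(U_linear - U_direct)))    # agreement to round-off
```

These reweighted energy matrices are exactly the inputs MBAR needs to estimate free energies at all the unsampled parameter combinations.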
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
Oliveira, G M; de Oliveira, P P; Omar, N
2001-01-01
Cellular automata (CA) are important as prototypical, spatially extended, discrete dynamical systems. Because the problem of forecasting the dynamic behavior of CA is undecidable, various parameter-based approximations have been developed to address the problem. From an analysis of the most important parameters available to this end, we proposed some guidelines that should be followed when defining a parameter of that kind. Based upon these guidelines, new parameters were proposed and a set of five parameters was selected; two of them were drawn from the literature and three are new ones, defined here. This article presents all of them and makes their qualities evident. Then, two results are described related to the use of the parameter set in the Elementary Rule Space: a phase transition diagram, and some general heuristics for forecasting the dynamics of one-dimensional CA. Finally, as an example of the application of the selected parameters in high-cardinality spaces, results are presented from experiments involving the evolution of radius-3 CA in the Density Classification Task, and radius-2 CA in the Synchronization Task.
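One of the classic forecasting parameters of this kind, Langton's lambda, can be read directly off an elementary CA rule table as the fraction of neighborhoods mapped to the active state (a hedged illustration; the paper's own five parameters are not reproduced here). A short rule-110 run shows the dynamics such parameters try to anticipate:

```python
import numpy as np

# Hedged sketch: Langton's lambda for an elementary CA rule, plus a short
# simulation of rule 110 from a single seed on a periodic lattice.
def langton_lambda(rule):
    return bin(rule).count("1") / 8.0          # fraction of 1s in the rule table

def step(cells, rule):
    # neighborhood index = 4*left + 2*center + 1*right, periodic boundary
    idx = 4 * np.roll(cells, 1) + 2 * cells + np.roll(cells, -1)
    return (rule >> idx) & 1

cells = np.zeros(101, dtype=int)
cells[50] = 1                                  # single-seed initial condition
for _ in range(50):
    cells = step(cells, 110)

print(langton_lambda(110), cells.sum())        # lambda = 0.625, activity persists
```

Rule 110 (lambda = 5/8) sustains complex activity, whereas a quiescent rule like rule 0 (lambda = 0) dies immediately, the kind of coarse distinction a single scalar parameter can capture.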
An Optimized Trajectory Planning for Welding Robot
NASA Astrophysics Data System (ADS)
Chen, Zhilong; Wang, Jun; Li, Shuting; Ren, Jun; Wang, Quan; Cheng, Qunchao; Li, Wentao
2018-03-01
In order to improve welding efficiency and quality, this paper studies the joint planning of welding process parameters and the spatial trajectory of a welding robot, and proposes a trajectory planning method with high real-time performance, strong controllability, and small welding error. By adding a virtual joint at the end-effector, an appropriate virtual joint model is established and the welding process parameters are represented by the virtual joint variables. The trajectory planning is carried out in the robot joint space, which makes control of the welding process parameters more intuitive and convenient. By using the virtual joint model together with the affine invariance of B-spline curves, the welding process parameters are controlled indirectly by controlling the motion curves of the real joints. With minimum time as the optimization objective, the welding process parameters and the joint-space trajectory are planned jointly.
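The virtual-joint idea can be sketched simply (a hedged toy, not the paper's planner: the waypoint values, the choice of process parameter, and the cubic spline are all assumptions): a process parameter is appended to the real joint vector as an extra coordinate, and one B-spline is planned through waypoints in this extended joint space.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hedged sketch: plan a single cubic B-spline through waypoints in an
# extended joint space [joint1, joint2, virtual joint], where the virtual
# joint stands in for a welding process parameter (e.g., wire feed rate).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # time knots (s)
waypoints = np.array([                         # [joint1, joint2, virtual joint]
    [0.00, 0.50, 2.0],
    [0.30, 0.65, 2.4],
    [0.55, 0.80, 2.4],
    [0.75, 0.70, 2.1],
    [0.90, 0.55, 2.0],
])
spline = make_interp_spline(t, waypoints, k=3)  # vector-valued cubic B-spline

q = spline(1.5)                                # joints + process value at t=1.5 s
print(q, spline(2.0))                          # spline interpolates the waypoints
```

Because all coordinates share one smooth curve, the process parameter varies in lockstep with the motion, which is the convenience the virtual-joint formulation buys.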
NASA Astrophysics Data System (ADS)
Ugolnikov, Oleg S.; Maslov, Igor A.
2018-03-01
Polarization measurements of the twilight background with Wide-Angle Polarization Camera (WAPC) are used to detect the depolarization effect caused by stratospheric aerosol near the altitude of 20 km. Based on a number of observations in central Russia in spring and summer 2016, we found the parameters of lognormal size distribution of aerosol particles. This confirmed the previously published results of the colorimetric method as applied to the same twilights. The mean particle radius (about 0.1 micrometers) and size distribution are also in agreement with the recent data of in situ and space-based remote sensing of stratospheric aerosol. Methods considered here provide two independent techniques of the stratospheric aerosol study based on the twilight sky analysis.
Study of pseudo noise CW diode laser for ranging applications
NASA Technical Reports Server (NTRS)
Lee, Hyo S.; Ramaswami, Ravi
1992-01-01
A new Pseudo Random Noise (PN) modulated CW diode laser radar system is being developed for real-time ranging of targets at both close and large distances (greater than 10 km), to satisfy a wide range of applications, from robotics to future use in space. Results from computer modeling and statistical analysis, along with some preliminary data obtained from a prototype system, are presented. The received signal is averaged for a short time to recover the target response function. It is found that even with uncooperative targets, based on the design parameters used (200-mW laser and 20-cm receiver), accurate ranging is possible up to about 15 km, beyond which the signal to noise ratio (SNR) becomes too small for real-time analog detection.
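The ranging principle can be sketched in a few lines (a hedged toy, not the instrument's processing chain: the code length, delay, and SNR are arbitrary): the received signal is a delayed, noisy copy of a pseudo-random code, and cross-correlating with the transmitted code locates the round-trip delay.

```python
import numpy as np

# Hedged sketch: PN ranging via circular cross-correlation. The correlation
# peak position recovers the delay in code chips even at 0 dB per-chip SNR.
rng = np.random.default_rng(3)
code = rng.choice([-1.0, 1.0], size=1023)      # transmitted PN sequence
true_delay = 137                               # delay in chips (arbitrary)
received = np.roll(code, true_delay) + rng.normal(0, 1.0, code.size)

# circular cross-correlation via FFT
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
print(int(np.argmax(corr)))                    # recovered delay
```

The processing gain of the correlation (here a factor of 1023) is what lets a low-power CW diode laser range distant, uncooperative targets.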
Satisfying the Einstein-Podolsky-Rosen criterion with massive particles
NASA Astrophysics Data System (ADS)
Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.
2016-03-01
In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, as shown successfully with light fields. Here, we report on the production of massive particles which meet the EPR criterion for continuous phase/amplitude variables. The created quantum state of ultracold atoms shows an EPR parameter of 0.18(3), which is 2.4 standard deviations below the threshold of 1/4. Our state presents a resource for tests of quantum nonlocality with massive particles and a wide variety of applications in the field of continuous-variable quantum information and metrology.
A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays
Hughes, Alec; Hynynen, Kullervo
2016-01-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323
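The Tikhonov step itself can be sketched in isolation (a hedged toy, not the paper's focus-rotation formulation: the forward matrix here is random rather than an acoustic propagation operator): element drives are obtained from the regularized normal equations, and the regularization parameter trades focal quality against drive amplitude, i.e., array efficiency.

```python
import numpy as np

# Hedged sketch: Tikhonov-regularized solution x = (A^H A + alpha*I)^-1 A^H b,
# where A maps complex element drives to sampled field values and b is the
# desired field pattern. Larger alpha gives smaller drives but worse focusing.
rng = np.random.default_rng(4)
m, n = 60, 40                                  # field samples, array elements
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)

def tikhonov(A, b, alpha):
    AH = A.conj().T
    return np.linalg.solve(AH @ A + alpha * np.eye(A.shape[1]), AH @ b)

for alpha in (1e-6, 1.0, 100.0):
    x = tikhonov(A, b, alpha)
    res = np.linalg.norm(A @ x - b)
    print(f"alpha={alpha:g}: residual={res:.3f}, drive norm={np.linalg.norm(x):.3f}")
```

Sweeping alpha makes the trade-off explicit: the residual (focus error) grows monotonically with alpha while the drive norm shrinks, which is the balance the paper discusses.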
On-chip broadband spectral filtering using planar double high-contrast grating reflectors
NASA Astrophysics Data System (ADS)
Horie, Yu; Arbabi, Amir; Faraon, Andrei
2015-02-01
We propose a broadband free-space on-chip spectrometer based on an array of integrated narrowband filters consisting of Fabry-Perot resonators formed by two high-contrast grating (HCG) reflectors separated by a low-index thin layer of fixed cavity thickness. Using numerical simulations, broadband tunability of the resonance wavelengths was achieved solely by changing in-plane grating parameters, such as the period or duty cycle of the HCGs, while the substrate geometry was kept fixed. Experimentally, the HCG reflectors were fabricated on silicon-on-insulator (SOI) substrates, high reflectivity was measured, and a fabrication process for the proposed double-HCG narrowband filter array was developed. The filtering function, which can be spanned over a wide range of wavelengths, was measured.
Selective encapsulation by Janus particles
NASA Astrophysics Data System (ADS)
Li, Wei; Ruth, Donovan; Gunton, James D.; Rickman, Jeffrey M.
2015-06-01
We employ Monte Carlo simulation to examine encapsulation in a system comprising Janus oblate spheroids and isotropic spheres. More specifically, the impact of variations in temperature, particle size, inter-particle interaction range, and strength is examined for a system in which the spheroids act as the encapsulating agents and the spheres as the encapsulated guests. In this picture, particle interactions are described by a quasi-square-well patch model. This study highlights the environmental adaptation and selectivity of the encapsulation system to changes in temperature and guest particle size, respectively. Moreover, we identify an important range in parameter space where encapsulation is favored, as summarized by an encapsulation map. Finally, we discuss the generalization of our results to systems having a wide range of particle geometries.
Capillary rise in a textured channel
NASA Astrophysics Data System (ADS)
Beilharz, Daniel; Clanet, Christophe; Quere, David
2016-11-01
A wetting liquid can invade a textured material, for example a forest of micropillars. The driving and viscous forces of this motion are determined by the texture parameters, and the influence of the shape, height, and spacing of posts has been widely studied over the last decade. In this work, we build a channel with textured walls. When it is brought into contact with a reservoir of wetting liquid, we observe in some cases two advancing fronts: a first one ahead invading the forest of micropillars, and a second one behind filling the remaining gap. We study and model the conditions of existence and the dynamics of these two fronts as a function of the characteristics of both the microstructure and the gap of this elementary porous medium.
A Tikhonov Regularization Scheme for Focus Rotations With Focused Ultrasound-Phased Arrays.
Hughes, Alec; Hynynen, Kullervo
2016-12-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound-phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations.
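The balance the abstract describes between focusing quality and array efficiency can be illustrated with a minimal, hypothetical sketch (the operator `H`, its dimensions, and the regularization weights are invented for illustration and are not the authors' array model): increasing the Tikhonov parameter damps the element drive amplitudes at the cost of fit to the desired field.

```python
import numpy as np

def tikhonov_solve(H, b, lam):
    """Solve min ||H x - b||^2 + lam^2 ||x||^2 via the regularized normal equations."""
    n = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + lam**2 * np.eye(n), H.conj().T @ b)

rng = np.random.default_rng(0)
# Toy complex forward operator mapping 32 element drives to 8 field control points:
# underdetermined, hence ill-posed without regularization.
H = rng.normal(size=(8, 32)) + 1j * rng.normal(size=(8, 32))
b = rng.normal(size=8) + 1j * rng.normal(size=8)   # desired field values

x_small = tikhonov_solve(H, b, lam=0.01)   # light damping: good fit, larger drives
x_large = tikhonov_solve(H, b, lam=10.0)   # heavy damping: small drives, poorer fit
print(np.linalg.norm(H @ x_small - b) < np.linalg.norm(H @ x_large - b))  # True
print(np.linalg.norm(x_small) > np.linalg.norm(x_large))                  # True
```

The regularization parameter thus trades residual error against drive power, which is the qualitative role the abstract assigns to it.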
NASA Astrophysics Data System (ADS)
Tselyaev, V.; Lyutorovich, N.; Speth, J.; Krewald, S.; Reinhard, P.-G.
2016-09-01
We present results of the time blocking approximation (TBA) for giant resonances in light-, medium-, and heavy-mass nuclei. The TBA is an extension of the widely used random-phase approximation (RPA) adding complex configurations by coupling to phonon excitations. A new method for handling the single-particle continuum is developed and applied in the present calculations. We investigate in detail the dependence of the numerical results on the size of the single-particle space and the number of phonons as well as on nuclear matter properties. Our approach is self-consistent, based on an energy-density functional of Skyrme type where we used seven different parameter sets. The numerical results are compared with experimental data.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1987-01-01
The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.
Hard X-Ray Constraints on Small-Scale Coronal Heating Events
NASA Astrophysics Data System (ADS)
Marsh, Andrew; Smith, David M.; Glesener, Lindsay; Klimchuk, James A.; Bradshaw, Stephen; Hannah, Iain; Vievering, Juliana; Ishikawa, Shin-Nosuke; Krucker, Sam; Christe, Steven
2017-08-01
A large body of evidence suggests that the solar corona is heated impulsively. Small-scale heating events known as nanoflares may be ubiquitous in quiet and active regions of the Sun. Hard X-ray (HXR) observations with unprecedented sensitivity at energies >3 keV have recently been enabled through the use of focusing optics. We analyze active region spectra from the FOXSI-2 sounding rocket and the NuSTAR satellite to constrain the physical properties of nanoflares simulated with the EBTEL field-line-averaged hydrodynamics code. We model a wide range of X-ray spectra by varying the nanoflare heating amplitude, duration, delay time, and filling factor. Additional constraints on the nanoflare parameter space are determined from energy constraints and EUV/SXR data.
Chen, Xiao; Yan, Bin-bin; Song, Fei-jun; Wang, Yi-quan; Xiao, Feng; Alameh, Kamal
2012-10-20
A digital micromirror device (DMD) is a widely used type of spatial light modulator. We apply a DMD as the wavelength selector in tunable fiber lasers. Based on two-dimensional diffraction theory, the diffraction of the DMD and its effect on fiber laser parameters are analyzed in detail. The theoretical results show that the diffraction efficiency is strongly dependent upon the angle of the incident light and the pixel spacing of the DMD. Compared with other models of DMDs, the 0.55 in. DMD grating operates in an approximately blazed state in our configuration, which concentrates most of the diffracted radiation into one order. It is therefore a better choice to improve the stability and reliability of tunable fiber laser systems.
Haptic control with environment force estimation for telesurgery.
Bhattacharjee, Tapomayukh; Son, Hyoung Il; Lee, Doo Yong
2008-01-01
Success of telesurgical operations depends on good position tracking by the slave device. Improved position tracking of the slave device can lead to safer and less strenuous telesurgical operations. The two-channel force-position control architecture is widely used for better position tracking. This architecture requires force sensors for direct force feedback. Force sensors may not be a good choice in the telesurgical environment because of their inherent noise and the limitations on deployable placement and space. Hence, environment force estimation is developed using the concept of the robot function parameter matrix and a recursive least squares method. Simulation results show the efficacy of the proposed method. The slave device successfully tracks the position of the master device, and the estimation error quickly becomes negligible.
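A hedged sketch of the recursive least-squares idea mentioned above (the linear stiffness-plus-offset environment model and all numbers are illustrative assumptions, not the authors' robot function parameter matrix): the estimator refines its parameter vector from each noisy force reading until the prediction error becomes negligible.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least-squares update with forgetting factor lam."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    theta = theta + k * (y - phi @ theta)  # correct estimate with prediction error
    P = (P - np.outer(k, Pphi)) / lam      # update inverse-correlation matrix
    return theta, P

rng = np.random.default_rng(1)
k_true, b_true = 50.0, 2.0    # hypothetical tissue stiffness (N/m) and force offset (N)
theta = np.zeros(2)           # unknown [stiffness, offset]
P = 1e3 * np.eye(2)           # large initial covariance: uninformative prior
for _ in range(500):
    x = rng.uniform(0.0, 0.01)                      # probe displacement (m)
    phi = np.array([x, 1.0])                        # regressor for f = k*x + b
    y = k_true * x + b_true + 1e-3 * rng.normal()   # noisy force measurement
    theta, P = rls_step(theta, P, phi, y)
print(theta)   # ≈ [50.0, 2.0]
```

With the estimated environment force substituted for a sensor reading, the two-channel architecture can in principle run sensorless, which is the motivation the abstract gives.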
Quantitative imaging of mammalian transcriptional dynamics: from single cells to whole embryos.
Zhao, Ziqing W; White, Melanie D; Bissiere, Stephanie; Levi, Valeria; Plachta, Nicolas
2016-12-23
Probing dynamic processes occurring within the cell nucleus at the quantitative level has long been a challenge in mammalian biology. Advances in bio-imaging techniques over the past decade have enabled us to directly visualize nuclear processes in situ with unprecedented spatial and temporal resolution and single-molecule sensitivity. Here, using transcription as our primary focus, we survey recent imaging studies that specifically emphasize the quantitative understanding of nuclear dynamics in both time and space. These analyses not only inform on previously hidden physical parameters and mechanistic details, but also reveal a hierarchical organizational landscape for coordinating a wide range of transcriptional processes shared by mammalian systems of varying complexity, from single cells to whole embryos.
Braggio, Simone; Montanari, Dino; Rossi, Tino; Ratti, Emiliangelo
2010-07-01
As a result of their wide acceptance and conceptual simplicity, drug-like concepts are having a major influence on the drug discovery process, particularly in the selection of the 'optimal' absorption, distribution, metabolism, excretion and toxicity and physicochemical parameter space. While they have undisputable value when assessing the potential of lead series or evaluating the inherent risk of a portfolio of drug candidates, they are much less useful when weighing up compounds to select the best potential clinical candidate. We introduce the concept of drug efficiency as a new tool both to guide drug discovery program teams during the lead optimization phase and to better assess the developability potential of a drug candidate.
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting: both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
Andreozzi, Stefano; Miskovic, Ljubisa; Hatzimanikatis, Vassily
2016-01-01
Accurate determination of the physiological states of cellular metabolism requires detailed information about metabolic fluxes, metabolite concentrations and the distribution of enzyme states. Integration of fluxomics and metabolomics data, together with thermodynamics-based metabolic flux analysis, contributes to an improved understanding of the steady-state properties of metabolism. However, knowledge about kinetics and enzyme activities, though essential for a quantitative understanding of metabolic dynamics, remains scarce and involves uncertainty. Here, we present a computational methodology that allows us to determine and quantify the kinetic parameters that correspond to a certain physiology, as described by a given metabolic flux profile and a given metabolite concentration vector. Though we initially determine kinetic parameters that involve a high degree of uncertainty, through the use of kinetic modeling and machine learning principles we are able to obtain more accurate ranges of kinetic parameters, and hence to reduce the uncertainty in the model analysis. We computed the distribution of kinetic parameters for glucose-fed E. coli producing 1,4-butanediol and discovered that the observed physiological state corresponds to a narrow range of kinetic parameters of only a few enzymes, whereas the kinetic parameters of other enzymes can vary widely. Furthermore, this analysis suggests which enzymes should be manipulated in order to engineer the reference state of the cell in a desired way. The proposed approach also lays the foundations for a novel class of approaches for efficient, non-asymptotic, uniform sampling of solution spaces.
Numerical simulations of high-energy flows in accreting magnetic white dwarfs
NASA Astrophysics Data System (ADS)
Van Box Som, Lucile; Falize, É.; Bonnet-Bidaud, J.-M.; Mouchet, M.; Busschaert, C.; Ciardi, A.
2018-01-01
Some polars show quasi-periodic oscillations (QPOs) in their optical light curves that have been interpreted as the result of shock oscillations driven by the cooling instability. Although numerical simulations can recover this physics, they wrongly predict QPOs in the X-ray luminosity and have also failed to reproduce the observed frequencies, at least for the limited range of parameters explored so far. Given the uncertainties on the observed polar parameters, it is still unclear whether simulations can reproduce the observations. The aim of this work is to study QPOs across the full range of polars in which they are observed. We perform numerical simulations including gravity, cyclotron and bremsstrahlung radiative losses, for a wide range of polar parameters, and compare our results with the astronomical data using synthetic X-ray and optical luminosities. We show that shock oscillations are the result of complex shock dynamics triggered by the interplay of two radiative instabilities. The secondary shock forms at the acoustic horizon in the post-shock region in agreement with our estimates from steady-state solutions. We also demonstrate that the secondary shock is essential to sustain the accretion shock oscillations at the average height predicted by our steady-state accretion model. Finally, in spite of the large explored parameter space, matching the observed QPO parameters requires a combination of parameters inconsistent with the observed ones. This difficulty highlights the limits of one-dimensional simulations, suggesting that multi-dimensional effects are needed to understand the non-linear dynamics of accretion columns in polars and the origins of QPOs.
Space Launch System Upper Stage Technology Assessment
NASA Technical Reports Server (NTRS)
Holladay, Jon; Hampton, Bryan; Monk, Timothy
2014-01-01
The Space Launch System (SLS) is envisioned as a heavy-lift vehicle that will provide the foundation for future beyond low-Earth orbit (LEO) exploration missions. Previous studies have been performed to determine the optimal configuration for the SLS and the applicability of commercial off-the-shelf in-space stages for Earth departure. Currently NASA is analyzing the concept of a Dual Use Upper Stage (DUUS) that will provide LEO insertion and Earth departure burns. This paper will explore candidate in-space stages based on the DUUS design for a wide range of beyond LEO missions. Mission payloads will range from small robotic systems up to human systems with deep space habitats and landers. Mission destinations will include cislunar space, Mars, Jupiter, and Saturn. Given these wide-ranging mission objectives, a vehicle-sizing tool has been developed to determine the size of an Earth departure stage based on the mission objectives. The tool calculates masses for all the major subsystems of the vehicle including propellant loads, avionics, power, engines, main propulsion system components, tanks, pressurization system and gases, primary structural elements, and secondary structural elements. The tool uses an iterative sizing algorithm to determine the resulting mass of the stage. Any input into one of the subsystem sizing routines or the mission parameters can be treated as a parametric sweep or as a distribution for use in Monte Carlo analysis. Taking these factors together allows for multi-variable, coupled analysis runs. To increase confidence in the tool, the results have been verified against two point-of-departure designs of the DUUS. The tool has also been verified against Apollo moon mission elements and other manned space systems. This paper will focus on trading key propulsion technologies including chemical, Nuclear Thermal Propulsion (NTP), and Solar Electric Propulsion (SEP). 
All of the key performance inputs and relationships will be presented and discussed in light of the various missions. For each mission there are several trajectory options and each will be discussed in terms of delta-v required and transit duration. Each propulsion system will be modeled, sized, and judged based on their applicability to the whole range of beyond LEO missions. Criteria for scoring will include the resulting dry mass of the stage, resulting propellant required, time to destination, and an assessment of key enabling technologies. In addition to the larger metrics, this paper will present the results of several coupled sensitivity studies. The ultimate goals of these tools and studies are to provide NASA with the most mass-, technology-, and cost-effective in-space stage for its future exploration missions.
Exploring Replica-Exchange Wang-Landau sampling in higher-dimensional parameter space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentim, Alexandra; Rocha, Julio C. S.; Tsai, Shan-Ho
We considered a higher-dimensional extension of the replica-exchange Wang-Landau algorithm to perform a random walk in the energy and magnetization space of the two-dimensional Ising model. This hybrid scheme combines the advantages of the Wang-Landau and replica-exchange algorithms, and the one-dimensional version of this approach has been shown to be very efficient and to scale well, up to several thousands of computing cores. This approach allows us to split the parameter space of the system to be simulated into several pieces and still perform a random walk over the entire parameter range, ensuring the ergodicity of the simulation. Previous work, in which a similar scheme of parallel simulation was implemented without using replica exchange and with a different way of combining the results from the pieces, led to discontinuities in the final density of states over the entire range of parameters. From our simulations, it appears that the replica-exchange Wang-Landau algorithm is able to overcome this difficulty, allowing exploration of a higher-dimensional parameter phase space by keeping track of the joint density of states.
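The core Wang-Landau iteration can be sketched on a toy system with an exactly known density of states (a non-interacting N-spin model where the "energy" is the number of up spins, so g(E) = C(N, E); this toy model and all tuning constants are illustrative, not the paper's 2D Ising setup):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
N = 10
spins = np.zeros(N, dtype=int)
log_g = np.zeros(N + 1)    # running estimate of ln g(E), E = number of up spins
hist = np.zeros(N + 1)     # visit histogram for the flatness check
f = 1.0                    # modification increment for ln g
E = 0
while f > 1e-3:
    for _ in range(10000):
        i = rng.integers(N)
        E_new = E + (1 - 2 * spins[i])            # a flip changes E by +/- 1
        # Accept with probability min(1, g(E) / g(E_new)) to flatten the walk in E
        if np.log(rng.random()) < log_g[E] - log_g[E_new]:
            spins[i] ^= 1
            E = E_new
        log_g[E] += f
        hist[E] += 1
    if hist.min() > 0.8 * hist.mean():            # crude flatness criterion
        f /= 2.0
        hist[:] = 0

log_g -= log_g[0]                                 # normalize so g(0) = 1
exact = np.array([np.log(comb(N, k)) for k in range(N + 1)])
print(np.max(np.abs(log_g - exact)))              # small residual error
```

The replica-exchange extension the abstract describes splits the (here one-dimensional) energy range into overlapping windows, runs one such walker per window, and swaps configurations between windows, which is what removes the boundary discontinuities of the earlier non-exchanging scheme.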
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
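The half-normal detection model behind the spatial scale parameter sigma can be sketched as follows (the grid size, baseline detection probability p0, and number of occasions are invented for illustration): a trap grid spaced at twice sigma detects a larger fraction of simulated individuals than a sparser grid, consistent with the spacing guidance above.

```python
import numpy as np

def p_detect(d, p0=0.2, sigma=1.0):
    """Half-normal detection probability used in spatial capture-recapture."""
    return p0 * np.exp(-d**2 / (2 * sigma**2))

rng = np.random.default_rng(3)
sigma = 1.0
centers = rng.uniform(0, 20, size=(500, 2))   # hypothetical activity centers

def frac_detected(spacing, occasions=5):
    """Fraction of individuals caught at least once by a regular trap grid."""
    xs = np.arange(0, 20 + 1e-9, spacing)
    traps = np.array([(x, y) for x in xs for y in xs])
    d = np.linalg.norm(centers[:, None, :] - traps[None, :, :], axis=2)
    p_miss = np.prod((1 - p_detect(d, sigma=sigma)) ** occasions, axis=1)
    return np.mean(1 - p_miss)

print(frac_detected(2 * sigma) > frac_detected(4 * sigma))   # True
```

The simulation study in the abstract evaluates exactly this kind of trade-off, but for estimator bias and precision rather than raw detection fractions.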
NASA Astrophysics Data System (ADS)
Susyanto, Nanang
2017-12-01
We propose a simple derivation of the Cramér-Rao lower bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint on the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter onto the appropriate inner product spaces.
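For reference, the constrained bound can be stated in its standard form (notation assumed here, not taken from the paper: Fisher information F(θ), constraint g(θ) = 0 with Jacobian G, and U an orthonormal basis of the null space of G):

```latex
G(\theta) = \frac{\partial g(\theta)}{\partial \theta^{\mathsf T}}, \qquad
G(\theta)\, U = 0, \qquad U^{\mathsf T} U = I,
\]
\[
\operatorname{Cov}\!\left(\hat\theta\right) \;\succeq\; U \left( U^{\mathsf T} F(\theta)\, U \right)^{-1} U^{\mathsf T}.
```

The columns of U span the tangent space of the constrained submodel, which is exactly the object the abstract computes via the implicit function theorem.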
Identification of damage in composite structures using Gaussian mixture model-processed Lamb waves
NASA Astrophysics Data System (ADS)
Wang, Qiang; Ma, Shuxian; Yue, Dong
2018-04-01
Composite materials have comprehensively better properties than traditional materials, notably a higher strength-to-weight ratio, and have therefore come into increasingly wide use. However, the damage of composite structures is usually varied and complicated. In order to ensure the security of these structures, it is necessary to monitor and distinguish structural damage in a timely manner. Lamb wave-based structural health monitoring (SHM) has been proved effective in online structural damage detection and evaluation; furthermore, the characteristic parameters of the multi-mode Lamb wave vary in response to different types of damage in the composite material. This paper studies a damage identification approach for composite structures using Lamb waves and a Gaussian mixture model (GMM). The algorithm and principle of the GMM, and its parameter estimation, are introduced. Multiple statistical characteristic parameters of the excited Lamb waves are extracted, and their dimensionality is reduced by principal component analysis (PCA). The damage identification system using the GMM is then established through training. Experiments on a glass fiber-reinforced epoxy composite laminate plate are conducted to verify the feasibility of the proposed approach in terms of damage classification. The experimental results show that different types of damage can be identified according to the value of the likelihood function of the GMM.
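The PCA-then-likelihood pipeline can be illustrated with a hedged sketch (synthetic Gaussian features and a single Gaussian per damage class stand in for the real Lamb-wave features and the full multi-component GMM of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical Lamb-wave features (amplitude, time of flight, energy, ...) per class
n, dim = 200, 6
X0 = rng.normal(loc=0.0, scale=1.0, size=(n, dim))   # damage class 0
X1 = rng.normal(loc=1.5, scale=1.2, size=(n, dim))   # damage class 1
X = np.vstack([X0, X1])

# PCA via SVD: project onto the 2 leading principal components
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:2].T

def fit_gauss(Z):
    """Fit a single Gaussian (one-component GMM) to the training samples."""
    m = Z.mean(axis=0)
    C = np.cov(Z.T) + 1e-6 * np.eye(Z.shape[1])   # small jitter for stability
    return m, C

def log_lik(z, m, C):
    r = z - m
    return -0.5 * (r @ np.linalg.solve(C, r) + np.log(np.linalg.det(C)))

params = [fit_gauss(Z[:n]), fit_gauss(Z[n:])]
pred = np.array([np.argmax([log_lik(z, *p) for p in params]) for z in Z])
true = np.r_[np.zeros(n, int), np.ones(n, int)]
print((pred == true).mean())   # high accuracy on this well-separated synthetic data
```

Classification by the maximum of the per-class likelihood functions mirrors the decision rule described in the last sentence of the abstract.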
Constraining neutron guide optimizations with phase-space considerations
NASA Astrophysics Data System (ADS)
Bertelsen, Mads; Lefmann, Kim
2016-09-01
We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to that of guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing ability of the guide to be optimized, going from perfectly focusing to no correlation between position and velocity. The second parameter controls the neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate a higher signal-to-noise ratio than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, showing that the simulated optimal neutron intake is close to the analytically predicted value when assuming that the guide is dominated by multiple scattering events.
Utilization of a Curved Focal Surface Array in a 3.5 m Wide Field of View Telescope
2013-09-01
Wide field of view optical telescopes have a range of uses for both astronomical and space-surveillance purposes. In designing these ... Agency (DARPA) 3.5-m Space Surveillance Telescope (SST)), the choice was made to curve the array to best satisfy the stressing telescope performance ... dramatically improves the nation's space surveillance capabilities. This paper will discuss the implementation of the curved focal-surface array, the
Occlusal traits of the deciduous dentition of preschool Indian children
Bahadure, Rakesh N.; Thosar, Nilima; Gaikwad, Rahul
2012-01-01
Objectives: To assess the occlusal relationship, canine relationship, crowding, primate spaces, and anterior spacing in both the maxillary and mandibular arches of the primary dentition of Indian children of Wardha District, and to study age-wise differences in occlusal characteristics. Materials and Methods: A total of 1053 children (609 males and 444 females) aged 3-5 years with complete primary dentition were examined for occlusal relationship, canine relationship, crowding, primate spaces, and anterior spacing in both maxillary and mandibular arches. Results: The data after evaluation showed significant values for all parameters except mandibular anterior spacing, which was 47.6%. Mild crowding was prevalent in the 5-year age group and moderate crowding was common in the 3-year age group. Conclusion: Evaluated parameters such as the terminal molar relationship and canine relationship predominantly tended toward normal, but contact and crowding status contributed almost as much as physiologic anterior spacing. The 5-year age group showed higher values for all parameters. PMID:23633806
A Real-Time Apple Grading System Using Multicolor Space
2014-01-01
This study was focused on the multicolor space, which provides a better specification of the color and size of the apple in an image. In the study, a real-time machine vision system classifying apples into four categories with respect to color and size was designed. In the analysis, different color spaces were used. As a result, 97% identification success for the red fields of the apple was obtained depending on the values of the parameter "a" of the CIE L*a*b* color space. Similarly, 94% identification success for the yellow fields was obtained depending on the values of the parameter y of the CIE XYZ color space. With the designed system, three kinds of apples (Golden, Starking, and Jonagold) were investigated by classifying them into four groups with respect to two parameters, color and size. Finally, a 99% success rate was achieved in the analyses conducted for 595 apples. PMID:24574880
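The a*-based red detection can be sketched per pixel with the standard sRGB to XYZ to CIE L*a*b* conversion (the threshold value 20 below is an illustrative choice, not the paper's calibrated one):

```python
import numpy as np

# Standard sRGB -> XYZ matrix and D65 white point
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.95047, 1.0, 1.08883])

def rgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIE L*a*b*."""
    c = np.asarray(rgb, dtype=float)
    # Undo sRGB gamma, then apply the linear transform to XYZ
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin / WHITE
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

red, green = rgb_to_lab([0.8, 0.1, 0.1]), rgb_to_lab([0.2, 0.7, 0.2])
print(red[1] > 20 > green[1])   # True: a* is strongly positive for red, negative for green
```

Thresholding the a* channel in this way is what lets a vision system separate red apple regions from the background, the role the parameter "a" plays in the abstract.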
NASA Astrophysics Data System (ADS)
Lim, Teik-Cheng; Dawson, James Alexander
2018-05-01
This study explores the close-range, short-range and long-range relationships between the parameters of the Morse and Buckingham potential energy functions. The results show that the close-range and short-range relationships are valid for bond compression and for very small changes in bond length, respectively, while the long-range relationship is valid for bond stretching. A wide-range relationship is proposed to combine the comparative advantages of the close-range, short-range and long-range parameter relationships. The wide-range relationship is useful for replacing the close-range, short-range and long-range parameter relationships, thereby preventing the undesired effects of potential energy jumps resulting from functional switching between the close-range, short-range and long-range interaction energies.
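For reference, the standard forms of the two potentials being related, in the usual notation (D_e, α, r_0 for Morse; A, ρ, C for Buckingham); the matching conditions shown are the generic route to short-range parameter relationships, not necessarily the authors' exact derivation:

```latex
U_{\mathrm{Morse}}(r) = D_e \left[ 1 - e^{-\alpha (r - r_0)} \right]^2 - D_e, \qquad
U_{\mathrm{Buck}}(r) = A\, e^{-r/\rho} - \frac{C}{r^6}.
\]
Matching the well depth and curvature at the equilibrium separation $r_0$ gives
\[
U_{\mathrm{Buck}}(r_0) = -D_e, \qquad
U_{\mathrm{Buck}}''(r_0) = U_{\mathrm{Morse}}''(r_0) = 2 D_e \alpha^2 .
```

Because the exponential repulsion and the $r^{-6}$ attraction dominate in different regimes, no single matching point works for both compression and stretching, which is why the abstract proposes a wide-range relationship combining the separate regimes.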