Parameter space of experimental chaotic circuits with high-precision control parameters.
de Sousa, Francisco F G; Rubinger, Rero M; Sartorelli, José C; Albuquerque, Holokx A; Baptista, Murilo S
2016-08-01
We report high-resolution measurements that experimentally confirm a spiral cascade structure and a scaling relationship of shrimps in the Chua's circuit. Circuits constructed using this component allow for a comprehensive characterization of the circuit behaviors through high resolution parameter spaces. To illustrate the power of our technological development for the creation and the study of chaotic circuits, we constructed a Chua circuit and study its high resolution parameter space. The reliability and stability of the designed component allowed us to obtain data for long periods of time (∼21 weeks), a data set from which an accurate estimation of Lyapunov exponents for the circuit characterization was possible. Moreover, this data, rigorously characterized by the Lyapunov exponents, allows us to reassure experimentally that the shrimps, stable islands embedded in a domain of chaos in the parameter spaces, can be observed in the laboratory. Finally, we confirm that their sizes decay exponentially with the period of the attractor, a result expected to be found in maps of the quadratic family.
Exploring theory space with Monte Carlo reweighting
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists andmore » experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.« less
NASA Technical Reports Server (NTRS)
1973-01-01
The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
Exploring Replica-Exchange Wang-Landau sampling in higher-dimensional parameter space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentim, Alexandra; Rocha, Julio C. S.; Tsai, Shan-Ho
We considered a higher-dimensional extension for the replica-exchange Wang-Landau algorithm to perform a random walk in the energy and magnetization space of the two-dimensional Ising model. This hybrid scheme combines the advantages of Wang-Landau and Replica-Exchange algorithms, and the one-dimensional version of this approach has been shown to be very efficient and to scale well, up to several thousands of computing cores. This approach allows us to split the parameter space of the system to be simulated into several pieces and still perform a random walk over the entire parameter range, ensuring the ergodicity of the simulation. Previous work, inmore » which a similar scheme of parallel simulation was implemented without using replica exchange and with a different way to combine the result from the pieces, led to discontinuities in the final density of states over the entire range of parameters. From our simulations, it appears that the replica-exchange Wang-Landau algorithm is able to overcome this diculty, allowing exploration of higher parameter phase space by keeping track of the joint density of states.« less
DSGRN: Examining the Dynamics of Families of Logical Models.
Cummins, Bree; Gedeon, Tomas; Harker, Shaun; Mischaikow, Konstantin
2018-01-01
We present a computational tool DSGRN for exploring the dynamics of a network by computing summaries of the dynamics of switching models compatible with the network across all parameters. The network can arise directly from a biological problem, or indirectly as the interaction graph of a Boolean model. This tool computes a finite decomposition of parameter space such that for each region, the state transition graph that describes the coarse dynamical behavior of a network is the same. Each of these parameter regions corresponds to a different logical description of the network dynamics. The comparison of dynamics across parameters with experimental data allows the rejection of parameter regimes or entire networks as viable models for representing the underlying regulatory mechanisms. This in turn allows a search through the space of perturbations of a given network for networks that robustly fit the data. These are the first steps toward discovering a network that optimally matches the observed dynamics by searching through the space of networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousa, Francisco F. G. de; Rubinger, Rero M.; Sartorelli, José C., E-mail: sartorelli@if.usp.br
We report high-resolution measurements that experimentally confirm a spiral cascade structure and a scaling relationship of shrimps in the Chua's circuit. Circuits constructed using this component allow for a comprehensive characterization of the circuit behaviors through high resolution parameter spaces. To illustrate the power of our technological development for the creation and the study of chaotic circuits, we constructed a Chua circuit and study its high resolution parameter space. The reliability and stability of the designed component allowed us to obtain data for long periods of time (∼21 weeks), a data set from which an accurate estimation of Lyapunov exponentsmore » for the circuit characterization was possible. Moreover, this data, rigorously characterized by the Lyapunov exponents, allows us to reassure experimentally that the shrimps, stable islands embedded in a domain of chaos in the parameter spaces, can be observed in the laboratory. Finally, we confirm that their sizes decay exponentially with the period of the attractor, a result expected to be found in maps of the quadratic family.« less
Effect of space allowance and flooring on the behavior of pregnant ewes.
Vik, S G; Øyrehagen, O; Bøe, K E
2017-05-01
Space allowance recommendations for pregnant ewes vary considerably. The aim of this experiment was to investigate the effect of space allowance and floor type on activity, lying position, displacements, and aggressive interactions in pregnant ewes. A 3 × 2 factorial experiment was conducted with space allowance (0.75, 1.50, and 2.25 m/ewe) and type of flooring (straw bedding and expanded metal flooring) as the main factors. A total of 48 pregnant ewes were randomly assigned to 6 groups with 8 ewes in each group. All groups were exposed to each treatment for 7 d. The ewes were video recorded for 24 h at the end of each treatment period and general activity, lying position in the pen, and social lying position were scored every 15 min. Displacements and aggressive interactions were scored continuously from 1030 to 1430 h. Mean lying time ( < 0.0001) and time spent lying simultaneously ( < 0.0001) increased whereas time spent eating ( < 0.001) and standing ( < 0.001) decreased when space allowance increased from 0.75 to 1.50 m/ewe. Further increasing the space allowance to 2.25 m/ewe, however, had no effect on these parameters. Sitting was observed only in the 0.75 m/ewe treatment. Type of flooring had no significant effect on general activity. Ewes in the straw bedding treatment spent more time lying in the middle of the pen than ewes on expanded metal ( < 0.0001), but space allowance had no significant effect on this parameter. The proportion of time spent lying against side walls increased ( < 0.0001) whereas the proportion of time spent lying against the back wall decreased ( < 0.0001) when the space allowance was increased. In general, the distance between the ewes when lying significantly increased when space allowance increased from 0.75 to 1.50 m/ewe. Total number of displacements when lying ( < 0.0001) and aggressive interactions when active ( < 0.001) decreased when space allowance increased from 0.75 to 1.50 m/ewe and further slightly decreased, although the decrease was significant only for displacements when lying, when space allowance increased to 2.25 m/ewe. Low-ranked ewes were not exposed to more aggressive behavior than high-ranked ewes. In conclusion, increasing space allowance from 0.75 to 1.50 m/ewe had positive effects on activity and behavior in pregnant ewes, but further increasing space allowance to 2.25 m/ewe had limited effects, as did type of flooring. Hence, recommended space allowance for pregnant ewes should not be lower than 1.50 m/ewe.
pypet: A Python Toolkit for Data Management of Parameter Explorations
Meyer, Robert; Obermayer, Klaus
2016-01-01
pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080
pypet: A Python Toolkit for Data Management of Parameter Explorations.
Meyer, Robert; Obermayer, Klaus
2016-01-01
pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.
Superheavy dark matter through Higgs portal operators
NASA Astrophysics Data System (ADS)
Kolb, Edward W.; Long, Andrew J.
2017-11-01
The WIMPzilla hypothesis is that the dark matter is a super-weakly-interacting and superheavy particle. Conventionally, the WIMPzilla abundance is set by gravitational particle production during or at the end of inflation. In this study we allow the WIMPzilla to interact directly with Standard Model fields through the Higgs portal, and we calculate the thermal production (freeze-in) of WIMPzilla dark matter from the annihilation of Higgs boson pairs in the plasma. The two particle-physics model parameters are the WIMPzilla mass and the Higgs-WIMPzilla coupling. The two cosmological parameters are the reheating temperature and the expansion rate of the universe at the end of inflation. We delineate the regions of parameter space where either gravitational or thermal production is dominant, and within those regions we identify the parameters that predict the observed dark matter relic abundance. Allowing for thermal production opens up the parameter space, even for Planck-suppressed Higgs-WIMPzilla interactions.
NASA Astrophysics Data System (ADS)
Jamróz, Dariusz; Niedoba, Tomasz; Surowiak, Agnieszka; Tumidajski, Tadeusz; Szostek, Roman; Gajer, Mirosław
2017-09-01
The application of methods drawing upon multi-parameter visualization of data by transformation of multidimensional space into two-dimensional one allow to show multi-parameter data on computer screen. Thanks to that, it is possible to conduct a qualitative analysis of this data in the most natural way for human being, i.e. by the sense of sight. An example of such method of multi-parameter visualization is multidimensional scaling. This method was used in this paper to present and analyze a set of seven-dimensional data obtained from Janina Mining Plant and Wieczorek Coal Mine. It was decided to examine whether the method of multi-parameter data visualization allows to divide the samples space into areas of various applicability to fluidal gasification process. The "Technological applicability card for coals" was used for this purpose [Sobolewski et al., 2012; 2017], in which the key parameters, important and additional ones affecting the gasification process were described.
NASA Astrophysics Data System (ADS)
Cianci, D.; Furmanski, A.; Karagiorgi, G.; Ross-Lonergan, M.
2017-09-01
We investigate the ability of the short baseline neutrino (SBN) experimental program at Fermilab to test the globally-allowed (3 +N ) sterile neutrino oscillation parameter space. We explicitly consider the globally-allowed parameter space for the (3 +1 ), (3 +2 ), and (3 +3 ) sterile neutrino oscillation scenarios. We find that SBN can probe with 5 σ sensitivity more than 85%, 95% and 55% of the parameter space currently allowed at 99% confidence level for the (3 +1 ), (3 +2 ) and (3 +3 ) scenarios, respectively, with the (3 +N ) allowed space used in these studies closely resembling that of previous studies [J. M. Conrad, C. M. Ignarra, G. Karagiorgi, M. H. Shaevitz, and J. Spitz, Adv. High Energy Phys. 2013, 1 (2013)., 10.1155/2013/163897], calculated using the same methodology. In the case of the (3 +2 ) and (3 +3 ) scenarios, C P -violating phases appear in the oscillation probability terms, leading to observable differences in the appearance probabilities of neutrinos and antineutrinos. We explore SBN's sensitivity to those phases for the (3 +2 ) scenario through the currently planned neutrino beam running, and investigate potential improvements through additional antineutrino beam running. We show that, if antineutrino exposure is considered, for maximal values of the (3 +2 ) C P -violating phase ϕ54, SBN could be the first experiment to directly observe ˜2 σ hints of C P violation associated with an extended lepton sector.
Construction of non-Abelian gauge theories on noncommutative spaces
NASA Astrophysics Data System (ADS)
Jurčo, B.; Möller, L.; Schraml, S.; Schupp, P.; Wess, J.
We present a formalism to explicitly construct non-Abelian gauge theories on noncommutative spaces (induced via a star product with a constant Poisson tensor) from a consistency relation. This results in an expansion of the gauge parameter, the noncommutative gauge potential and fields in the fundamental representation, in powers of a parameter of the noncommutativity. This allows the explicit construction of actions for these gauge theories.
NASA Astrophysics Data System (ADS)
Jubb, Thomas; Kirk, Matthew; Lenz, Alexander
2017-12-01
We have considered a model of Dark Minimal Flavour Violation (DMFV), in which a triplet of dark matter particles couple to right-handed up-type quarks via a heavy colour-charged scalar mediator. By studying a large spectrum of possible constraints, and assessing the entire parameter space using a Markov Chain Monte Carlo (MCMC), we can place strong restrictions on the allowed parameter space for dark matter models of this type.
On the identifiability of inertia parameters of planar Multi-Body Space Systems
NASA Astrophysics Data System (ADS)
Nabavi-Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher
2018-04-01
This work describes a new formulation to study the identifiability characteristics of Serially Linked Multi-body Space Systems (SLMBSS). The process exploits the so called "Lagrange Formulation" to develop a linear form of Equations of Motion w.r.t the system Inertia Parameters (IPs). Having developed a specific form of regressor matrix, we aim to expedite the identification process. The new approach allows analytical as well as numerical identification and identifiability analysis for different SLMBSSs' configurations. Moreover, the explicit forms of SLMBSSs identifiable parameters are derived by analyzing the identifiability characteristics of the robot. We further show that any SLMBSS designed with Variable Configurations Joint allows all IPs to be identifiable through comparing two successive identification outcomes. This feature paves the way to design new class of SLMBSS for which accurate identification of all IPs is at hand. Different case studies reveal that proposed formulation provides fast and accurate results, as required by the space applications. Further studies might be necessary for cases where planar-body assumption becomes inaccurate.
Oktem, Figen S; Ozaktas, Haldun M
2010-08-01
Linear canonical transforms (LCTs) form a three-parameter family of integral transforms with wide application in optics. We show that LCT domains correspond to scaled fractional Fourier domains and thus to scaled oblique axes in the space-frequency plane. This allows LCT domains to be labeled and ordered by the corresponding fractional order parameter and provides insight into the evolution of light through an optical system modeled by LCTs. If a set of signals is highly confined to finite intervals in two arbitrary LCT domains, the space-frequency (phase space) support is a parallelogram. The number of degrees of freedom of this set of signals is given by the area of this parallelogram, which is equal to the bicanonical width product but usually smaller than the conventional space-bandwidth product. The bicanonical width product, which is a generalization of the space-bandwidth product, can provide a tighter measure of the actual number of degrees of freedom, and allows us to represent and process signals with fewer samples.
ISS Mini AERCam Radio Frequency (RF) Coverage Analysis Using iCAT Development Tool
NASA Technical Reports Server (NTRS)
Bolen, Steve; Vazquez, Luis; Sham, Catherine; Fredrickson, Steven; Fink, Patrick; Cox, Jan; Phan, Chau; Panneton, Robert
2003-01-01
The long-term goals of the National Aeronautics and Space Administration's (NASA's) Human Exploration and Development of Space (HEDS) enterprise may require the development of autonomous free-flier (FF) robotic devices to operate within the vicinity of low-Earth orbiting spacecraft to supplement human extravehicular activities (EVAs) in space. Future missions could require external visual inspection of the spacecraft that would be difficult, or dangerous, for humans to perform. Under some circumstance, it may be necessary to employ an un-tethered communications link between the FF and the users. The interactive coverage analysis tool (ICAT) is a software tool that has been developed to perform critical analysis of the communications link performance for a FF operating in the vicinity of the International Space Station (ISS) external environment. The tool allows users to interactively change multiple parameters of the communications link parameters to efficiently perform systems engineering trades on network performance. These trades can be directly translated into design and requirements specifications. This tool significantly reduces the development time in determining a communications network topology by allowing multiple parameters to be changed, and the results of link coverage to be statistically characterized and plotted interactively.
Effects of SO(10)-inspired scalar non-universality on the MSSM parameter space at large tanβ
NASA Astrophysics Data System (ADS)
Ramage, M. R.
2005-08-01
We analyze the parameter space of the ( μ>0, A=0) CMSSM at large tanβ with a small degree of non-universality originating from D-terms and Higgs-sfermion splitting inspired by SO(10) GUT models. The effects of such non-universalities on the sparticle spectrum and observables such as (, B(b→Xγ), the SUSY threshold corrections to the bottom mass and Ωh are examined in detail and the consequences for the allowed parameter space of the model are investigated. We find that even small deviations to universality can result in large qualitative differences compared to the universal case; for certain values of the parameters, we find, even at low m and m, that radiative electroweak symmetry breaking fails as a consequence of either |<0 or mA2<0. We find particularly large departures from the mSugra case for the neutralino relic density, which is sensitive to significant changes in the position and shape of the A resonance and a substantial increase in the Higgsino component of the LSP. However, we find that the corrections to the bottom mass are not sufficient to allow for Yukawa unification.
Search space mapping: getting a picture of coherent laser control.
Shane, Janelle C; Lozovoy, Vadim V; Dantus, Marcos
2006-10-12
Search space mapping is a method for quickly visualizing the experimental parameters that can affect the outcome of a coherent control experiment. We demonstrate experimental search space mapping for the selective fragmentation and ionization of para-nitrotoluene and show how this method allows us to gather information about the dominant trends behind our achieved control.
On the theory of multi-pulse vibro-impact mechanisms
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Metrikin, V. S.; Nikiforova, I. V.; Ipatov, A. A.
2017-11-01
This paper presents a mathematical model of a new multi-striker eccentric shock-vibration mechanism with a crank-sliding bar vibration exciter and an arbitrary number of pistons. Analytical solutions for the parameters of the model are obtained to determine the regions of existence of stable periodic motions. Under the assumption of an absolutely inelastic collision of the piston, we derive equations that single out a bifurcational unattainable boundary in the parameter space, which has a countable number of arbitrarily complex stable periodic motions in its neighbourhood. We present results of numerical simulations, which illustrate the existence of periodic and stochastic motions. The methods proposed in this paper for investigating the dynamical characteristics of the new crank-type conrod mechanisms allow practitioners to indicate regions in the parameter space, which allow tuning these mechanisms into the most efficient periodic mode of operation, and to effectively analyze the main changes in their operational regimes when the system parameters are changed.
Theoretical Analysis of Spacing Parameters of Anisotropic 3D Surface Roughness
NASA Astrophysics Data System (ADS)
Rudzitis, J.; Bulaha, N.; Lungevics, J.; Linins, O.; Berzins, K.
2017-04-01
The authors of the research have analysed spacing parameters of anisotropic 3D surface roughness crosswise to machining (friction) traces RSm1 and lengthwise to machining (friction) traces RSm2. The main issue arises from the RSm2 values being limited by values of sampling length l in the measuring devices; however, on many occasions RSm2 values can exceed l values. Therefore, the mean spacing values of profile irregularities in the longitudinal direction in many cases are not reliable and they should be determined by another method. Theoretically, it is proved that anisotropic surface roughness anisotropy coefficient c=RSm1/RSm2 equals texture aspect ratio Str, which is determined by surface texture standard EN ISO 25178-2. This allows using parameter Str to determine mean spacing of profile irregularities and estimate roughness anisotropy.
Astrophysical neutrinos flavored with beyond the Standard Model physics
NASA Astrophysics Data System (ADS)
Rasmussen, Rasmus W.; Lechner, Lukas; Ackermann, Markus; Kowalski, Marek; Winter, Walter
2017-10-01
We systematically study the allowed parameter space for the flavor composition of astrophysical neutrinos measured at Earth, including beyond the Standard Model theories at production, during propagation, and at detection. One motivation is to illustrate the discrimination power of the next-generation neutrino telescopes such as IceCube-Gen2. We identify several examples that lead to potential deviations from the standard neutrino mixing expectation such as significant sterile neutrino production at the source, effective operators modifying the neutrino propagation at high energies, dark matter interactions in neutrino propagation, or nonstandard interactions in Earth matter. IceCube-Gen2 can exclude about 90% of the allowed parameter space in these cases, and hence will allow us to efficiently test and discriminate between models. More detailed information can be obtained from additional observables such as the energy dependence of the effect, fraction of electron antineutrinos at the Glashow resonance, or number of tau neutrino events.
Natural implementation of neutralino dark matter
NASA Astrophysics Data System (ADS)
King, Steve F.; Roberts, Jonathan P.
2006-09-01
The prediction of neutralino dark matter is generally regarded as one of the successes of the Minimal Supersymmetric Standard Model (MSSM). However the successful regions of parameter space allowed by WMAP and collider constraints are quite restricted. We discuss fine-tuning with respect to both dark matter and Electroweak Symmetry Breaking (EWSB) and explore regions of MSSM parameter space with non-universal gaugino and third family scalar masses in which neutralino dark matter may be implemented naturally. In particular allowing non-universal gauginos opens up the bulk region that allows Bino annihilation via t-channel slepton exchange, leading to ``supernatural dark matter'' corresponding to no fine-tuning at all with respect to dark matter. By contrast we find that the recently proposed ``well tempered neutralino'' regions involve substantial fine-tuning of MSSM parameters in order to satisfy the dark matter constraints, although the fine tuning may be ameliorated if several annihilation channels act simultaneously. Although we have identified regions of ``supernatural dark matter'' in which there is no fine tuning to achieve successful dark matter, the usual MSSM fine tuning to achieve EWSB always remains.
Callahan, S R; Cross, A J; DeDecker, A E; Lindemann, M D; Estienne, M J
2017-01-01
The objective was to determine effects of nursery group-size-floor space allowance on growth, physiology, and hematology of replacement gilts. A 3 × 3 factorial arrangement of treatments was used wherein gilts classified as large, medium, or small ( = 2537; BW = 5.6 ± 0.6 kg) from 13 groups of weaned pigs were placed in pens of 14, 11, or 8 pigs resulting in floor space allowances of 0.15, 0.19, or 0.27 m/pig, respectively. Pigs were weighed on d 0 (weaning) and d 46 (exit from nursery). The ADG was affected by group-size-floor space allowance × pig size ( = 0.04). Large- and medium-size gilts allowed the most floor space had greater ( < 0.05) ADG than similar size gilts allowed the least floor space but for small size gilts there was no effect ( > 0.05) of group size-floor space allowance. Mortality in the nursery was not affected ( > 0.05) by treatment, size, or treatment × size and overall was approximately 2.1%. Complete blood counts and blood chemistry analyses were performed on samples collected at d 6 and 43 from a subsample of gilts ( = 18/group-size-floor space allowance) within a single group. The concentration ( < 0.01) and percentage ( = 0.03) of reticulocytes was the least and red blood cell distribution width the greatest ( < 0.01) in gilts allowed 0.15 m floor space (effects of treatment). Blood calcium was affected by treatment ( = 0.02) and concentrations for gilts allowed the greatest and intermediate amounts of floor space were greater ( < 0.05) than for gilts allowed the least floor space. Serum concentrations of cortisol were not affected by treatment × day ( = 0.27). Cortisol concentrations increased from d 6 to d 43 in all groups and were affected by day ( < 0.01) but not treatment ( = 0.53). Greater space allowance achieved by placing fewer pigs per pen in the nursery affected blood parameters and resulted in large- and medium-size replacement gilts displaying increased ADG. Further study will determine if these effects influence lifetime reproductive capacity and sow longevity.
Figures of merit for present and future dark energy probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mortonson, Michael J.; Huterer, Dragan; Hu, Wayne
2010-09-15
We compare current and forecasted constraints on dynamical dark energy models from Type Ia supernovae and the cosmic microwave background using figures of merit based on the volume of the allowed dark energy parameter space. For a two-parameter dark energy equation of state that varies linearly with the scale factor, and assuming a flat universe, the area of the error ellipse can be reduced by a factor of {approx}10 relative to current constraints by future space-based supernova data and CMB measurements from the Planck satellite. If the dark energy equation of state is described by a more general basis ofmore » principal components, the expected improvement in volume-based figures of merit is much greater. While the forecasted precision for any single parameter is only a factor of 2-5 smaller than current uncertainties, the constraints on dark energy models bounded by -1{<=}w{<=}1 improve for approximately 6 independent dark energy parameters resulting in a reduction of the total allowed volume of principal component parameter space by a factor of {approx}100. Typical quintessence models can be adequately described by just 2-3 of these parameters even given the precision of future data, leading to a more modest but still significant improvement. In addition to advances in supernova and CMB data, percent-level measurement of absolute distance and/or the expansion rate is required to ensure that dark energy constraints remain robust to variations in spatial curvature.« less
NASA Astrophysics Data System (ADS)
Wang, Weijian; Guo, Shu-Yuan; Wang, Zhi-Gang
2016-04-01
In this paper, we study the cofactor 2 zero neutrino mass matrices with the Fritzsch-type structure in charged lepton mass matrix (CLMM). In the numerical analysis, we perform a scan over the parameter space of all the 15 possible patterns to get a large sample of viable scattering points. Among the 15 possible patterns, three of them can accommodate the latest lepton mixing and neutrino mass data. We compare the predictions of the allowed patterns with their counterparts with diagonal CLMM. In this case, the severe cosmology bound on the neutrino mass set a strong constraint on the parameter space, rendering two patterns only marginally allowed. The Fritzsch-type CLMM will have impact on the viable parameter space and give rise to different phenomenological predictions. Each allowed pattern predicts the strong correlations between physical variables, which is essential for model selection and can be probed in future experiments. It is found that under the no-diagonal CLMM, the cofactor zeros structure in neutrino mass matrix is unstable as the running of renormalization group (RG) from seesaw scale to the electroweak scale. A way out of the problem is to propose the flavor symmetry under the models with a TeV seesaw scale. The inverse seesaw model and a loop-induced model are given as two examples.
International Docking Standard (IDSS) Interface Definition Document (IDD) . E; Revision
NASA Technical Reports Server (NTRS)
Kelly, Sean M.; Cryan, Scott P.
2016-01-01
This International Docking System Standard (IDSS) Interface Definition Document (IDD) is the result of a collaboration by the International Space Station membership to establish a standard docking interface to enable on-orbit crew rescue operations and joint collaborative endeavors utilizing different spacecraft. This IDSS IDD details the physical geometric mating interface and design loads requirements. The physical geometric interface requirements must be strictly followed to ensure physical spacecraft mating compatibility. This includes both defined components and areas that are void of components. The IDD also identifies common design parameters as identified in section 3.0, e.g., docking initial conditions and vehicle mass properties. This information represents a recommended set of design values enveloping a broad set of design reference missions and conditions, which if accommodated in the docking system design, increases the probability of successful docking between different spacecraft. This IDD does not address operational procedures or off-nominal situations, nor does it dictate implementation or design features behind the mating interface. It is the responsibility of the spacecraft developer to perform all hardware verification and validation, and to perform final docking analyses to ensure the needed docking performance and to develop the final certification loads for their application. While there are many other critical requirements needed in the development of a docking system such as fault tolerance, reliability, and environments (e.g. vibration, etc.), it is not the intent of the IDSS IDD to mandate all of these requirements; these requirements must be addressed as part of the specific developer's unique program, spacecraft and mission needs. This approach allows designers the flexibility to design and build docking mechanisms to their unique program needs and requirements. The purpose of the IDSS IDD is to provide basic common design parameters to allow developers to independently design compatible docking systems. The IDSS is intended for uses ranging from crewed to autonomous space vehicles, and from Low Earth Orbit (LEO) to deep-space exploration missions.The purpose of the IDSS IDD is to provide basic common design parameters to allow developers to independently design compatible docking systems. The IDSS is intended for uses ranging from crewed to autonomous space vehicles, and from Low Earth Orbit (LEO) to deep-space exploration missions. The purpose of the IDSS IDD is to provide basic common design parameters to allow developers to independently design compatible docking systems. The IDSS is intended for uses ranging from crewed to autonomous space vehicles, and from Low Earth Orbit (LEO) to deep-space exploration missions.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated- annealing, Monte-Carlo, or random- walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient- search optimization, performed on the continuous variables around each branch s configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
NASA Technical Reports Server (NTRS)
Fabanich, William
2014-01-01
SpaceClaim/TD Direct has been used extensively in the development of the Advanced Stirling Radioisotope Generator (ASRG) thermal model. This paper outlines the workflow for that aspect of the task and includes proposed best practices and lessons learned. The ASRG thermal model was developed to predict component temperatures and power output and to provide insight into the prime contractors thermal modeling efforts. The insulation blocks, heat collectors, and cold side adapter flanges (CSAFs) were modeled with this approach. The model was constructed using mostly TD finite difference (FD) surfaces solids. However, some complex geometry could not be reproduced with TD primitives while maintaining the desired degree of geometric fidelity. Using SpaceClaim permitted the import of original CAD files and enabled the defeaturing repair of those geometries. TD Direct (a SpaceClaim add-on from CRTech) adds features that allowed the mark-up of that geometry. These so-called mark-ups control how finite element (FE) meshes were generated and allowed the tagging of features (e.g. edges, solids, surfaces). These tags represent parameters that include: submodels, material properties, material orienters, optical properties, and radiation analysis groups. TD aliases were used for most tags to allow analysis to be performed with a variety of parameter values. Domain-tags were also attached to individual and groups of surfaces and solids to allow them to be used later within TD to populate objects like, for example, heaters and contactors. These tools allow the user to make changes to the geometry in SpaceClaim and then easily synchronize the mesh in TD without having to redefine these objects each time as one would if using TD Mesher.The use of SpaceClaim/TD Direct has helped simplify the process for importing existing geometries and in the creation of high fidelity FE meshes to represent complex parts. It has also saved time and effort in the subsequent analysis.
Robust global identifiability theory using potentials--Application to compartmental models.
Wongvanich, N; Hann, C E; Sirisena, H R
2015-04-01
This paper presents a global practical identifiability theory for analyzing and identifying linear and nonlinear compartmental models. The compartmental system is prolonged onto the potential jet space to formulate a set of input-output equations that are integrals in terms of the measured data, which allows for robust identification of parameters without requiring any simulation of the model differential equations. Two classes of linear and non-linear compartmental models are considered. The theory is first applied to analyze the linear nitrous oxide (N2O) uptake model. The fitting accuracy of the identified models from differential jet space and potential jet space identifiability theories is compared with a realistic noise level of 3% which is derived from sensor noise data in the literature. The potential jet space approach gave a match that was well within the coefficient of variation. The differential jet space formulation was unstable and not suitable for parameter identification. The proposed theory is then applied to a nonlinear immunological model for mastitis in cows. In addition, the model formulation is extended to include an iterative method which allows initial conditions to be accurately identified. With up to 10% noise, the potential jet space theory predicts the normalized population concentration infected with pathogens, to within 9% of the true curve. Copyright © 2015 Elsevier Inc. All rights reserved.
Giordano, Anna; Barresi, Antonello A; Fissore, Davide
2011-01-01
The aim of this article is to show a procedure to build the design space for the primary drying of a pharmaceuticals lyophilization process. Mathematical simulation of the process is used to identify the operating conditions that allow preserving product quality and meeting operating constraints posed by the equipment. In fact, product temperature has to be maintained below a limit value throughout the operation, and the sublimation flux has to be lower than the maximum value allowed by the capacity of the condenser, besides avoiding choking flow in the duct connecting the drying chamber to the condenser. Few experimental runs are required to get the values of the parameters of the model: the dynamic parameters estimation algorithm, an advanced tool based on the pressure rise test, is used to this purpose. A simple procedure is proposed to take into account parameters uncertainty and, thus, it is possible to find the recipes that allow fulfilling the process constraints within the required uncertainty range. The same approach can be effective to take into account the heterogeneity of the batch when designing the freeze-drying recipe. Copyright © 2010 Wiley-Liss, Inc. and the American Pharmacists Association
Aggarwal, Ankush
2017-08-01
Motivated by the well-known result that stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space, which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minima. Based upon the new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.
Effects of Space Flight on Ovarian-Hypophyseal Function in Postpartum Rats
NASA Technical Reports Server (NTRS)
Burden, H. W.; Zary, J.; Lawrence, I. E.; Jonnalagadda, P.; Davis, M.; Hodson, C. A.
1997-01-01
The effect of space flight in a National Aeronautics and Space Administration (NASA) shuttle was studied in pregnant rats. Rats were launched on day 9 of gestation and recovered on day 20 of gestation. On day 20 of gestation, rats were unilaterally hysterectomized and subsequently allowed to go to term and deliver vaginally. There was no effect of space flight on pituitary and ovary mass postpartum. In addition, space flight did not alter healthy and atretic ovarian antral follicle populations, fetal wastage in utero, plasma concentrations of progesterone and luteinizing hormone (LH) or pituitary content of follicle stimulating hormone (FSH). Space flight significantly increased plasma concentrations of FSH and decreased pituitary content of LH at the postpartum sampling time. Collectively, these data show that space flight, initiated during the postimplantation period of pregnancy, and concluded before parturition, is compatible with maintenance of pregnancy and has minimal effects on postpartum hypophyseal parameters; however, none of the ovarian parameters examined was altered by space flight.
Space Weather and the State of Cardiovascular System of a Healthy Human Being
NASA Astrophysics Data System (ADS)
Samsonov, S. N.; Manykina, V. I.; Krymsky, G. F.; Petrova, P. G.; Palshina, A. M.; Vishnevsky, V. V.
The term "space weather" characterizes a state of the near-Earth environmental space. An organism of human being represents an open system so the change of conditions in the environment including the near-Earth environmental space influences the health state of a human being.In recent years many works devoted to the effect of space weather on the life on the Earth, and the degree of such effect has been represented from a zero-order up to apocalypse. To reveal a real effect of space weather on the health of human being the international Russian- Ukrainian experiment "Geliomed" is carried out since 2005 (http://geliomed.immsp.kiev.ua) [Vishnevsky et al., 2009]. The analysis of observational set of data has allowed to show a synchronism and globality of such effect (simultaneous manifestation of space weather parameters in a state of cardiovascular system of volunteer groups removed from each other at a distance over 6000 km). The response of volunteer' cardiovascular system to the changes of space weather parameters were observed even at insignificant values of the Earth's geomagnetic field. But even at very considerable disturbances of space weather parameters a human being healthy did not feel painful symptoms though measurements of objective physiological indices showed their changes.
Strength of the singularities, equation of state and asymptotic expansion in Kaluza-Klein space time
NASA Astrophysics Data System (ADS)
Samanta, G. C.; Goel, Mayank; Myrzakulov, R.
2018-04-01
In this paper an explicit cosmological model which allows cosmological singularities are discussed in Kaluza-Klein space time. The generalized power-law and asymptotic expansions of the baro-tropic fluid index ω and equivalently the deceleration parameter q, in terms of cosmic time 't' are considered. Finally, the strength of the found singularities is discussed.
Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hock, Kiel; Earle, Keith
2016-02-06
In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes’ Theorem to infer parameter values for a Pake Doublet Spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis Hastings Markov Chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. In conclusion, the posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature that affects the parameter uncertainty estimates, particularly for spectra with low signal to noise.
Application of "FLUOR-P" device for analysis of the space flight effects on the intracellular level.
NASA Astrophysics Data System (ADS)
Grigorieva, Olga; Rudimov, Evgeny; Buravkova, Ludmila; Galchuk, Sergey
The mechanisms of cellular gravisensitivity still remain unclear despite the intensive research in the hypogravity effects on cellular function. In most cell culture experiments on unmanned vehicles "Bion" and "Photon", as well as on the ISS only allow post-flight analysis of biological material, including fixed cells is provided. The dynamic evaluation cellular parameters over a prolonged period of time is not possible. Thus, a promising direction is the development of equipment for onboard autonomous experiments. For this purpose, the SSC RF IBMP RAS has developed "FLUOR-P" device for measurement and recording of the dynamic differential fluorescent signal from nano- and microsized objects of organic and inorganic nature (human and animal cells, unicellular algae, bacteria, cellular organelles suspension) in hermetically sealed cuvettes. Besides, the device allows to record the main physical factors affecting the analyzed object (temperature and gravity loads: position in space, any vector acceleration, shock) in sync with the main measurements. The device is designed to perform long-term programmable autonomous experiments in space flight on biological satellites. The device software of allows to carry out complex experiments using cell. Permanent registration of data on built-in flash will give the opportunity to analyze the dynamics of the estimated parameters. FLUOR-P is designed as a monobloc (5.5 kg weight), 8 functional blocks are located in the inner space of the device. Each registration unit of the FLUOR-P has two channels of fluorescence intensity and excitation light source with the wavelength range from 300 nm to 700 nm. During biosatellite "Photon" flight is supposed to conduct a full analysis of the most important intracellular parameters (mitochondria activity and intracellular pH) dynamics under space flight factors and to assess the possible contribution of temperature on the effects of microgravity. Work is supported by Roskosmos and the Russian Academy of Sciences.
Mobile robot motion estimation using Hough transform
NASA Astrophysics Data System (ADS)
Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu
2018-05-01
This paper proposes an algorithm for estimation of mobile robot motion. The geometry of surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot’s range sensors. A similar sample of space geometry in any arbitrary preceding moment of time or the environment map can be used as a reference. The suggested algorithm is invariant to isotropic scaling of samples or map that allows using samples measured in different units and maps made at different scales. The algorithm is based on Hough transform: it maps from measurement space to a straight-line parameters space. In the straight-line parameters, space the problems of estimating rotation, scaling and translation are solved separately breaking down a problem of estimating mobile robot localization into three smaller independent problems. The specific feature of the algorithm presented is its robustness to noise and outliers inherited from Hough transform. The prototype of the system of mobile robot orientation is described.
NASA Technical Reports Server (NTRS)
Fabanich, William A., Jr.
2014-01-01
SpaceClaim/TD Direct has been used extensively in the development of the Advanced Stirling Radioisotope Generator (ASRG) thermal model. This paper outlines the workflow for that aspect of the task and includes proposed best practices and lessons learned. The ASRG thermal model was developed to predict component temperatures and power output and to provide insight into the prime contractor's thermal modeling efforts. The insulation blocks, heat collectors, and cold side adapter flanges (CSAFs) were modeled with this approach. The model was constructed using mostly TD finite difference (FD) surfaces/solids. However, some complex geometry could not be reproduced with TD primitives while maintaining the desired degree of geometric fidelity. Using SpaceClaim permitted the import of original CAD files and enabled the defeaturing/repair of those geometries. TD Direct (a SpaceClaim add-on from CRTech) adds features that allowed the "mark-up" of that geometry. These so-called "mark-ups" control how finite element (FE) meshes are to be generated through the "tagging" of features (e.g. edges, solids, surfaces). These tags represent parameters that include: submodels, material properties, material orienters, optical properties, and radiation analysis groups. TD aliases were used for most tags to allow analysis to be performed with a variety of parameter values. "Domain-tags" were also attached to individual and groups of surfaces and solids to allow them to be used later within TD to populate objects like, for example, heaters and contactors. These tools allow the user to make changes to the geometry in SpaceClaim and then easily synchronize the mesh in TD without having to redefine the objects each time as one would if using TDMesher. The use of SpaceClaim/TD Direct helps simplify the process for importing existing geometries and in the creation of high fidelity FE meshes to represent complex parts. It also saves time and effort in the subsequent analysis.
Bansmann, J; Kielbassa, S; Hoster, H; Weigl, F; Boyen, H G; Wiedwald, U; Ziemann, P; Behm, R J
2007-09-25
The self-organization of diblock copolymers into micellar structures in an appropriate solvent allows the deposition of well ordered arrays of pure metal and alloy nanoparticles on flat surfaces with narrow distributions in particle size and interparticle spacing. Here we investigated the influence of the materials (substrate and polymer) and deposition parameters (temperature and emersion velocity) on the deposition of metal salt loaded micelles by dip-coating from solution and on the order and inter-particle spacing of the micellar deposits and thus of the metal nanoparticle arrays resulting after plasma removal of the polymer shell. For identical substrate and polymer, variation of the process parameters temperature and emersion velocity enables the controlled modification of the interparticle distance within a certain length regime. Moreover, also the degree of hexagonal order of the final array depends sensitively on these parameters.
General gauge mediation at the weak scale
Knapen, Simon; Redigolo, Diego; Shih, David
2016-03-09
We completely characterize General Gauge Mediation (GGM) at the weak scale by solving all IR constraints over the full parameter space. This is made possible through a combination of numerical and analytical methods, based on a set of algebraic relations among the IR soft masses derived from the GGM boundary conditions in the UV. We show how tensions between just a few constraints determine the boundaries of the parameter space: electroweak symmetry breaking (EWSB), the Higgs mass, slepton tachyons, and left-handed stop/sbottom tachyons. While these constraints allow the left-handed squarks to be arbitrarily light, they place strong lower bounds onmore » all of the right-handed squarks. Meanwhile, light EW superpartners are generic throughout much of the parameter space. This is especially the case at lower messenger scales, where a positive threshold correction to m h coming from light Higgsinos and winos is essential in order to satisfy the Higgs mass constraint.« less
Continuous Improvements to East Coast Abort Landings for Space Shuttle Aborts
NASA Technical Reports Server (NTRS)
Butler, Kevin D.
2003-01-01
Improvement initiatives in the areas of guidance, flight control, and mission operations provide increased capability for successful East Coast Abort Landings (ECAL). Automating manual crew procedures in the Space Shuttle's onboard guidance allows faster and more precise commanding of flight control parameters needed for successful ECALs. Automation also provides additional capability in areas not possible with manual control. Operational changes in the mission concept allow for the addition of new landing sites and different ascent trajectories that increase the regions of a successful landing. The larger regions of ECAL capability increase the safety of the crew and Orbiter.
X-Ray diffraction on large single crystals using a powder diffractometer
Jesche, A.; Fix, M.; Kreyssig, A.; ...
2016-06-16
Information on the lattice parameter of single crystals with known crystallographic structure allows for estimations of sample quality and composition. In many cases it is sufficient to determine one lattice parameter or the lattice spacing along a certain, high- symmetry direction, e.g. in order to determine the composition in a substitution series by taking advantage of Vegard’s rule. Here we present a guide to accurate measurements of single crystals with dimensions ranging from 200 μm up to several millimeter using a standard powder diffractometer in Bragg-Brentano geometry. The correction of the error introduced by the sample height and the optimizationmore » of the alignment are discussed in detail. Finally, in particular for single crystals with a plate-like habit, the described procedure allows for measurement of the lattice spacings normal to the plates with high accuracy on a timescale of minutes.« less
A periodic table of effective field theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Clifford; Kampf, Karol; Novotny, Jiri
We systematically explore the space of scalar effective field theories (EFTs) consistent with a Lorentz invariant and local S-matrix. To do so we define an EFT classification based on four parameters characterizing 1) the number of derivatives per interaction, 2) the soft properties of amplitudes, 3) the leading valency of the interactions, and 4) the spacetime dimension. Carving out the allowed space of EFTs, we prove that exceptional EFTs like the non-linear sigma model, Dirac-Born-Infeld theory, and the special Galileon lie precisely on the boundary of allowed theory space. Using on-shell momentum shifts and recursion relations, we prove that EFTsmore » with arbitrarily soft behavior are forbidden and EFTs with leading valency much greater than the spacetime dimension cannot have enhanced soft behavior. We then enumerate all single scalar EFTs in d < 6 and verify that they correspond to known theories in the literature. Finally, our results suggest that the exceptional theories are the natural EFT analogs of gauge theory and gravity because they are one-parameter theories whose interactions are strictly dictated by properties of the S-matrix.« less
A periodic table of effective field theories
Cheung, Clifford; Kampf, Karol; Novotny, Jiri; ...
2017-02-06
We systematically explore the space of scalar effective field theories (EFTs) consistent with a Lorentz invariant and local S-matrix. To do so we define an EFT classification based on four parameters characterizing 1) the number of derivatives per interaction, 2) the soft properties of amplitudes, 3) the leading valency of the interactions, and 4) the spacetime dimension. Carving out the allowed space of EFTs, we prove that exceptional EFTs like the non-linear sigma model, Dirac-Born-Infeld theory, and the special Galileon lie precisely on the boundary of allowed theory space. Using on-shell momentum shifts and recursion relations, we prove that EFTsmore » with arbitrarily soft behavior are forbidden and EFTs with leading valency much greater than the spacetime dimension cannot have enhanced soft behavior. We then enumerate all single scalar EFTs in d < 6 and verify that they correspond to known theories in the literature. Finally, our results suggest that the exceptional theories are the natural EFT analogs of gauge theory and gravity because they are one-parameter theories whose interactions are strictly dictated by properties of the S-matrix.« less
Lago, Laura; Rilo, Benito; Fernández-Formoso, Noelia; DaSilva, Luis
2017-08-01
Rehabilitation with implants is a challenge. Having previous evaluation criteria is key to establishing the best treatment for the patient. In addition to clinical and radiological aspects, the prosthetic parameters must be taken into account in the initial workup, since they allow discrimination between fixed and removable rehabilitation. We present a study protocol that analyzes three basic prosthetic aspects. First, denture space defines the need to replace teeth, tissue, or both. Second, lip support focuses on whether or not to include a flange. Third, the smile line warns of potential risks in esthetic rehabilitation. Combining these parameters allows us to make a decision as to the most suitable type of prosthesis. The proposed protocol is useful for assessing the prosthetic parameters that influence decision making as to the best-suited type of restoration. From this point of view, we think it is appropriate for the initial approach to the patient. In any case, other considerations of study may amend the proposal. © 2016 by the American College of Prosthodontists.
40 CFR 761.347 - First level sampling-waste from existing piles.
Code of Federal Regulations, 2012 CFR
2012-07-01
... separate 19-liter container, allowing only sufficient space at the top of the container to secure the lid... of 1 foot or 30 cm, or if there are too many piles to spread out in the space available, use the... parameters: a particular radial direction, “r,” from the peak at the center of the pile to the outer edge at...
Space motion sickness medications - Interference with biomedical parameters
NASA Technical Reports Server (NTRS)
Vernikos-Danellis, J.; Winget, C. M.; Leach, C. S.; Rosenblatt, L. S.; Lyman, J.; Beljan, J. R.
1976-01-01
The possibility that drugs administered to Skylab 3 and 4 crewmen for space motion sickness may have interfered with their biomedical evaluation in space is investigated. The mixture of scopolamine and dextroamphetamine produced changes which allow a more valid interpretation of the early biomedical changes ocurring in weightlessness. There is no doubt that the dramatic increase in aldosterone excretion is not attributable to the drug, while the drug could have contributed to the in-flight changes observed in cortisol, epinephrine, heart rate and possibly urine volume.
The N2HDM under theoretical and experimental scrutiny
NASA Astrophysics Data System (ADS)
Mühlleitner, Margarete; Sampaio, Marco O. P.; Santos, Rui; Wittbrodt, Jonas
2017-03-01
The N2HDM is based on the CP-conserving 2HDM extended by a real scalar singlet field. Its enlarged parameter space and its fewer symmetry conditions as compared to supersymmetric models allow for an interesting phenomenology compatible with current experimental constraints, while adding to the 2HDM sector the possibility of Higgs-to-Higgs decays with three different Higgs bosons. In this paper the N2HDM is subjected to detailed scrutiny. Regarding the theoretical constraints we implement tests of tree-level perturbativity and vacuum stability. Moreover, we present, for the first time, a thorough analysis of the global minimum of the N2HDM. The model and the theoretical constraints have been implemented in ScannerS, and we provide N2HDECAY, a code based on HDECAY, for the computation of the N2HDM branching ratios and total widths including the state-of-the-art higher order QCD corrections and off-shell decays. We then perform an extensive parameter scan in the N2HDM parameter space, with all theoretical and experimental constraints applied, and analyse its allowed regions. We find that large singlet admixtures are still compatible with the Higgs data and investigate which observables will allow to restrict the singlet nature most effectively in the next runs of the LHC. Similarly to the 2HDM, the N2HDM exhibits a wrong-sign parameter regime, which will be constrained by future Higgs precision measurements.
Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm
NASA Astrophysics Data System (ADS)
Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David
2015-04-01
The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
NASA Astrophysics Data System (ADS)
Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley
2017-10-01
Current and upcoming radio interferometric experiments are aiming to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds quick enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on {σ }8. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.; Doherty, J.
2011-12-01
Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.
Results of the Irkutsk Incoherent Scattering Radar for space debris studies in 2013
NASA Astrophysics Data System (ADS)
Lebedev, Valentin; Kushnarev, Dmitriy; Nevidimov, Nikolay
We present result of space object (SO) registration received on the Irkutsk Incoherent Scattering Radar (IISR) in June 2013 during regular ionospheric measurement. Diagnostic the of the radar for definition of the SO characteristics: range, beam velocity, azimuth angle, elevation, and signal amplitude were improved after the carried-out technological modernization and SO we have possibility of simultaneous measurement of parameters of parameters ionosphere and SO. Now the IISR new hardware-software complex allows to operate in a mode of ionospheric measurements up to 1000 SO flights per day, and to register objects of 10 cm in size at range of 800-900 km.
In-Space Radiator Shape Optimization using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Kittredge, Ken; Tinker, Michael; SanSoucie, Michael
2006-01-01
Future space exploration missions will require the development of more advanced in-space radiators. These radiators should be highly efficient and lightweight, deployable heat rejection systems. Typical radiators for in-space heat mitigation commonly comprise a substantial portion of the total vehicle mass. A small mass savings of even 5-10% can greatly improve vehicle performance. The objective of this paper is to present the development of detailed tools for the analysis and design of in-space radiators using evolutionary computation techniques. The optimality criterion is defined as a two-dimensional radiator with a shape demonstrating the smallest mass for the greatest overall heat transfer, thus the end result is a set of highly functional radiator designs. This cross-disciplinary work combines topology optimization and thermal analysis design by means of a genetic algorithm The proposed design tool consists of the following steps; design parameterization based on the exterior boundary of the radiator, objective function definition (mass minimization and heat loss maximization), objective function evaluation via finite element analysis (thermal radiation analysis) and optimization based on evolutionary algorithms. The radiator design problem is defined as follows: the input force is a driving temperature and the output reaction is heat loss. Appropriate modeling of the space environment is added to capture its effect on the radiator. The design parameters chosen for this radiator shape optimization problem fall into two classes, variable height along the width of the radiator and a spline curve defining the -material boundary of the radiator. The implementation of multiple design parameter schemes allows the user to have more confidence in the radiator optimization tool upon demonstration of convergence between the two design parameter schemes. This tool easily allows the user to manipulate the driving temperature regions thus permitting detailed design of in-space radiators for unique situations. Preliminary results indicate an optimized shape following that of the temperature distribution regions in the "cooler" portions of the radiator. The results closely follow the expected radiator shape.
NASA Technical Reports Server (NTRS)
Poberezhskiy, Ilya; Chang, Daniel; Erlig, Hernan
2011-01-01
Non Planar Ring Oscillator (NPRO) lasers are highly attractive for metrology applications. NPRO reliability for prolonged space missions is limited by reliability of 808 nm pump diodes. Combined laser farm aging parameter allows comparing different bias approaches. Monte-Carlo software developed to calculate the reliability of laser pump architecture, perform parameter sensitivity studies To meet stringent Space Interferometry Mission (SIM) Lite lifetime reliability / output power requirements, we developed a single-mode Laser Pump Module architecture that: (1) provides 2 W of power at 808 nm with >99.7% reliability for 5.5 years (2) consists of 37 de-rated diode lasers operating at -5C, with outputs combined in a very low loss 37x1 all-fiber coupler
The INAF/IAPS Plasma Chamber for ionospheric simulation experiment
NASA Astrophysics Data System (ADS)
Diego, Piero
2016-04-01
The plasma chamber is particularly suitable to perform studies for the following applications: - plasma compatibility and functional tests on payloads envisioned to operate in the ionosphere (e.g. sensors onboard satellites, exposed to the external plasma environment); - calibration/testing of plasma diagnostic sensors; - characterization and compatibility tests on components for space applications (e.g. optical elements, harness, satellite paints, photo-voltaic cells, etc.); - experiments on satellite charging in a space plasma environment; - tests on active experiments which use ion, electron or plasma sources (ion thrusters, hollow cathodes, field effect emitters, plasma contactors, etc.); - possible studies relevant to fundamental space plasma physics. The facility consists of a large volume vacuum tank (a cylinder of length 4.5 m and diameter 1.7 m) equipped with a Kaufman type plasma source, operating with Argon gas, capable to generate a plasma beam with parameters (i.e. density and electron temperature) close to the values encountered in the ionosphere at F layer altitudes. The plasma beam (A+ ions and electrons) is accelerated into the chamber at a velocity that reproduces the relative motion between an orbiting satellite and the ionosphere (≈ 8 km/s). This feature, in particular, allows laboratory simulations of the actual compression and depletion phenomena which take place in the ram and wake regions around satellites moving through the ionosphere. The reproduced plasma environment is monitored using Langmuir Probes (LP) and Retarding Potential Analyzers (RPA). These sensors can be automatically moved within the experimental space using a sled mechanism. Such a feature allows the acquisition of the plasma parameters all around the space payload installed into the chamber for testing. The facility is currently in use to test the payloads of CSES satellite (Chinese Seismic Electromagnetic Satellite) devoted to plasma parameters and electric field measurements in a polar orbit at 500 km altitude.
Biomedical engineering strategies in system design space.
Savageau, Michael A
2011-04-01
Modern systems biology and synthetic bioengineering face two major challenges in relating properties of the genetic components of a natural or engineered system to its integrated behavior. The first is the fundamental unsolved problem of relating the digital representation of the genotype to the analog representation of the parameters for the molecular components. For example, knowing the DNA sequence does not allow one to determine the kinetic parameters of an enzyme. The second is the fundamental unsolved problem of relating the parameters of the components and the environment to the phenotype of the global system. For example, knowing the parameters does not tell one how many qualitatively distinct phenotypes are in the organism's repertoire or the relative fitness of the phenotypes in different environments. These also are challenges for biomedical engineers as they attempt to develop therapeutic strategies to treat pathology or to redirect normal cellular functions for biotechnological purposes. In this article, the second of these fundamental challenges will be addressed, and the notion of a "system design space" for relating the parameter space of components to the phenotype space of bioengineering systems will be focused upon. First, the concept of a system design space will be motivated by introducing one of its key components from an intuitive perspective. Second, a simple linear example will be used to illustrate a generic method for constructing the design space in which qualitatively distinct phenotypes can be identified and counted, their fitness analyzed and compared, and their tolerance to change measured. Third, two examples of nonlinear systems from different areas of biomedical engineering will be presented. Finally, after giving reference to a few other applications that have made use of the system design space approach to reveal important design principles, some concluding remarks concerning challenges and opportunities for further development will be made.
ATTIRE (analytical tools for thermal infrared engineering): A sensor simulation and modeling package
NASA Astrophysics Data System (ADS)
Jaggi, S.
1993-02-01
The Advanced Sensor Development Laboratory (ASDL) at the Stennis Space Center develops, maintains and calibrates remote sensing instruments for the National Aeronautics & Space Administration (NASA). To perform system design trade-offs, analysis, and establish system parameters, ASDL has developed a software package for analytical simulation of sensor systems. This package called 'Analytical Tools for Thermal InfraRed Engineering' - ATTIRE, simulates the various components of a sensor system. The software allows each subsystem of the sensor to be analyzed independently for its performance. These performance parameters are then integrated to obtain system level information such as Signal-to-Noise Ratio (SNR), Noise Equivalent Radiance (NER), Noise Equivalent Temperature Difference (NETD) etc. This paper describes the uses of the package and the physics that were used to derive the performance parameters.
Clustering fossils in solid inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhshik, Mohammad, E-mail: m.akhshik@ipm.ir
In solid inflation the single field non-Gaussianity consistency condition is violated. As a result, the long tenor perturbation induces observable clustering fossils in the form of quadrupole anisotropy in large scale structure power spectrum. In this work we revisit the bispectrum analysis for the scalar-scalar-scalar and tensor-scalar-scalar bispectrum for the general parameter space of solid. We consider the parameter space of the model in which the level of non-Gaussianity generated is consistent with the Planck constraints. Specializing to this allowed range of model parameter we calculate the quadrupole anisotropy induced from the long tensor perturbations on the power spectrum ofmore » the scalar perturbations. We argue that the imprints of clustering fossil from primordial gravitational waves on large scale structures can be detected from the future galaxy surveys.« less
Pinching parameters for open (super) strings
NASA Astrophysics Data System (ADS)
Playle, Sam; Sciuto, Stefano
2018-02-01
We present an approach to the parametrization of (super) Schottky space obtained by sewing together three-punctured discs with strips. Different cubic ribbon graphs classify distinct sets of pinching parameters; we show how they are mapped onto each other. The parametrization is particularly well-suited to describing the region within (super) moduli space where open bosonic or Neveu-Schwarz string propagators become very long and thin, which dominates the IR behaviour of string theories. We show how worldsheet objects such as the Green's function converge to graph theoretic objects such as the Symanzik polynomials in the α ' → 0 limit, allowing us to see how string theory reproduces the sum over Feynman graphs. The (super) string measure takes on a simple and elegant form when expressed in terms of these parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Edward K.; Cornish, Neil J.
Massive black hole binaries are key targets for the space based gravitational wave Laser Interferometer Space Antenna (LISA). Several studies have investigated how LISA observations could be used to constrain the parameters of these systems. Until recently, most of these studies have ignored the higher harmonic corrections to the waveforms. Here we analyze the effects of the higher harmonics in more detail by performing extensive Monte Carlo simulations. We pay particular attention to how the higher harmonics impact parameter correlations, and show that the additional harmonics help mitigate the impact of having two laser links fail, by allowing for anmore » instantaneous measurement of the gravitational wave polarization with a single interferometer channel. By looking at parameter correlations we are able to explain why certain mass ratios provide dramatic improvements in certain parameter estimations, and illustrate how the improved polarization measurement improves the prospects for single interferometer operation.« less
Finite frequency shear wave splitting tomography: a model space search approach
NASA Astrophysics Data System (ADS)
Mondal, P.; Long, M. D.
2017-12-01
Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov Chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ˜50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.
NASA Technical Reports Server (NTRS)
Sonnenfeld, Gerald
1995-01-01
The purpose of this study is to support Russian space flight experiments carried out on rats flown aboard Space Shuttle Mission SLS-2. The Russian experiments were designed to determine the effects of space flight on immunological parameters. The Russian experiment included the first in-flight dissection of rodents that allowed the determination of kinetics of when space flight affected immune responses. The support given the Russians by this laboratory was to carry out assays for immunologically important cytokines that could not readily be carried out in their home laboratories. These included essays of interleukin-1, interleukin-6, interferon-gamma and possibly other cytokines.
Unmanned Systems: A Lab Based Robotic Arm for Grasping Phase II
2016-12-01
Leap Motion Controller, inverse kinematics, DH parameters. 15. NUMBER OF PAGES 89 16. PRICE CODE 17. SECURITY CLASSIFICATION OF REPORT...robotic actuator. Inverse kinematics and Denavit-Hartenberg (DH) parameters will be briefly explained. A. POSITION ANALYSIS According to [3] and... inverse kinematic” method and allows us to calculate the actuator’s position in order to move the robot’s end effector to a specific point in space
Space station contamination modeling
NASA Technical Reports Server (NTRS)
Gordon, T. D.
1989-01-01
Current plans for the operation of Space Station Freedom allow the orbit to decay to approximately an altitude of 200 km before reboosting to approximately 450 km. The Space Station will encounter dramatically increasing ambient and induced environmental effects as the orbit decays. Unfortunately, Shuttle docking, which has been of concern as a high contamination period, will likely occur during the time when the station is in the lowest orbit. The combination of ambient and induced environments along with the presence of the docked Shuttle could cause very severe contamination conditions at the lower orbital altitudes prior to Space Station reboost. The purpose here is to determine the effects on the induced external environment of Space Station Freedom with regard to the proposed changes in altitude. The change in the induced environment will be manifest in several parameters. The ambient density buildup in front of ram facing surfaces will change. The source of such contaminants can be outgassing/offgassing surfaces, leakage from the pressurized modules or experiments, purposeful venting, and thruster firings. The third induced environment parameter with altitude dependence is the glow. In order to determine the altitude dependence of the induced environment parameters, researchers used the integrated Spacecraft Environment Model (ISEM) which was developed for Marshall Space Flight Center. The analysis required numerous ISEM runs. The assumptions and limitations for the ISEM runs are described.
Magnetic resonance spectra and statistical geometry
USDA-ARS?s Scientific Manuscript database
Methods of statistical geometry are introduced which allow one to estimate, on the basis of computable criteria, the conditions under which maximally informative data may be collected. We note the important role of constraints that introduce curvature into parameter space and discuss the appropriate...
Movement Limitation and Immune Responses of Rhesus Monkeys
NASA Technical Reports Server (NTRS)
Sonnenfeld, Gerald; Morton, Darla S.; Swiggett, Jeanene P.; Hakenewerth, Anne M.; Fowler, Nina A.
1993-01-01
The effects of restraint on immunological parameters was determined in an 18 day ARRT (adult rhesus restraint test). The monkeys were restrained for 18 days in the experimental station for the orbiting primate (ESOP), the chair of choice for Space Shuttle experiments. Several immunological parameters were determined using peripheral blood, bone marrow, and lymph node specimens from the monkeys. The parameters included: response of bone marrow cells to GM-CSF (granulocyte-macrophage colony stimulating factor), leukocyte subset distribution, and production of IFN-alpha (interferon-alpha) and IFN-gamma (interferon-gamma). The only parameter changed after 18 days of restraint was the percentage of CDB+ T cells. No other immunological parameters showed changes due to restraint. Handling and changes in housing prior to the restraint period did apparently result in some restraint-independent immunological changes. Handling must be kept to a minimum and the animals allowed time to recover prior to flight. All experiments must be carefully controlled. Restraint does not appear to be a major issue regarding the effects of space flight on immune responses.
Spaceflight and immune responses of rhesus monkeys
NASA Technical Reports Server (NTRS)
Sonnenfeld, Gerald; Morton, Darla S.; Swiggett, Jeanene P.; Hakenewerth, Anne M.; Fowler, Nina A.
1995-01-01
The effects of restraint on immunological parameters was determined in an 18 day ARRT (adult rhesus restraint test). The monkeys were restrained for 18 days in the experimental station for the orbiting primate (ESOP), the chair of choice for Space Shuttle experiments. Several immunological parameters were determined using peripheral blood, bone marrow, and lymph node specimens from the monkeys. The parameters included: response of bone marrow cells to GM-CSF (granulocyte-macrophage colony stimulating factor), leukocyte subset distribution, and production of IFN-a (interferon-alpha) and IFN-gamma (interferon-gamma). The only parameter changed after 18 days of restraint was the percentage of CD8+ T cells. No other immunological parameters showed changes due to restraint. Handling and changes in housing prior to the restraint period did apparently result in some restraint-independent immunological changes. Handling must be kept to a minimum and the animals allowed time to recover prior to flight. All experiments must be carefully controlled. Restraint does not appear to be a major issue regarding the effects of space flight on immune responses.
Tommasino, Paolo; Campolo, Domenico
2017-01-01
A major challenge in robotics and computational neuroscience is relative to the posture/movement problem in presence of kinematic redundancy. We recently addressed this issue using a principled approach which, in conjunction with nonlinear inverse optimization, allowed capturing postural strategies such as Donders' law. In this work, after presenting this general model specifying it as an extension of the Passive Motion Paradigm, we show how, once fitted to capture experimental postural strategies, the model is actually able to also predict movements. More specifically, the passive motion paradigm embeds two main intrinsic components: joint damping and joint stiffness. In previous work we showed that joint stiffness is responsible for static postures and, in this sense, its parameters are regressed to fit to experimental postural strategies. Here, we show how joint damping, in particular its anisotropy, directly affects task-space movements. Rather than using damping parameters to fit a posteriori task-space motions, we make the a priori hypothesis that damping is proportional to stiffness. This remarkably allows a postural-fitted model to also capture dynamic performance such as curvature and hysteresis of task-space trajectories during wrist pointing tasks, confirming and extending previous findings in literature. PMID:29249954
NASA Astrophysics Data System (ADS)
Puente, Carlos E.; Maskey, Mahesh L.; Sivakumar, Bellie
2017-04-01
A deterministic geometric approach, the fractal-multifractal (FM) method, is adapted in order to encode highly intermittent daily rainfall records observed over a year. Using such a notion, this research investigates the complexity of rainfall in various stations within the State of California. Specifically, records gathered at (from South to North) Cherry Valley, Merced, Sacramento and Shasta Dam, containing 59, 116, 115 and 72 years, all ending at water year 2015, were encoded and analyzed in detail. The analysis reveals that: (a) the FM approach yields faithful encodings of all records, by years, with mean square and maximum errors in accumulated rain that are less than a mere 2% and 10%, respectively; (b) the evolution of the corresponding "best" FM parameters, allowing visualization of the inter-annual rainfall dynamics from a reduced vantage point, exhibit implicit variability that precludes discriminating between sites and extrapolating to the future; (c) the evolution of the FM parameters, restricted to specific regions within space, allows finding sensible future simulations; and (d) the rain signals at all sites may be termed "equally complex," as usage of k-means clustering and conventional phase space analysis of FM parameters yields comparable results for all sites.
Top-philic dark matter within and beyond the WIMP paradigm
NASA Astrophysics Data System (ADS)
Garny, Mathias; Heisig, Jan; Hufnagel, Marco; Lülf, Benedikt
2018-04-01
We present a comprehensive analysis of top-philic Majorana dark matter that interacts via a colored t -channel mediator. Despite the simplicity of the model—introducing three parameters only—it provides an extremely rich phenomenology allowing us to accommodate the relic density for a large range of coupling strengths spanning over 6 orders of magnitude. This model features all "exceptional" mechanisms for dark matter freeze-out, including the recently discovered conversion-driven freeze-out mode, with interesting signatures of long-lived colored particles at colliders. We constrain the cosmologically allowed parameter space with current experimental limits from direct, indirect and collider searches, with special emphasis on light dark matter below the top mass. In particular, we explore the interplay between limits from Xenon1T, Fermi-LAT and AMS-02 as well as limits from stop, monojet and Higgs invisible decay searches at the LHC. We find that several blind spots for light dark matter evade current constraints. The region in parameter space where the relic density is set by the mechanism of conversion-driven freeze-out can be conclusively tested by R -hadron searches at the LHC with 300 fb-1 .
On the generation of magnetized collisionless shocks in the large plasma device
NASA Astrophysics Data System (ADS)
Schaeffer, D. B.; Winske, D.; Larson, D. J.; Cowee, M. M.; Constantin, C. G.; Bondarenko, A. S.; Clark, S. E.; Niemann, C.
2017-04-01
Collisionless shocks are common phenomena in space and astrophysical systems, and in many cases, the shocks can be modeled as the result of the expansion of a magnetic piston though a magnetized ambient plasma. Only recently, however, have laser facilities and diagnostic capabilities evolved sufficiently to allow the detailed study in the laboratory of the microphysics of piston-driven shocks. We review experiments on collisionless shocks driven by a laser-produced magnetic piston undertaken with the Phoenix laser laboratory and the Large Plasma Device at the University of California, Los Angeles. The experiments span a large parameter space in laser energy, background magnetic field, and ambient plasma properties that allow us to probe the physics of piston-ambient energy coupling, the launching of magnetosonic solitons, and the formation of subcritical shocks. The results indicate that piston-driven magnetized collisionless shocks in the laboratory can be characterized with a small set of dimensionless formation parameters that place the formation process in an organized and predictive framework.
Lepton flavorful fifth force and depth-dependent neutrino matter interactions
NASA Astrophysics Data System (ADS)
Wise, Mark B.; Zhang, Yue
2018-06-01
We consider a fifth force to be an interaction that couples to matter with a strength that grows with the number of atoms. In addition to competing with the strength of gravity a fifth force can give rise to violations of the equivalence principle. Current long range constraints on the strength and range of fifth forces are very impressive. Amongst possible fifth forces are those that couple to lepton flavorful charges L e - L μ or L e - L τ . They have the property that their range and strength are also constrained by neutrino interactions with matter. In this brief note we review the existing constraints on the allowed parameter space in gauged U{(1)}_{L_e-{L}_{μ },{L}_{τ }} . We find two regions where neutrino oscillation experiments are at the frontier of probing such a new force. In particular, there is an allowed range of parameter space where neutrino matter interactions relevant for long baseline oscillation experiments depend on the depth of the neutrino beam below the surface of the earth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaeffer, D. B.; Winske, D.; Larson, D. J.
Collisionless shocks are common phenomena in space and astrophysical systems, and in many cases, the shocks can be modeled as the result of the expansion of a magnetic piston though a magnetized ambient plasma. Only recently, however, have laser facilities and diagnostic capabilities evolved sufficiently to allow the detailed study in the laboratory of the microphysics of piston-driven shocks. We review experiments on collisionless shocks driven by a laser-produced magnetic piston undertaken with the Phoenix laser laboratory and the Large Plasma Device at the University of California, Los Angeles. The experiments span a large parameter space in laser energy, backgroundmore » magnetic field, and ambient plasma properties that allow us to probe the physics of piston-ambient energy coupling, the launching of magnetosonic solitons, and the formation of subcritical shocks. Here, the results indicate that piston-driven magnetized collisionless shocks in the laboratory can be characterized with a small set of dimensionless formation parameters that place the formation process in an organized and predictive framework.« less
On the generation of magnetized collisionless shocks in the large plasma device
Schaeffer, D. B.; Winske, D.; Larson, D. J.; ...
2017-03-22
Collisionless shocks are common phenomena in space and astrophysical systems, and in many cases, the shocks can be modeled as the result of the expansion of a magnetic piston though a magnetized ambient plasma. Only recently, however, have laser facilities and diagnostic capabilities evolved sufficiently to allow the detailed study in the laboratory of the microphysics of piston-driven shocks. We review experiments on collisionless shocks driven by a laser-produced magnetic piston undertaken with the Phoenix laser laboratory and the Large Plasma Device at the University of California, Los Angeles. The experiments span a large parameter space in laser energy, backgroundmore » magnetic field, and ambient plasma properties that allow us to probe the physics of piston-ambient energy coupling, the launching of magnetosonic solitons, and the formation of subcritical shocks. Here, the results indicate that piston-driven magnetized collisionless shocks in the laboratory can be characterized with a small set of dimensionless formation parameters that place the formation process in an organized and predictive framework.« less
NASA Astrophysics Data System (ADS)
Fraternale, Federico; Domenicale, Loris; Staffilani, Gigliola; Tordella, Daniela
2018-06-01
This study provides sufficient conditions for the temporal monotonic decay of enstrophy for two-dimensional perturbations traveling in the incompressible, viscous, plane Poiseuille, and Couette flows. Extension of Synge's procedure [J. L. Synge, Proc. Fifth Int. Congress Appl. Mech. 2, 326 (1938); Semicentenn. Publ. Am. Math. Soc. 2, 227 (1938)] to the initial-value problem allow us to find the region of the wave-number-Reynolds-number map where the enstrophy of any initial disturbance cannot grow. This region is wider than that of the kinetic energy. We also show that the parameter space is split into two regions with clearly distinct propagation and dispersion properties.
Constraining sterile neutrinos with AMANDA and IceCube atmospheric neutrino data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esmaili, Arman; Peres, O.L.G.; Halzen, Francis, E-mail: aesmaili@ifi.unicamp.br, E-mail: halzen@icecube.wisc.edu, E-mail: orlando@ifi.unicamp.br
2012-11-01
We demonstrate that atmospheric neutrino data accumulated with the AMANDA and the partially deployed IceCube experiments constrain the allowed parameter space for a hypothesized fourth sterile neutrino beyond the reach of a combined analysis of all other experiments, for Δm{sup 2}{sub 41}∼<1 eV{sup 2}. Although the IceCube data wins the statistics in the analysis, the advantage of a combined analysis of AMANDA and IceCube data is the partial remedy of yet unknown instrumental systematic uncertainties. We also illustrate the sensitivity of the completed IceCube detector, that is now taking data, to the parameter space of 3+1 model.
Phase transitions in tumor growth: V what can be expected from cancer glycolytic oscillations?
NASA Astrophysics Data System (ADS)
Martin, R. R.; Montero, S.; Silva, E.; Bizzarri, M.; Cocho, G.; Mansilla, R.; Nieto-Villar, J. M.
2017-11-01
Experimental evidence confirms the existence of glycolytic oscillations in cancer, which allows it to self-organize in time and space far from thermodynamic equilibrium, and provides it with high robustness, complexity and adaptability. A kinetic model is proposed for HeLa tumor cells grown in hypoxia conditions. It shows oscillations in a wide range of parameters. Two control parameters (glucose and inorganic phosphate concentration) were varied to explore the phase space, showing also the presence of limit cycles and bifurcations. The complexity of the system was evaluated by focusing on stationary state stability and Lempel-Ziv complexity. Moreover, the calculated entropy production rate was demonstrated behaving as a Lyapunov function.
Covey, Curt; Lucas, Donald D.; Tannahill, John; ...
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination.This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling,more » the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.« less
Future missions for observing Earth's changing gravity field: a closed-loop simulation tool
NASA Astrophysics Data System (ADS)
Visser, P. N.
2008-12-01
The GRACE mission has successfully demonstrated the observation from space of the changing Earth's gravity field at length and time scales of typically 1000 km and 10-30 days, respectively. Many scientific communities strongly advertise the need for continuity of observing Earth's gravity field from space. Moreover, a strong interest is being expressed to have gravity missions that allow a more detailed sampling of the Earth's gravity field both in time and in space. Designing a gravity field mission for the future is a complicated process that involves making many trade-offs, such as trade-offs between spatial, temporal resolution and financial budget. Moreover, it involves the optimization of many parameters, such as orbital parameters (height, inclination), distinction between which gravity sources to observe or correct for (for example are gravity changes due to ocean currents a nuisance or a signal to be retrieved?), observation techniques (low-low satellite-to-satellite tracking, satellite gravity gradiometry, accelerometers), and satellite control systems (drag-free?). A comprehensive tool has been developed and implemented that allows the closed-loop simulation of gravity field retrievals for different satellite mission scenarios. This paper provides a description of this tool. Moreover, its capabilities are demonstrated by a few case studies. Acknowledgments. The research that is being done with the closed-loop simulation tool is partially funded by the European Space Agency (ESA). An important component of the tool is the GEODYN software, kindly provided by NASA Goddard Space Flight Center in Greenbelt, Maryland.
Remote Access to Earth Science Data by Content, Space and Time
NASA Technical Reports Server (NTRS)
Dobinson, E.; Raskin, G.
1998-01-01
This demo presents the combination on an http-based client/server application that facilitates internet access to Earth science data coupled with a Java applet GUI that allows the user to graphically select data based on spatial and temporal coverage plots and scientific parameters.
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander
2018-01-01
Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases—the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces and a better understanding of neural network models and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turn around times for the assessment of these models, due to interactive visualization while the simulation is computed. PMID:29937723
Implications of improved Higgs mass calculations for supersymmetric models.
Buchmueller, O; Dolan, M J; Ellis, J; Hahn, T; Heinemeyer, S; Hollik, W; Marrouche, J; Olive, K A; Rzehak, H; de Vries, K J; Weiglein, G
We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, [Formula: see text], in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyse the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of [Formula: see text]and ATLAS searches for [Formula: see text] events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours [Formula: see text], though not in the NUHM1 or NUHM2.
NASA Astrophysics Data System (ADS)
Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz
2017-08-01
Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized and the vapour might be trapped within the melting pool leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in lap joint and allows the zinc vapour escape during welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of dimpling process for effective implementation in real manufacturing system taking into consideration inherent changes in variability of process parameters. This paper introduces a methodology to develop (i) surrogate model for dimpling process characterization considering multiple-inputs (i.e. key control characteristics) and multiple-outputs (i.e. key performance indicators) system by conducting physical experimentation and using multivariate adaptive regression splines; (ii) process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and, (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements such as maximizing the dimple height while minimizing the dimple lower surface area.
Gauge-independent renormalization of the N2HDM
NASA Astrophysics Data System (ADS)
Krause, Marcel; López-Val, David; Mühlleitner, Margarete; Santos, Rui
2017-12-01
The Next-to-Minimal 2-Higgs-Doublet Model (N2HDM) is an interesting benchmark model for a Higgs sector consisting of two complex doublet and one real singlet fields. Like the Next-to-Minimal Supersymmetric extension (NMSSM) it features light Higgs bosons that could have escaped discovery due to their singlet admixture. Thereby, the model allows for various different Higgs-to-Higgs decay modes. Contrary to the NMSSM, however, the model is not subject to supersymmetric relations restraining its allowed parameter space and its phenomenology. For the correct determination of the allowed parameter space, the correct interpretation of the LHC Higgs data and the possible distinction of beyond-the-Standard Model Higgs sectors higher order corrections to the Higgs boson observables are crucial. This requires not only their computation but also the development of a suitable renormalization scheme. In this paper we have worked out the renormalization of the complete N2HDM and provide a scheme for the gauge-independent renormalization of the mixing angles. We discuss the renormalization of the Z_2 soft breaking parameter m 12 2 and the singlet vacuum expectation value v S . Both enter the Higgs self-couplings relevant for Higgs-to-Higgs decays. We apply our renormalization scheme to different sample processes such as Higgs decays into Z bosons and decays into a lighter Higgs pair. Our results show that the corrections may be sizable and have to be taken into account for reliable predictions.
VIP: A knowledge-based design aid for the engineering of space systems
NASA Technical Reports Server (NTRS)
Lewis, Steven M.; Bellman, Kirstie L.
1990-01-01
The Vehicles Implementation Project (VIP), a knowledge-based design aid for the engineering of space systems is described. VIP combines qualitative knowledge in the form of rules, quantitative knowledge in the form of equations, and other mathematical modeling tools. The system allows users rapidly to develop and experiment with models of spacecraft system designs. As information becomes available to the system, appropriate equations are solved symbolically and the results are displayed. Users may browse through the system, observing dependencies and the effects of altering specific parameters. The system can also suggest approaches to the derivation of specific parameter values. In addition to providing a tool for the development of specific designs, VIP aims at increasing the user's understanding of the design process. Users may rapidly examine the sensitivity of a given parameter to others in the system and perform tradeoffs or optimizations of specific parameters. A second major goal of VIP is to integrate the existing corporate knowledge base of models and rules into a central, symbolic form.
Space orientation of total hip prosthesis. A method for three-dimensional determination.
Herrlin, K; Selvik, G; Pettersson, H
1986-01-01
A method for in vivo determination of orientation and relation in space of components of total hip prosthesis is described. The method allows for determination of the orientation of the prosthetic components in well defined anatomic planes of the body. Furthermore the range of free motion from neutral position to the point of contact between the edge of the acetabular opening and the neck of the femoral component can be determined in various directions. To assess the accuracy of the calculations a phantom prosthesis was studied in nine different positions and the measurements of the space oriented parameters according to the present method correlated to measurements of the same parameters according to Selvik's stereophotogrammetric method. Good correlation was found. The role of prosthetic malpositioning and component interaction evaluated with the present method in the development of prosthetic loosening and displacement is discussed.
Optimal directed searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Ming, Jing; Krishnan, Badri; Papa, Maria Alessandra; Aulbert, Carsten; Fehrmann, Henning
2016-03-01
Wide parameter space searches for long-lived continuous gravitational wave signals are computationally limited. It is therefore critically important that the available computational resources are used rationally. In this paper we consider directed searches, i.e., targets for which the sky position is known accurately but the frequency and spin-down parameters are completely unknown. Given a list of such potential astrophysical targets, we therefore need to prioritize. On which target(s) should we spend scarce computing resources? What parameter space region in frequency and spin-down should we search through? Finally, what is the optimal search setup that we should use? In this paper we present a general framework that allows us to solve all three of these problems. This framework is based on maximizing the probability of making a detection subject to a constraint on the maximum available computational cost. We illustrate the method for a simplified problem.
Searching for dark absorption with direct detection experiments
Bloch, Itay M.; Essig, Rouven; Tobioka, Kohsaku; ...
2017-06-16
We consider the absorption by bound electrons of dark matter in the form of dark photons and axion-like particles, as well as of dark photons from the Sun, in current and next-generation direct detection experiments. Experiments sensitive to electron recoils can detect such particles with masses between a few eV to more than 10 keV. For dark photon dark matter, we update a previous bound based on XENON10 data and derive new bounds based on data from XENON100 and CDMSlite. We find these experiments to disfavor previously allowed parameter space. Moreover, we derive sensitivity projections for SuperCDMS at SNOLAB formore » silicon and germanium targets, as well as for various possible experiments with scintillating targets (cesium iodide, sodium iodide, and gallium arsenide). The projected sensitivity can probe large new regions of parameter space. For axion-like particles, the same current direction detection data improves on previously known direct-detection constraints but does not bound new parameter space beyond known stellar cooling bounds. However, projected sensitivities of the upcoming SuperCDMS SNOLAB using germanium can go beyond these and even probe parameter space consistent with possible hints from the white dwarf luminosity function. We find similar results for dark photons from the sun. For all cases, direct-detection experiments can have unprecedented sensitivity to dark-sector particles.« less
Searching for dark absorption with direct detection experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloch, Itay M.; Essig, Rouven; Tobioka, Kohsaku
We consider the absorption by bound electrons of dark matter in the form of dark photons and axion-like particles, as well as of dark photons from the Sun, in current and next-generation direct detection experiments. Experiments sensitive to electron recoils can detect such particles with masses between a few eV to more than 10 keV. For dark photon dark matter, we update a previous bound based on XENON10 data and derive new bounds based on data from XENON100 and CDMSlite. We find these experiments to disfavor previously allowed parameter space. Moreover, we derive sensitivity projections for SuperCDMS at SNOLAB formore » silicon and germanium targets, as well as for various possible experiments with scintillating targets (cesium iodide, sodium iodide, and gallium arsenide). The projected sensitivity can probe large new regions of parameter space. For axion-like particles, the same current direction detection data improves on previously known direct-detection constraints but does not bound new parameter space beyond known stellar cooling bounds. However, projected sensitivities of the upcoming SuperCDMS SNOLAB using germanium can go beyond these and even probe parameter space consistent with possible hints from the white dwarf luminosity function. We find similar results for dark photons from the sun. For all cases, direct-detection experiments can have unprecedented sensitivity to dark-sector particles.« less
Vortex topology of rolling and pitching wings
NASA Astrophysics Data System (ADS)
Johnson, Kyle; Thurow, Brian; Wabick, Kevin; Buchholz, James; Berdon, Randall
2017-11-01
A flat, rectangular plate with an aspect ratio of 2 was articulated in roll and pitch, individually and simultaneously, to isolate the effects of each motion. The plate was immersed into a Re = 10,000 flow (based on chord length) to simulate forward, flapping flight. Measurements were made using a 3D-3C plenoptic PIV system to allow for the study of vortex topology in the instantaneous flow, in addition to phase-averaged results. The prominent focus is leading-edge vortex (LEV) stability and the lifespan of shed LEVs. The parameter space involves multiple values of advance coefficient J and reduced frequency k for roll and pitch, respectively. This space aims to determine the influence of each parameter on LEVs, which has been identified as an important factor for the lift enhancement seen in flapping wing flight. A variety of results are to be presented characterizing the variations in vortex topology across this parameter space. This work is supported by the Air Force Office of Scientific Research (Grant Number FA9550-16-1-0107, Dr. Douglas Smith, program manager).
Analytical model of the structureborne interior noise induced by a propeller wake
NASA Technical Reports Server (NTRS)
Junger, M. C.; Garrelick, J. M.; Martinez, R.; Cole, J. E., III
1984-01-01
The structure-borne contribution to the interior noise that is induced by the propeller wake acting on the wing was studied. Analytical models were developed to describe each aspect of this path including the excitation loads, the wing and fuselage structures, and the interior acoustic space. The emphasis is on examining a variety of parameters, and as a result different models were developed to examine specific parameters. The excitation loading on the wing by the propeller wake is modeled by a distribution of rotating potential vortices whose strength is related to the thrust per blade. The response of the wing to this loading is examined using beam models. A model of a beam structurally connected to a cylindrical shell with an internal acoustic fluid was developed to examine the coupling of energy from the wing to the interior space. The model of the acoustic space allows for arbitrary end conditions (e.g., rigid or vibrating end caps). Calculations are presented using these models to compare with a laboratory test configuration as well as for parameters of a prop-fan aircraft.
A proposed experimental search for chameleons using asymmetric parallel plates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrage, Clare; Copeland, Edmund J.; Stevenson, James A., E-mail: Clare.Burrage@nottingham.ac.uk, E-mail: ed.copeland@nottingham.ac.uk, E-mail: james.stevenson@nottingham.ac.uk
2016-08-01
Light scalar fields coupled to matter are a common consequence of theories of dark energy and attempts to solve the cosmological constant problem. The chameleon screening mechanism is commonly invoked in order to suppress the fifth forces mediated by these scalars, sufficiently to avoid current experimental constraints, without fine tuning. The force is suppressed dynamically by allowing the mass of the scalar to vary with the local density. Recently it has been shown that near future cold atoms experiments using atom-interferometry have the ability to access a large proportion of the chameleon parameter space. In this work we demonstrate howmore » experiments utilising asymmetric parallel plates can push deeper into the remaining parameter space available to the chameleon.« less
NASA Astrophysics Data System (ADS)
Kettermann, Michael; von Hagke, Christoph; Urai, Janos L.
2017-04-01
Dilatant faults often form in rocks containing pre-existing joints, but the effects of joints on fault segment linkage and fracture connectivity is not well understood. Studying evolution of dilatancy and influence of fractures on fault development provides insights into geometry of fault zones in brittle rocks and will eventually allow for predicting their subsurface appearance. In an earlier study we recognized the effect of different angles between strike direction of vertical joints and a basement fault on the geometry of a developing fault zone. We now systematically extend the results by varying geometric joint parameters such as joint spacing and vertical extent of the joints and measuring fracture density and connectivity. A reproducibility study shows a small error-range for the measurements, allowing for a confident use of the experimental setup. Analogue models were carried out in a manually driven deformation box (30x28x20 cm) with a 60° dipping pre-defined basement fault and 4.5 cm of displacement. To produce open joints prior to faulting, sheets of paper were mounted in the box to a depth of 5 cm at a spacing of 2.5 cm. We varied the vertical extent of the joints from 5 to 50 mm. Powder was then sieved into the box, embedding the paper almost entirely (column height of 19 cm), and the paper was removed. During deformation we captured structural information by time-lapse photography that allows particle imaging velocimetry analyses (PIV) to detect localized deformation at every increment of displacement. Post-mortem photogrammetry preserves the final 3-dimensional structure of the fault zone. A counterintuitive result is that joint depth is of only minor importance for the evolution of the fault zone. Even very shallow joints form weak areas at which the fault starts to form and propagate. More important is joint spacing. Very large joint spacing leads to faults and secondary fractures that form subparallel to the basement fault. In contrast, small joint spacing results in fault strands that only localize at the pre-existing joints, and secondary fractures that are oriented at high angles to the pre-existing joints. With this new set of experiments we can now quantitatively constrain how (i) the angle between joints and basement fault, (ii) the joint depth and (iii) the joint spacing affect fault zone parameters such as (1) the damage zone width, (2) the density of secondary fractures, (3) map-view area of open gaps or (4) the fracture connectivity. We apply these results to predict subsurface geometries of joint-fault networks in cohesive rocks, e.g. basaltic sequences in Iceland and sandstones in the Canyonlands NP, USA.
Space qualified Nd:YAG laser (phase 1 - design)
NASA Technical Reports Server (NTRS)
Foster, J. D.; Kirk, R. F.
1971-01-01
Results of a design study and preliminary design of a space qualified Nd:YAG laser are presented. A theoretical model of the laser was developed to allow the evaluation of the effects of various parameters on its performance. Various pump lamps were evaluated and sum pumping was considered. Cooling requirements were examined and cooling methods such as radiation, cryogenic and conductive were analysed. Power outputs and efficiences of various configurations and the pump and laser lifetime are discussed. Also considered were modulation and modulating methods.
Gravity at a Quantum Condensate
NASA Astrophysics Data System (ADS)
Atanasov, Victor
2017-07-01
Provided a quantum superconducting condensate is allowed to occupy a curved hyper-plane of space-time, a geometric potential from the kinetic term arises. An energy conservation relation involving the geometric field at every material point in the superconductor can be demonstrated. The induced three-dimensional scalar curvature is directly related to the wavefunction/order parameter of the quantum condensate thus pointing the way to a possible experimental procedure to artificially induce curvature of space-time via change in the electric/probability current density.
Higgs-portal assisted Higgs inflation with a sizeable tensor-to-scalar ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jinsu; Ko, Pyungwon; Park, Wan-Il, E-mail: kimjinsu@kias.re.kr, E-mail: pko@kias.re.kr, E-mail: Wanil.Park@uv.es
We show that the Higgs portal interactions involving extra dark Higgs field can save generically the original Higgs inflation of the standard model (SM) from the problem of a deep non-SM vacuum in the SM Higgs potential. Specifically, we show that such interactions disconnect the top quark pole mass from inflationary observables and allow multi-dimensional parameter space to save the Higgs inflation, thanks to the additional parameters (the dark Higgs boson mass m {sub φ}, the mixing angle α between the SM Higgs H and dark Higgs Φ, and the mixed quartic coupling) affecting RG-running of the Higgs quartic coupling.more » The effect of Higgs portal interactions may lead to a larger tensor-to-scalar ratio, 0.08 ∼< r ∼< 0.1, by adjusting relevant parameters in wide ranges of α and m {sub φ}, some region of which can be probed at future colliders. Performing a numerical analysis we find an allowed region of parameters, matching the latest Planck data.« less
NASA Astrophysics Data System (ADS)
Nechitailo, Galina S.
2016-07-01
MODIFICATIONS OF MORPHOMETRICAL AND PHYSIOLOGICAL PARAMETERS OF PEPPER PLANTS GROWN ON ARTIFICIAL NUTRIENT MEDIUM FOR EXPERIMENTS IN SPACEFLIGHT Lui Min*, Zhao Hui*, Chen Yu*, Lu Jinying*, Li Huasheng*, Sun Qiao*, Nechitajlo G.S.**, Glushchenko N.N.*** *Shenzhou Space Biotechnology Group, China Academy of Space Technology (CAST), **Emanuel Institute of Biochemical Physics of Russian Academy of Sciences (IBCP RAS) mail: spacemal@mail.ru ***V.L. Talrose Institute for Energy Problems of Chemical Physics of Russian Academy of Science (INEPCP RAS) mail: nnglu@ mail.ru In circumstances of space flights, long residence of the staff at space stations and space settlements an optimal engineering system of the life-support allowing to solve a number of technical and psychological problems for successful work and a life of cosmonauts, researchers, etc. is important and prime. In this respect it is necessary to consider growing plants on board of spacecraft as one of the units in a life-support system. It is feasible due to modern development of biotechnologies in growing plants allowing us to receive materials with new improved properties. Thus, a composition and ratio of components of nutrient medium can considerably influence on plants properties. We have developed the nutrient medium in which essential metals such as iron, zinc, copper were added in an electroneutral state in the form of nanoparticles instead of sulfates or other salts of the same metals. Such replacement is appropriate through unique nanoparticles properties: metal nanoparticles are less toxic than their corresponding ionic forms; nanoparticles produce a prolonged effect, serving as a depot for elements in an organism; nanoparticles introduced in biotic doses stimulate the metabolic processes of the organism; nanoparticles effect is multifunctional. Pepper strain LJ-king was used for growing on a nutrient medium with ferrous, zinc, copper nanoparticles in different concentrations. Pepper plants grown on the nutrient medium with metal nanoparticles showed good morphometrical and physiological parameters: seedlings and plants were compact with the developed and active root system.
Pulse shape optimization for electron-positron production in rotating fields
NASA Astrophysics Data System (ADS)
Fillion-Gourdeau, François; Hebenstreit, Florian; Gagnon, Denis; MacLean, Steve
2017-07-01
We optimize the pulse shape and polarization of time-dependent electric fields to maximize the production of electron-positron pairs via strong field quantum electrodynamics processes. The pulse is parametrized in Fourier space by a B -spline polynomial basis, which results in a relatively low-dimensional parameter space while still allowing for a large number of electric field modes. The optimization is performed by using a parallel implementation of the differential evolution, one of the most efficient metaheuristic algorithms. The computational performance of the numerical method and the results on pair production are compared with a local multistart optimization algorithm. These techniques allow us to determine the pulse shape and field polarization that maximize the number of produced pairs in computationally accessible regimes.
THE LITTLEST HIGGS MODEL AND ONE-LOOP ELECTROWEAK PRECISION CONSTRAINTS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CHEN, M.C.; DAWSON,S.
2004-06-16
We present in this talk the one-loop electroweak precision constraints in the Littlest Higgs model, including the logarithmically enhanced contributions from both fermion and scalar loops. We find the one-loop contributions are comparable to the tree level corrections in some regions of parameter space. A low cutoff scale is allowed for a non-zero triplet VEV. Constraints on various other parameters in the model are also discussed. The role of triplet scalars in constructing a consistent renormalization scheme is emphasized.
Using sobol sequences for planning computer experiments
NASA Astrophysics Data System (ADS)
Statnikov, I. N.; Firsov, G. I.
2017-12-01
Discusses the use for research of problems of multicriteria synthesis of dynamic systems method of Planning LP-search (PLP-search), which not only allows on the basis of the simulation model experiments to revise the parameter space within specified ranges of their change, but also through special randomized nature of the planning of these experiments is to apply a quantitative statistical evaluation of influence of change of varied parameters and their pairwise combinations to analyze properties of the dynamic system.Start your abstract here...
An approach to determination of shunt circuits parameters for damping vibrations
NASA Astrophysics Data System (ADS)
Matveenko; Iurlova; Oshmarin; Sevodina; Iurlov
2018-04-01
This paper considers the problem of natural vibrations of a deformable structure containing elements made of piezomaterials. The piezoelectric elements are connected through electrodes to an external electric circuit, which consists of resistive, inductive and capacitive elements. Based on the solution of this problem, the parameters of external electric circuits are searched for to allow optimal passive control of the structural vibrations. The solution to the problem is complex natural vibration frequencies, the real part of which corresponds to the circular eigenfrequency of vibrations and the imaginary part corresponds to its damping rate (damping ratio). The analysis of behaviour of the imaginary parts of complex eigenfrequencies in the space of external circuit parameters allows one to damp given modes of structure vibrations. The effectiveness of the proposed approach is demonstrated using a cantilever-clamped plate and a shell structure in the form of a semi-cylinder connected to series resonant ? circuits.
Ground-based solar astrometric measurements during the PICARD mission
NASA Astrophysics Data System (ADS)
Irbah, A.; Meftah, M.; Corbard, T.; Ikhlef, R.; Morand, F.; Assus, P.; Fodil, M.; Lin, M.; Ducourt, E.; Lesueur, P.; Poiet, G.; Renaud, C.; Rouze, M.
2011-11-01
PICARD is a space mission developed mainly to study the geometry of the Sun. The satellite was launched in June 2010. The PICARD mission has a ground program which is based at the Calern Observatory (Observatoire de la C^ote d'Azur). It will allow recording simultaneous solar images from ground. Astrometric observations of the Sun using ground-based telescopes need however an accurate modelling of optical e®ects induced by atmospheric turbulence. Previous works have revealed a dependence of the Sun radius measurements with the observation conditions (Fried's parameter, atmospheric correlation time(s) ...). The ground instruments consist mainly in SODISM II, replica of the PICARD space instrument and MISOLFA, a generalized daytime seeing monitor. They are complemented by standard sun-photometers and a pyranometer for estimating a global sky quality index. MISOLFA is founded on the observation of Angle-of-Arrival (AA) °uctuations and allows us to analyze atmospheric turbulence optical e®ects on measurements performed by SODISM II. It gives estimations of the coherence parameters characterizing wave-fronts degraded by the atmospheric turbulence (Fried's parameter, size of the isoplanatic patch, the spatial coherence outer scale and atmospheric correlation times). This paper presents an overview of the ground based instruments of PICARD and some results obtained from observations performed at Calern observatory in 2011.
Lavrentyev, A I; Rokhlin, S I
2001-04-01
An ultrasonic method proposed by us for determination of the complete set of acoustical and geometrical properties of a thin isotropic layer between semispaces (J. Acoust. Soc. Am. 102 (1997) 3467) is extended to determination of the properties of a coating on a thin plate. The method allows simultaneous determination of the coating thickness, density, elastic moduli and attenuation (longitudinal and shear) from normal and oblique incidence reflection (transmission) frequency spectra. Reflection (transmission) from the coated plate is represented as a function of six nondimensional parameters of the coating which are determined from two experimentally measured spectra: one at normal and one at oblique incidence. The introduction of the set of nondimensional parameters allows one to transform the reconstruction process from one search in a six-dimensional space to two searches in three-dimensional spaces (one search for normal incidence and one for oblique). Thickness, density, and longitudinal and shear elastic moduli of the coating are calculated from the nondimensional parameters determined. The sensitivity of the method to individual properties and its stability against experimental noise are studied and the inversion algorithm is accordingly optimized. An example of the method and experimental measurement for comparison is given for a polypropylene coating on a steel foil.
Production of high-quality polydisperse construction mixes for additive 3D technologies.
NASA Astrophysics Data System (ADS)
Gerasimov, M. D.; Brazhnik, Yu V.; Gorshkov, P. S.; Latyshev, S. S.
2018-03-01
The paper describes a new design of a mixer allowing production of high quality polydisperse powders, used in additive 3D technologies. A new principle of dry powder particle mixing is considered, implementing a possibility of a close-to-ideal distribution of such particles in common space. A mathematical model of the mixer is presented, allowing evaluating quality indicators of the produced mixture. Experimental results are shown and rational values of process parameters of the mixer are obtained.
Spectrum-doubled heavy vector bosons at the LHC
Appelquist, Thomas; Bai, Yang; Ingoldby, James; ...
2016-01-19
We study a simple effective field theory incorporating six heavy vector bosons together with the standard-model field content. The new particles preserve custodial symmetry as well as an approximate left-right parity symmetry. The enhanced symmetry of the model allows it to satisfy precision electroweak constraints and bounds from Higgs physics in a regime where all the couplings are perturbative and where the amount of fine-tuning is comparable to that in the standard model itself. We find that the model could explain the recently observed excesses in di-boson processes at invariant mass close to 2TeV from LHC Run 1 for amore » range of allowed parameter space. The masses of all the particles differ by no more than roughly 10%. In a portion of the allowed parameter space only one of the new particles has a production cross section large enough to be detectable with the energy and luminosity of Run 1, both via its decay to WZ and to Wh, while the others have suppressed production rates. Furthermore, the model can be tested at the higher-energy and higher-luminosity run of the LHC even for an overall scale of the new particles higher than 3TeV.« less
Lepton Flavorful Fifth Force and Depth-Dependent Neutrino Matter Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wise, Mark B.; Zhang, Yue
We consider a fifth force to be an interaction that couples to matter with a strength that grows with the number of atoms. In addition to competing with the strength of gravity a fifth force can give rise to violations of the equivalence principle. Current long range constraints on the strength and range of fifth forces are very impressive. Amongst possible fifth forces are those that couple to lepton flavorful chargesmore » $$L_e-L_{\\mu}$$ or $$L_e-L_{\\tau}$$. They have the property that their range and strength are also constrained by neutrino interactions with matter. In this brief note we review the existing constraints on the allowed parameter space in gauged $$U(1)_{L_e-L_{\\mu}, L_{\\tau}}$$. We find two regions where neutrino oscillation experiments are at the frontier of probing such a new force. In particular, there is an allowed range of parameter space where neutrino matter interactions relevant for long baseline oscillation experiments depend on the depth of the neutrino beam below the surface of the earth.« less
Predicting Instability Timescales in Closely-Packed Planetary Systems
NASA Astrophysics Data System (ADS)
Tamayo, Daniel; Hadden, Samuel; Hussain, Naireen; Silburt, Ari; Gilbertson, Christian; Rein, Hanno; Menou, Kristen
2018-04-01
Many of the multi-planet systems discovered around other stars are maximally packed. This implies that simulations with masses or orbital parameters too far from the actual values will destabilize on short timescales; thus, long-term dynamics allows one to constrain the orbital architectures of many closely packed multi-planet systems. A central challenge in such efforts is the large computational cost of N-body simulations, which preclude a full survey of the high-dimensional parameter space of orbital architectures allowed by observations. I will present our recent successes in training machine learning models capable of reliably predicting orbital stability a million times faster than N-body simulations. By engineering dynamically relevant features that we feed to a gradient-boosted decision tree algorithm (XGBoost), we are able to achieve a precision and recall of 90% on a holdout test set of N-body simulations. This opens a wide discovery space for characterizing new exoplanet discoveries and for elucidating how orbital architectures evolve through time as the next generation of spaceborne exoplanet surveys prepare for launch this year.
Trends in shuttle entry heating from the correction of flight test maneuvers
NASA Technical Reports Server (NTRS)
Hodge, J. K.
1983-01-01
A new technique was developed to systematically expand the aerothermodynamic envelope of the Space Shuttle Protection System (TPS). The technique required transient flight test maneuvers which were performed on the second, fourth, and fifth Shuttle reentries. Kalman filtering and parameter estimation were used for the reduction of embedded thermocouple data to obtain best estimates of aerothermal parameters. Difficulties in reducing the data were overcome or minimized. Thermal parameters were estimated to minimize uncertainties, and heating rate parameters were estimated to correlate with angle of attack, sideslip, deflection angle, and Reynolds number changes. Heating trends from the maneuvers allow for rapid and safe envelope expansion needed for future missions, except for some local areas.
Colliders as a simultaneous probe of supersymmetric dark matter and Terascale cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barenboim, Gabriela; /Valencia U.; Lykken, Joseph D.
2006-08-01
Terascale supersymmetry has the potential to provide a natural explanation of the dominant dark matter component of the standard {Lambda}CDM cosmology. However once we impose the constraints on minimal supersymmetry parameters from current particle physics data, a satisfactory dark matter abundance is no longer prima facie natural. This Neutralino Tuning Problem could be a hint of nonstandard cosmology during and/or after the Terascale era. To quantify this possibility, we introduce an alternative cosmological benchmark based upon a simple model of quintessential inflation. This benchmark has no free parameters, so for a given supersymmetry model it allows an unambiguous prediction ofmore » the dark matter relic density. As a example, we scan over the parameter space of the CMSSM, comparing the neutralino relic density predictions with the bounds from WMAP. We find that the WMAP-allowed regions of the CMSSM are an order of magnitude larger if we use the alternative cosmological benchmark, as opposed to {Lambda}CDM. Initial results from the CERN Large Hadron Collider will distinguish between the two allowed regions.« less
Colliders as a simultaneous probe of supersymmetric dark matter and Terascale cosmology
NASA Astrophysics Data System (ADS)
Barenboim, Gabriela; Lykken, Joseph D.
2006-12-01
Terascale supersymmetry has the potential to provide a natural explanation of the dominant dark matter component of the standard ΛCDM cosmology. However once we impose the constraints on minimal supersymmetry parameters from current particle physics data, a satisfactory dark matter abundance is no longer prima facie natural. This Neutralino Tuning Problem could be a hint of nonstandard cosmology during and/or after the Terascale era. To quantify this possibility, we introduce an alternative cosmological benchmark based upon a simple model of quintessential inflation. This benchmark has no free parameters, so for a given supersymmetry model it allows an unambiguous prediction of the dark matter relic density. As a example, we scan over the parameter space of the CMSSM, comparing the neutralino relic density predictions with the bounds from WMAP. We find that the WMAP allowed regions of the CMSSM are an order of magnitude larger if we use the alternative cosmological benchmark, as opposed to ΛCDM. Initial results from the CERN Large Hadron Collider will distinguish between the two allowed regions.
WEIBEL, TWO-STREAM, FILAMENTATION, OBLIQUE, BELL, BUNEMAN...WHICH ONE GROWS FASTER?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bret, A.
2009-07-10
Many competing linear instabilities are likely to occur in astrophysical settings, and it is important to assess which one grows faster for a given situation. An analytical model including the main beam plasma instabilities is developed. The full three-dimensional dielectric tensor is thus explained for a cold relativistic electron beam passing through a cold plasma, accounting for a guiding magnetic field, a return electronic current, and moving protons. Considering any orientations of the wave vector allows to retrieve the most unstable mode for any parameters set. An unified description of the filamentation (Weibel), two-stream, Buneman, Bell instabilities (and more) ismore » thus provided, allowing for the exact determination of their hierarchy in terms of the system parameters. For relevance to both real situations and PIC simulations, the electron-to-proton mass ratio is treated as a parameter, and numerical calculations are conducted with two different values, namely 1/1836 and 1/100. In the system parameter phase space, the shape of the domains governed by each kind of instability is far from being trivial. For low-density beams, the ultra-magnetized regime tends to be governed by either the two-stream or the Buneman instabilities. For beam densities equaling the plasma one, up to four kinds of modes are likely to play a role, depending of the beam Lorentz factor. In some regions of the system parameters phase space, the dominant mode may vary with the electron-to-proton mass ratio. Application is made to solar flares, intergalactic streams, and relativistic shocks physics.« less
NASA Technical Reports Server (NTRS)
Lux, James P.; Taylor, Gregory H.; Lang, Minh; Stern, Ryan A.
2011-01-01
An FPGA module leverages the previous work from Goddard Space Flight Center (GSFC) relating to NASA s Space Telecommunications Radio System (STRS) project. The STRS SpaceWire FPGA Module is written in the Verilog Register Transfer Level (RTL) language, and it encapsulates an unmodified GSFC core (which is written in VHDL). The module has the necessary inputs/outputs (I/Os) and parameters to integrate seamlessly with the SPARC I/O FPGA Interface module (also developed for the STRS operating environment, OE). Software running on the SPARC processor can access the configuration and status registers within the SpaceWire module. This allows software to control and monitor the SpaceWire functions, but it is also used to give software direct access to what is transmitted and received through the link. SpaceWire data characters can be sent/received through the software interface, as well as through the dedicated interface on the GSFC core. Similarly, SpaceWire time codes can be sent/received through the software interface or through a dedicated interface on the core. This innovation is designed for plug-and-play integration in the STRS OE. The SpaceWire module simplifies the interfaces to the GSFC core, and synchronizes all I/O to a single clock. An interrupt output (with optional masking) identifies time-sensitive events within the module. Test modes were added to allow internal loopback of the SpaceWire link and internal loopback of the client-side data interface.
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse
A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of an observables set. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a ``rectified'' Data-Driven algorithm that is fast and by sampling from the parameters posterior avoids both under- and over-fitting along all the directions of the parameters space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).
The dynamics of blood biochemical parameters in cosmonauts during long-term space flights
NASA Astrophysics Data System (ADS)
Markin, Andrei; Strogonova, Lubov; Balashov, Oleg; Polyakov, Valery; Tigner, Timoty
Most of the previously obtained data on cosmonauts' metabolic state concerned certain stages of the postflight period. In this connection, all conclusions, as to metabolism peculiarities during the space flight, were to a large extent probabilistic. The purpose of this work was study of metabolism characteristics in cosmonauts directly during long-term space flights. In the capillary blood samples taken from a finger, by "Reflotron IV" biochemical analyzer, "Boehringer Mannheim" GmbH, Germany, adapted to weightlessness environments, the activity of GOT, GPT, CK, gamma-GT, total and pancreatic amylase, as well as concentration of hemoglobin, glucose, total bilirubin, uric acid, urea, creatinine, total, HDL- and LDL cholesterol, triglycerides had been determined. HDL/LDL-cholesterol ratio also was computed. The crewmembers of 6 main missions to the "Mir" orbital station, a total of 17 cosmonauts, were examined. Biochemical tests were carryed out 30-60 days before lounch, and in the flights different stages between the 25-th and the 423-rd days of flights. In cosmonauts during space flight had been found tendency to increase, in compare with basal level, GOT, GPT, total amylase activity, glucose and total cholesterol concentration, and tendency to decrease of CK activity, hemoglobin, HDL-cholesterol concentration, and HDL/LDL — cholesterol ratio. Some definite trends in variations of other determined biochemical parameters had not been found. The same trends of mentioned biochemical parameters alterations observed in majority of tested cosmonauts, allows to suppose existence of connection between noted metabolic alterations with influence of space flight conditions upon cosmonaut's body. Variations of other studied blood biochemical parameters depends on, probably, pure individual causes.
Planetary and Space Simulation Facilities PSI at DLR for Astrobiology
NASA Astrophysics Data System (ADS)
Rabbow, E.; Rettberg, P.; Panitz, C.; Reitz, G.
2008-09-01
Ground based experiments, conducted in the controlled planetary and space environment simulation facilities PSI at DLR, are used to investigate astrobiological questions and to complement the corresponding experiments in LEO, for example on free flying satellites or on space exposure platforms on the ISS. In-orbit exposure facilities can only accommodate a limited number of experiments for exposure to space parameters like high vacuum, intense radiation of galactic and solar origin and microgravity, sometimes also technically adapted to simulate extraterrestrial planetary conditions like those on Mars. Ground based experiments in carefully equipped and monitored simulation facilities allow the investigation of the effects of simulated single environmental parameters and selected combinations on a much wider variety of samples. In PSI at DLR, international science consortia performed astrobiological investigations and space experiment preparations, exposing organic compounds and a wide range of microorganisms, reaching from bacterial spores to complex microbial communities, lichens and even animals like tardigrades to simulated planetary or space environment parameters in pursuit of exobiological questions on the resistance to extreme environments and the origin and distribution of life. The Planetary and Space Simulation Facilities PSI of the Institute of Aerospace Medicine at DLR in Köln, Germany, providing high vacuum of controlled residual composition, ionizing radiation of a X-ray tube, polychromatic UV radiation in the range of 170-400 nm, VIS and IR or individual monochromatic UV wavelengths, and temperature regulation from -20°C to +80°C at the sample size individually or in selected combinations in 9 modular facilities of varying sizes are presented with selected experiments performed within.
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
NASA Astrophysics Data System (ADS)
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defendable accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously therefore precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic natures of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
Discriminative Cooperative Networks for Detecting Phase Transitions
NASA Astrophysics Data System (ADS)
Liu, Ye-Hua; van Nieuwenburg, Evert P. L.
2018-04-01
The classification of states of matter and their corresponding phase transitions is a special kind of machine-learning task, where physical data allow for the analysis of new algorithms, which have not been considered in the general computer-science setting so far. Here we introduce an unsupervised machine-learning scheme for detecting phase transitions with a pair of discriminative cooperative networks (DCNs). In this scheme, a guesser network and a learner network cooperate to detect phase transitions from fully unlabeled data. The new scheme is efficient enough for dealing with phase diagrams in two-dimensional parameter spaces, where we can utilize an active contour model—the snake—from computer vision to host the two networks. The snake, with a DCN "brain," moves and learns actively in the parameter space, and locates phase boundaries automatically.
Least-squares sequential parameter and state estimation for large space structures
NASA Technical Reports Server (NTRS)
Thau, F. E.; Eliazov, T.; Montgomery, R. C.
1982-01-01
This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm, with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. The approach combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations. This allows for sequential accumulation of matrices required during the identification process. The technique is used to identify the modal prameters of a flexible beam.
Optimizing cosmological surveys in a crowded market
NASA Astrophysics Data System (ADS)
Bassett, Bruce A.
2005-04-01
Optimizing the major next-generation cosmological surveys (such as SNAP, KAOS, etc.) is a key problem given our ignorance of the physics underlying cosmic acceleration and the plethora of surveys planned. We propose a Bayesian design framework which (1) maximizes the discrimination power of a survey without assuming any underlying dark-energy model, (2) finds the best niche survey geometry given current data and future competing experiments, (3) maximizes the cross section for serendipitous discoveries and (4) can be adapted to answer specific questions (such as “is dark energy dynamical?”). Integrated parameter-space optimization (IPSO) is a design framework that integrates projected parameter errors over an entire dark energy parameter space and then extremizes a figure of merit (such as Shannon entropy gain which we show is stable to off-diagonal covariance matrix perturbations) as a function of survey parameters using analytical, grid or MCMC techniques. We discuss examples where the optimization can be performed analytically. IPSO is thus a general, model-independent and scalable framework that allows us to appropriately use prior information to design the best possible surveys.
Rouiller, Yolande; Solacroup, Thomas; Deparis, Véronique; Barbafieri, Marco; Gleixner, Ralf; Broly, Hervé; Eon-Duval, Alex
2012-06-01
The production bioreactor step of an Fc-Fusion protein manufacturing cell culture process was characterized following Quality by Design principles. Using scientific knowledge derived from the literature and process knowledge gathered during development studies and manufacturing to support clinical trials, potential critical and key process parameters with a possible impact on product quality and process performance, respectively, were determined during a risk assessment exercise. The identified process parameters were evaluated using a design of experiment approach. The regression models generated from the data allowed characterizing the impact of the identified process parameters on quality attributes. The main parameters having an impact on product titer were pH and dissolved oxygen, while those having the highest impact on process- and product-related impurities and variants were pH and culture duration. The models derived from characterization studies were used to define the cell culture process design space. The design space limits were set in such a way as to ensure that the drug substance material would consistently have the desired quality. Copyright © 2012 Elsevier B.V. All rights reserved.
Inverse design of bulk morphologies in block copolymers using particle swarm optimization
NASA Astrophysics Data System (ADS)
Khadilkar, Mihir; Delaney, Kris; Fredrickson, Glenn
Multiblock polymers are a versatile platform for creating a large range of nanostructured materials with novel morphologies and properties. However, achieving desired structures or property combinations is difficult due to a vast design space comprised of parameters including monomer species, block sequence, block molecular weights and dispersity, copolymer architecture, and binary interaction parameters. Navigating through such vast design spaces to achieve an optimal formulation for a target structure or property set requires an efficient global optimization tool wrapped around a forward simulation technique such as self-consistent field theory (SCFT). We report on such an inverse design strategy utilizing particle swarm optimization (PSO) as the global optimizer and SCFT as the forward prediction engine. To avoid metastable states in forward prediction, we utilize pseudo-spectral variable cell SCFT initiated from a library of defect free seeds of known block copolymer morphologies. We demonstrate that our approach allows for robust identification of block copolymers and copolymer alloys that self-assemble into a targeted structure, optimizing parameters such as block fractions, blend fractions, and Flory chi parameters.
Electromyostimulation, circuits and monitoring
NASA Technical Reports Server (NTRS)
Doerr, Donald F.
1994-01-01
One method to determine the benefit of electromyostimulation (EMS) requires an accurate strength assessment of the muscle of interest using a muscle force testing device. Several commercial devices are available. After a pre-EMS muscle assessment, a protocol with accurately controlled stimulation parameters must be applied and monitored. both the actual current and the resultant muscle force must be measured throughout the study. At the conclusion of the study, a reassessment of the muscle strength must be gathered. In our laboratory, electromyostimulation is being studied as a possible countermeasure to the muscle atrophy (degeneration) experienced in space. This muscle loss not only weakens the astronaut, but adversely affects his/her readaptation to 1-g upon return from space. Muscle atrophy is expected to have a more significant effect in long term space flight as anticipated in our space station. Our studies have concentrated on stimulating the four major muscle groups in the leg. These muscles were stimulated sequentially to allow individual muscle force quantification above the knee and ankle. The leg must be restrained in an instrumented brace to allow this measurement and preclude muscle cramping.
Uncertainty relations as Hilbert space geometry
NASA Technical Reports Server (NTRS)
Braunstein, Samuel L.
1994-01-01
Precision measurements involve the accurate determination of parameters through repeated measurements of identically prepared experimental setups. For many parameters there is a 'natural' choice for the quantum observable which is expected to give optimal information; and from this observable one can construct an Heinsenberg uncertainty principle (HUP) bound on the precision attainable for the parameter. However, the classical statistics of multiple sampling directly gives us tools to construct bounds for the precision available for the parameters of interest (even when no obvious natural quantum observable exists, such as for phase, or time); it is found that these direct bounds are more restrictive than those of the HUP. The implication is that the natural quantum observables typically do not encode the optimal information (even for observables such as position, and momentum); we show how this can be understood simply in terms of the Hilbert space geometry. Another striking feature of these bounds to parameter uncertainty is that for a large enough number of repetitions of the measurements all V quantum states are 'minimum uncertainty' states - not just Gaussian wave-packets. Thus, these bounds tell us what precision is achievable as well as merely what is allowed.
Experimental Research Regarding The Motion Capacity Of A Robotic Arm
NASA Astrophysics Data System (ADS)
Dumitru, Violeta Cristina
2015-09-01
This paper refers to the development of necessary experiments which obtained dynamic parameters (force, displacement) for a modular mechanism with multiple vertebrae. This mechanism performs functions of inspection and intervention in small spaces. Mechanical structure allows functional parameters to achieve precise movements to an imposed target. Will be analyzed the dynamic of the mechanisms using simulation instruments DimamicaRobot.tst under TestPoint programming environment and the elasticity of the tension cables. It will be changes on the mechanism so that spatial movement of the robotic arm is optimal.
Program document for Energy Systems Optimization Program 2 (ESOP2). Volume 1: Engineering manual
NASA Technical Reports Server (NTRS)
Hamil, R. G.; Ferden, S. L.
1977-01-01
The Energy Systems Optimization Program, which is used to provide analyses of Modular Integrated Utility Systems (MIUS), is discussed. Modifications to the input format to allow modular inputs in specified blocks of data are described. An optimization feature which enables the program to search automatically for the minimum value of one parameter while varying the value of other parameters is reported. New program option flags for prime mover analyses and solar energy for space heating and domestic hot water are also covered.
Modelling of Cosmic Molecular Masers: Introduction to a Computation Cookbook
NASA Astrophysics Data System (ADS)
Sobolev, Andrej M.; Gray, Malcolm D.
2012-07-01
Numerical modeling of molecular masers is necessary in order to understand their nature and diagnostic capabilities. Model construction requires elaboration of a basic description which allows computation, that is a definition of the parameter space and basic physical relations. Usually, this requires additional thorough studies that can consist of the following stages/parts: relevant molecular spectroscopy and collisional rate coefficients; conditions in and around the masing region (that part of space where population inversion is realized); geometry and size of the masing region (including the question of whether maser spots are discrete clumps or line-of-sight correlations in a much bigger region) and propagation of maser radiation. Output of the maser computer modeling can have the following forms: exploration of parameter space (where do inversions appear in particular maser transitions and their combinations, which parameter values describe a `typical' source, and so on); modeling of individual sources (line flux ratios, spectra, images and their variability); analysis of the pumping mechanism; predictions (new maser transitions, correlations in variability of different maser transitions, and the like). Described schemes (constituents and hierarchy) of the model input and output are based mainly on the experience of the authors and make no claim to be dogmatic.
NASA Astrophysics Data System (ADS)
Yang, Kwei-Chou
2018-01-01
In light of the observed Galactic center gamma-ray excess, we investigate a simplified model, for which the scalar dark matter interacts with quarks through a pseudoscalar mediator. The viable regions of the parameter space, that can also account for the relic density and evade the current searches, are identified, if the low-velocity dark matter annihilates through an s -channel off shell mediator mostly into b ¯b , and/or annihilates directly into two hidden on shell mediators, which subsequently decay into the quark pairs. These two kinds of annihilations are s wave. The projected monojet limit set by the high luminosity LHC sensitivity could constrain the favored parameter space, where the mediator's mass is larger than the dark matter mass by a factor of 2. We show that the projected sensitivity of 15-year Fermi-LAT observations of dwarf spheroidal galaxies can provide a stringent constraint on the most parameter space allowed in this model. If the on shell mediator channel contributes to the dark matter annihilation cross sections over 50%, this model with a lighter mediator can be probed in the projected PICO-500L experiment.
MAPPING GROWTH AND GRAVITY WITH ROBUST REDSHIFT SPACE DISTORTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwan, Juliana; Lewis, Geraint F.; Linder, Eric V.
2012-04-01
Redshift space distortions (RSDs) caused by galaxy peculiar velocities provide a window onto the growth rate of large-scale structure and a method for testing general relativity. We investigate through a comparison of N-body simulations to various extensions of perturbation theory beyond the linear regime, the robustness of cosmological parameter extraction, including the gravitational growth index {gamma}. We find that the Kaiser formula and some perturbation theory approaches bias the growth rate by 1{sigma} or more relative to the fiducial at scales as large as k > 0.07 h Mpc{sup -1}. This bias propagates to estimates of the gravitational growth indexmore » as well as {Omega}{sub m} and the equation-of-state parameter and presents a significant challenge to modeling RSDs. We also determine an accurate fitting function for a combination of line-of-sight damping and higher order angular dependence that allows robust modeling of the redshift space power spectrum to substantially higher k.« less
Correlative live and super-resolution imaging reveals the dynamic structure of replication domains.
Xiang, Wanqing; Roberti, M Julia; Hériché, Jean-Karim; Huet, Sébastien; Alexander, Stephanie; Ellenberg, Jan
2018-06-04
Chromosome organization in higher eukaryotes controls gene expression, DNA replication, and DNA repair. Genome mapping has revealed the functional units of chromatin at the submegabase scale as self-interacting regions called topologically associating domains (TADs) and showed they correspond to replication domains (RDs). A quantitative structural and dynamic description of RD behavior in the nucleus is, however, missing because visualization of dynamic subdiffraction-sized RDs remains challenging. Using fluorescence labeling of RDs combined with correlative live and super-resolution microscopy in situ, we determined biophysical parameters to characterize the internal organization, spacing, and mechanical coupling of RDs. We found that RDs are typically 150 nm in size and contain four co-replicating regions spaced 60 nm apart. Spatially neighboring RDs are spaced 300 nm apart and connected by highly flexible linker regions that couple their motion only <550 nm. Our pipeline allows a robust quantitative characterization of chromosome structure in situ and provides important biophysical parameters to understand general principles of chromatin organization. © 2018 Xiang et al.
Scenarios for gluino coannihilation
Ellis, John; Evans, Jason L.; Luo, Feng; ...
2016-02-11
In this article, we study supersymmetric scenarios in which the gluino is the next-to-lightest supersymmetric particle (NLSP), with a mass sufficiently close to that of the lightest supersymmetric particle (LSP) that gluino coannihilation becomes important. One of these scenarios is the MSSM with soft supersymmetry-breaking squark and slepton masses that are universal at an input GUT renormalization scale, but with non-universal gaugino masses. The other scenario is an extension of the MSSM to include vector-like supermultiplets. In both scenarios, we identify the regions of parameter space where gluino coannihilation is important, and discuss their relations to other regions of parametermore » space where other mechanisms bring the dark matter density into the range allowed by cosmology. In the case of the non-universal MSSM scenario, we find that the allowed range of parameter space is constrained by the requirement of electroweak symmetry breaking, the avoidance of a charged LSP and the measured mass of the Higgs boson, in particular, as well as the appearance of other dark matter (co)annihilation processes. Nevertheless, LSP masses m X ≲ 8TeV with the correct dark matter density are quite possible. In the case of pure gravity mediation with additional vector-like supermultiplets, changes to the anomaly-mediated gluino mass and the threshold effects associated with these states can make the gluino almost degenerate with the LSP, and we find a similar upper bound.« less
NASA Astrophysics Data System (ADS)
Salvatelli, Valentina; Piazza, Federico; Marinoni, Christian
2016-09-01
We use the effective field theory of dark energy (EFT of DE) formalism to constrain dark energy models belonging to the Horndeski class with the recent Planck 2015 CMB data. The space of theories is spanned by a certain number of parameters determining the linear cosmological perturbations, while the expansion history is set to that of a standard ΛCDM model. We always demand that the theories be free of fatal instabilities. Additionally, we consider two optional conditions, namely that scalar and tensor perturbations propagate with subliminal speed. Such criteria severely restrict the allowed parameter space and are thus very effective in shaping the posteriors. As a result, we confirm that no theory performs better than ΛCDM when CMB data alone are analysed. Indeed, the healthy dark energy models considered here are not able to reproduce those phenomenological behaviours of the effective Newton constant and gravitational slip parameters that, according to previous studies, best fit the data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salvatelli, Valentina; Piazza, Federico; Marinoni, Christian, E-mail: Valentina.Salvatelli@cpt.univ-mrs.fr, E-mail: Federico.Piazza@cpt.univ-mrs.fr, E-mail: Christian.Marinoni@cpt.univ-mrs.fr
We use the effective field theory of dark energy (EFT of DE) formalism to constrain dark energy models belonging to the Horndeski class with the recent Planck 2015 CMB data. The space of theories is spanned by a certain number of parameters determining the linear cosmological perturbations, while the expansion history is set to that of a standard ΛCDM model. We always demand that the theories be free of fatal instabilities. Additionally, we consider two optional conditions, namely that scalar and tensor perturbations propagate with subliminal speed. Such criteria severely restrict the allowed parameter space and are thus very effectivemore » in shaping the posteriors. As a result, we confirm that no theory performs better than ΛCDM when CMB data alone are analysed. Indeed, the healthy dark energy models considered here are not able to reproduce those phenomenological behaviours of the effective Newton constant and gravitational slip parameters that, according to previous studies, best fit the data.« less
Structural kinetic modeling of metabolic networks.
Steuer, Ralf; Gross, Thilo; Selbig, Joachim; Blasius, Bernd
2006-08-08
To develop and investigate detailed mathematical models of metabolic processes is one of the primary challenges in systems biology. However, despite considerable advance in the topological analysis of metabolic networks, kinetic modeling is still often severely hampered by inadequate knowledge of the enzyme-kinetic rate laws and their associated parameter values. Here we propose a method that aims to give a quantitative account of the dynamical capabilities of a metabolic system, without requiring any explicit information about the functional form of the rate equations. Our approach is based on constructing a local linear model at each point in parameter space, such that each element of the model is either directly experimentally accessible or amenable to a straightforward biochemical interpretation. This ensemble of local linear models, encompassing all possible explicit kinetic models, then allows for a statistical exploration of the comprehensive parameter space. The method is exemplified on two paradigmatic metabolic systems: the glycolytic pathway of yeast and a realistic-scale representation of the photosynthetic Calvin cycle.
Gariano, John; Neifeld, Mark; Djordjevic, Ivan
2017-01-20
Here, we present the engineering trade studies of a free-space optical communication system operating over a 30 km maritime channel for the months of January and July. The system under study follows the BB84 protocol with the following assumptions: a weak coherent source is used, Eve is performing the intercept resend attack and photon number splitting attack, prior knowledge of Eve's location is known, and Eve is allowed to know a small percentage of the final key. In this system, we examine the effect of changing several parameters in the following areas: the implementation of the BB84 protocol over the public channel, the technology in the receiver, and our assumptions about Eve. For each parameter, we examine how different values impact the secure key rate for a constant brightness. Additionally, we will optimize the brightness of the source for each parameter to study the improvement in the secure key rate.
Wing optimization for space shuttle orbiter vehicles
NASA Technical Reports Server (NTRS)
Surber, T. E.; Bornemann, W. E.; Miller, W. D.
1972-01-01
The results were presented of a parametric study performed to determine the optimum wing geometry for a proposed space shuttle orbiter. The results of the study establish the minimum weight wing for a series of wing-fuselage combinations subject to constraints on aerodynamic heating, wing trailing edge sweep, and wing over-hang. The study consists of a generalized design evaluation which has the flexibility of arbitrarily varying those wing parameters which influence the vehicle system design and its performance. The study is structured to allow inputs of aerodynamic, weight, aerothermal, structural and material data in a general form so that the influence of these parameters on the design optimization process can be isolated and identified. This procedure displays the sensitivity of the system design of variations in wing geometry. The parameters of interest are varied in a prescribed fashion on a selected fuselage and the effect on the total vehicle weight is determined. The primary variables investigated are: wing loading, aspect ratio, leading edge sweep, thickness ratio, and taper ratio.
[Visual and motor functions in schizophrenic patients].
Del Vecchio, S; Gargiulo, P A
1992-12-01
In the present work, visual and motor functions have been explored in 26 chronic schizophrenic patients, and 7 acute schizophrenic patients, compared with 26 normal controls, by means of the Bender-Gestalt Test. Parameters under consideration were: Form distortion, rotation, integration, perseveration, use of space, subtle motricity, score (global parameter), and time employed. As regards distortion and rotation there have been highly significant differences between chronic patients and control group. Among acute patients, it was observed that perseveration was also highly significant. Conversely, integration and use of space did not differ significantly among the three groups involved. The global score, resulting from all the above mentioned parameters showed important differences between both patient groups on the one hand, and control group on the other hand. Taking into account that patients were being administered neuroleptic drugs, it can safely be said, however, that the Bender-Gestalt Test allows to recognize alteration in perceptual closure consistent with a loss of the objective structure of perceived phenomena, in both chronic and acute patients.
SAChES: Scalable Adaptive Chain-Ensemble Sampling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah
We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targed to Bayesian calibration of com- putationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evo- lution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces per-chain sampling burden, enables high-dimensional inversions and the usemore » of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent-errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the poste- rior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.« less
Atmospheric seeing measurements obtained with MISOLFA in the framework of the PICARD Mission
NASA Astrophysics Data System (ADS)
Ikhlef, R.; Corbard, T.; Irbah, A.; Morand, F.; Fodil, M.; Chauvineau, B.; Assus, P.; Renaud, C.; Meftah, M.; Abbaki, S.; Borgnino, J.; Cissé, E. M.; D'Almeida, E.; Hauchecorne, A.; Laclare, F.; Lesueur, P.; Lin, M.; Martin, F.; Poiet, G.; Rouzé, M.; Thuillier, G.; Ziad, A.
2012-09-01
PICARD is a space mission launched in June 2010 to study mainly the geometry of the Sun. The PICARD mission has a ground program consisting mostly in four instruments based at the Calern Observatory (Observatoire de la Côte d’Azur). They allow recording simultaneous solar images and various atmospheric data from ground. The ground instruments consist in the qualification model of the PICARD space instrument (SODISM II: Solar Diameter Imager and Surface Mapper), standard sun-photometers, a pyranometer for estimating a global sky quality index, and MISOLFA a generalized daytime seeing monitor. Indeed, astrometric observations of the Sun using ground-based telescopes need an accurate modeling of optical effects induced by atmospheric turbulence. MISOLFA is founded on the observation of Angle-of-Arrival (AA) fluctuations and allows us to analyze atmospheric turbulence optical effects on measurements performed by SODISM II. It gives estimations of the coherence parameters characterizing wave-fronts degraded by the atmospheric turbulence (Fried parameter r0, size of the isoplanatic patch, the spatial coherence outer scale L0 and atmospheric correlation times). We present in this paper simulations showing how the Fried parameter infered from MISOLFA records can be used to interpret radius measurements extracted from SODISM II images. We show an example of daily and monthly evolution of r0 and present its statistics over 2 years at Calern Observatory with a global mean value of 3.5cm.
NASA Astrophysics Data System (ADS)
Angelescu, Andrei; Moreau, Grégory; Richard, François
2017-07-01
The radion scalar field might be the lightest new particle predicted by extradimensional extensions of the standard model. It could thus lead to the first signatures of new physics at the LHC collider. We perform a complete study of the radion production in association with the Z gauge boson in the custodially protected warped model with a brane-localized Higgs boson addressing the gauge hierarchy problem. Radion-Higgs mixing effects are present. Such a radion production receives possibly resonant contributions from the Kaluza-Klein excitations of the Z boson as well as the extra neutral gauge boson (Z'). All the exchange and mixing effects induced by those heavy bosons are taken into account in the radion coupling and rate calculations. The investigation of the considered radion production at the LHC allows us to be sensitive to some parts of the parameter space but only the ILC program at high luminosity would cover most of the theoretically allowed parameter space via the studied reaction. Complementary tests of the same theoretical parameters can be realized through the high accuracy measurements of the Higgs couplings at the ILC. The generic sensitivity limits on the rates discussed for the LHC and ILC potential reach can be applied to the searches for other (light) exotic scalar bosons.
NASA Astrophysics Data System (ADS)
Bruni, Sara; Rebischung, Paul; Zerbini, Susanna; Altamimi, Zuheir; Errico, Maddalena; Santi, Efisio
2018-04-01
The realization of the international terrestrial reference frame (ITRF) is currently based on the data provided by four space geodetic techniques. The accuracy of the different technique-dependent materializations of the frame physical parameters (origin and scale) varies according to the nature of the relevant observables and to the impact of technique-specific errors. A reliable computation of the ITRF requires combining the different inputs, so that the strengths of each technique can compensate for the weaknesses of the others. This combination, however, can only be performed providing some additional information which allows tying together the independent technique networks. At present, the links used for that purpose are topometric surveys (local/terrestrial ties) available at ITRF sites hosting instruments of different techniques. In principle, a possible alternative could be offered by spacecrafts accommodating the positioning payloads of multiple geodetic techniques realizing their co-location in orbit (space ties). In this paper, the GNSS-SLR space ties on-board GPS and GLONASS satellites are thoroughly examined in the framework of global reference frame computations. The investigation focuses on the quality of the realized physical frame parameters. According to the achieved results, the space ties on-board GNSS satellites cannot, at present, substitute terrestrial ties in the computation of the ITRF. The study is completed by a series of synthetic simulations investigating the impact that substantial improvements in the volume and quality of SLR observations to GNSS satellites would have on the precision of the GNSS frame parameters.
Probing Higgs-radion mixing in warped models through complementary searches at the LHC and the ILC
NASA Astrophysics Data System (ADS)
Frank, Mariana; Huitu, Katri; Maitra, Ushoshi; Patra, Monalisa
2016-09-01
We consider the Higgs-radion mixing in the context of warped space extradimensional models with custodial symmetry and investigate the prospects of detecting the mixed radion. Custodial symmetries allow the Kaluza-Klein excitations to be lighter and protect Z b b ¯ to be in agreement with experimental constraints. We perform a complementary study of discovery reaches of the Higgs-radion mixed state at the 13 and 14 TeV LHC and at the 500 and 1000 GeV International Linear Collider (ILC). We carry out a comprehensive analysis of the most significant production and decay modes of the mixed radion in the 80 GeV-1 TeV mass range and indicate the parameter space that can be probed at the LHC and the ILC. There exists a region of the parameter space which can be probed, at the LHC, through the diphoton channel even for a relatively low luminosity of 50 fb-1 . The reach of the four-lepton final state in probing the parameter space is also studied in the context of 14 TeV LHC, for a luminosity of 1000 fb-1 . At the ILC, with an integrated luminosity of 500 fb-1 , we analyze the Z -radion associated production and the W W fusion production, followed by the radion decay into b b ¯ and W+W-. The W W fusion production is favored over the Z -radion associated channel in probing regions of the parameter space beyond the LHC reach. The complementary study at the LHC and the ILC is useful both for the discovery of the radion and the understanding of its mixing sector.
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.
2013-01-01
Habitat suitability maps are commonly created by modeling a species’ environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models using Bezier surfaces to model a species niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk (Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space and used modified area under the curve (AUC) statistic and akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values, except for Maxent and better AIC values overall. HEMI created a model with only ten parameters while Maxent produced a model with over 100 and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.
Development and Evaluation of Titanium Spacesuit Bearings
NASA Technical Reports Server (NTRS)
Rhodes, Richard; Battisti, Brian; Ytuarte, Raymond, Jr.; Schultz, Bradley
2016-01-01
The Z-2 Prototype Planetary Extravehicular Space Suit Assembly is a continuation of NASA's Z-series of spacesuits, designed with the intent of meeting a wide variety of exploration mission objectives, including human exploration of the Martian surface. Incorporating titanium bearings into the Z-series space suit architecture allows us to reduce mass by an estimated 23 lbs per suit system compared to the previously used stainless steel bearing race designs, without compromising suit functionality. There are two obstacles to overcome when using titanium for a bearing race- 1) titanium is flammable when exposed to the oxygen wetted environment inside the space suit and 2) titanium's poor wear properties are often challenging to overcome in tribology applications. In order to evaluate the ignitability of a titanium space suit bearing, a series of tests were conducted at White Sands Test Facility (WSTF) that introduced the bearings to an extreme test profile, with multiple failures imbedded into the test bearings. The testing showed no signs of ignition in the most extreme test cases; however, substantial wear of the bearing races was observed. In order to design a bearing that can last an entire exploration mission (approx. 3 years), design parameters for maximum contact stress need to be identified. To identify these design parameters, bearing test rigs were developed that allow for the quick evaluation of various bearing ball loads, ball diameters, lubricants, and surface treatments. This test data will allow designers to minimize the titanium bearing mass for a specific material and lubricant combination and design around a cycle life requirement for an exploration mission. This paper reviews the current research and testing that has been performed on titanium bearing races to evaluate the use of such materials in an enriched oxygen environment and to optimize the bearing assembly mass and tribological properties to accommodate for the high bearing cycle life for an exploration mission.
Preliminary calculation of solar cosmic ray dose to the female breast in space mission
NASA Technical Reports Server (NTRS)
Shavers, Mark; Poston, John W.; Atwell, William; Hardy, Alva C.; Wilson, John W.
1991-01-01
No regulatory dose limits are specifically assigned for the radiation exposure of female breasts during manned space flight. However, the relatively high radiosensitivity of the glandular tissue of the breasts and its potential exposure to solar flare protons on short- and long-term missions mandate a priori estimation of the associated risks. A model for estimating exposure within the breast is developed for use in future NASA missions. The female breast and torso geometry is represented by a simple interim model. A recently developed proton dose-buildup procedure is used for estimating doses. The model considers geomagnetic shielding, magnetic-storm conditions, spacecraft shielding, and body self-shielding. Inputs to the model include proton energy spectra, spacecraft orbital parameters, STS orbiter-shielding distribution at a given position, and a single parameter allowing for variation in breast size.
Streefland, M; Van Herpen, P F G; Van de Waterbeemd, B; Van der Pol, L A; Beuvery, E C; Tramper, J; Martens, D E; Toft, M
2009-10-15
A licensed pharmaceutical process is required to be executed within the validated ranges throughout the lifetime of product manufacturing. Changes to the process, especially for processes involving biological products, usually require the manufacturer to demonstrate that the safety and efficacy of the product remains unchanged by new or additional clinical testing. Recent changes in the regulations for pharmaceutical processing allow broader ranges of process settings to be submitted for regulatory approval, the so-called process design space, which means that a manufacturer can optimize his process within the submitted ranges after the product has entered the market, which allows flexible processes. In this article, the applicability of this concept of the process design space is investigated for the cultivation process step for a vaccine against whooping cough disease. An experimental design (DoE) is applied to investigate the ranges of critical process parameters that still result in a product that meets specifications. The on-line process data, including near infrared spectroscopy, are used to build a descriptive model of the processes used in the experimental design. Finally, the data of all processes are integrated in a multivariate batch monitoring model that represents the investigated process design space. This article demonstrates how the general principles of PAT and process design space can be applied for an undefined biological product such as a whole cell vaccine. The approach chosen for model development described here, allows on line monitoring and control of cultivation batches in order to assure in real time that a process is running within the process design space.
Estimation and Application of Ecological Memory Functions in Time and Space
NASA Astrophysics Data System (ADS)
Itter, M.; Finley, A. O.; Dawson, A.
2017-12-01
A common goal in quantitative ecology is the estimation or prediction of ecological processes as a function of explanatory variables (or covariates). Frequently, the ecological process of interest and associated covariates vary in time, space, or both. Theory indicates many ecological processes exhibit memory to local, past conditions. Despite such theoretical understanding, few methods exist to integrate observations from the recent past or within a local neighborhood as drivers of these processes. We build upon recent methodological advances in ecology and spatial statistics to develop a Bayesian hierarchical framework to estimate so-called ecological memory functions; that is, weight-generating functions that specify the relative importance of local, past covariate observations to ecological processes. Memory functions are estimated using a set of basis functions in time and/or space, allowing for flexible ecological memory based on a reduced set of parameters. Ecological memory functions are entirely data driven under the Bayesian hierarchical framework—no a priori assumptions are made regarding functional forms. Memory function uncertainty follows directly from posterior distributions for model parameters allowing for tractable propagation of error to predictions of ecological processes. We apply the model framework to simulated spatio-temporal datasets generated using memory functions of varying complexity. The framework is also applied to estimate the ecological memory of annual boreal forest growth to local, past water availability. Consistent with ecological understanding of boreal forest growth dynamics, memory to past water availability peaks in the year previous to growth and slowly decays to zero in five to eight years. The Bayesian hierarchical framework has applicability to a broad range of ecosystems and processes allowing for increased understanding of ecosystem responses to local and past conditions and improved prediction of ecological processes.
On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Rincon, Rafael; Liao, Liang
2003-01-01
Although most parameterizations of the drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5(sup 0)x 5(sup 0) x 1 month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been used to acquire data over a 2 km path. These data should help quantify the temporal form of the covariance matrix at this site.
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using ``snapshots'' from the parameter reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
From atoms to layers: in situ gold cluster growth kinetics during sputter deposition
NASA Astrophysics Data System (ADS)
Schwartzkopf, Matthias; Buffet, Adeline; Körstgens, Volker; Metwalli, Ezzeldin; Schlage, Kai; Benecke, Gunthard; Perlich, Jan; Rawolle, Monika; Rothkirch, André; Heidmann, Berit; Herzog, Gerd; Müller-Buschbaum, Peter; Röhlsberger, Ralf; Gehrke, Rainer; Stribeck, Norbert; Roth, Stephan V.
2013-05-01
The adjustment of size-dependent catalytic, electrical and optical properties of gold cluster assemblies is a very significant issue in modern applied nanotechnology. We present a real-time investigation of the growth kinetics of gold nanostructures from small nuclei to a complete gold layer during magnetron sputter deposition with high time resolution by means of in situ microbeam grazing incidence small-angle X-ray scattering (μGISAXS). We specify the four-stage growth including their thresholds with sub-monolayer resolution and identify phase transitions monitored in Yoneda intensity as a material-specific characteristic. An innovative and flexible geometrical model enables the extraction of morphological real space parameters, such as cluster size and shape, correlation distance, layer porosity and surface coverage, directly from reciprocal space scattering data. This approach enables a large variety of future investigations of the influence of different process parameters on the thin metal film morphology. Furthermore, our study allows for deducing the wetting behavior of gold cluster films on solid substrates and provides a better understanding of the growth kinetics in general, which is essential for optimization of manufacturing parameters, saving energy and resources.The adjustment of size-dependent catalytic, electrical and optical properties of gold cluster assemblies is a very significant issue in modern applied nanotechnology. We present a real-time investigation of the growth kinetics of gold nanostructures from small nuclei to a complete gold layer during magnetron sputter deposition with high time resolution by means of in situ microbeam grazing incidence small-angle X-ray scattering (μGISAXS). We specify the four-stage growth including their thresholds with sub-monolayer resolution and identify phase transitions monitored in Yoneda intensity as a material-specific characteristic. An innovative and flexible geometrical model enables the extraction of morphological real space parameters, such as cluster size and shape, correlation distance, layer porosity and surface coverage, directly from reciprocal space scattering data. This approach enables a large variety of future investigations of the influence of different process parameters on the thin metal film morphology. Furthermore, our study allows for deducing the wetting behavior of gold cluster films on solid substrates and provides a better understanding of the growth kinetics in general, which is essential for optimization of manufacturing parameters, saving energy and resources. Electronic supplementary information (ESI) available: The full GISAXS image sequence of the experiment, the model-based IsGISAXS-simulation sequence as movie files for comparison and detailed information about sample cleaning, XRR, FESEM, IsGISAXS, comparison μGIWAXS/μGISAXS, and sampling statistics. See DOI: 10.1039/c3nr34216f
NASA Technical Reports Server (NTRS)
Ley, Kenneth D.
1987-01-01
It is argued that the bioreactor being developed at NASA will allow researchers to determine the optimal conditions (e.g., pH, O sub 2, CO sub 2, nutrients) for growth of hybridoma cells, and to determine whether cell growth and antibody production are enhanced in the microgravity of space.
Optimizing RF gun cavity geometry within an automated injector design system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alicia Hofler ,Pavel Evtushenko
2011-03-28
RF guns play an integral role in the success of several light sources around the world, and properly designed and optimized cw superconducting RF (SRF) guns can provide a path to higher average brightness. As the need for these guns grows, it is important to have automated optimization software tools that vary the geometry of the gun cavity as part of the injector design process. This will allow designers to improve existing designs for present installations, extend the utility of these guns to other applications, and develop new designs. An evolutionary algorithm (EA) based system can provide this capability becausemore » EAs can search in parallel a large parameter space (often non-linear) and in a relatively short time identify promising regions of the space for more careful consideration. The injector designer can then evaluate more cavity design parameters during the injector optimization process against the beam performance requirements of the injector. This paper will describe an extension to the APISA software that allows the cavity geometry to be modified as part of the injector optimization and provide examples of its application to existing RF and SRF gun designs.« less
First principles statistical mechanics of alloys and magnetism
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Khan, Suffian N.; Li, Ying Wai
Modern high performance computing resources are enabling the exploration of the statistical physics of phase spaces with increasing size and higher fidelity of the Hamiltonian of the systems. For selected systems, this now allows the combination of Density Functional based first principles calculations with classical Monte Carlo methods for parameter free, predictive thermodynamics of materials. We combine our locally selfconsistent real space multiple scattering method for solving the Kohn-Sham equation with Wang-Landau Monte-Carlo calculations (WL-LSMS). In the past we have applied this method to the calculation of Curie temperatures in magnetic materials. Here we will present direct calculations of the chemical order - disorder transitions in alloys. We present our calculated transition temperature for the chemical ordering in CuZn and the temperature dependence of the short-range order parameter and specific heat. Finally we will present the extension of the WL-LSMS method to magnetic alloys, thus allowing the investigation of the interplay of magnetism, structure and chemical order in ferrous alloys. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
Chaos and crises in a model for cooperative hunting: a symbolic dynamics approach.
Duarte, Jorge; Januário, Cristina; Martins, Nuno; Sardanyés, Josep
2009-12-01
In this work we investigate the population dynamics of cooperative hunting extending the McCann and Yodzis model for a three-species food chain system with a predator, a prey, and a resource species. The new model considers that a given fraction sigma of predators cooperates in prey's hunting, while the rest of the population 1-sigma hunts without cooperation. We use the theory of symbolic dynamics to study the topological entropy and the parameter space ordering of the kneading sequences associated with one-dimensional maps that reproduce significant aspects of the dynamics of the species under several degrees of cooperative hunting. Our model also allows us to investigate the so-called deterministic extinction via chaotic crisis and transient chaos in the framework of cooperative hunting. The symbolic sequences allow us to identify a critical boundary in the parameter spaces (K,C(0)) and (K,sigma) which separates two scenarios: (i) all-species coexistence and (ii) predator's extinction via chaotic crisis. We show that the crisis value of the carrying capacity K(c) decreases at increasing sigma, indicating that predator's populations with high degree of cooperative hunting are more sensitive to the chaotic crises. We also show that the control method of Dhamala and Lai [Phys. Rev. E 59, 1646 (1999)] can sustain the chaotic behavior after the crisis for systems with cooperative hunting. We finally analyze and quantify the inner structure of the target regions obtained with this control method for wider parameter values beyond the crisis, showing a power law dependence of the extinction transients on such critical parameters.
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
WMAP7 constraints on oscillations in the primordial power spectrum
NASA Astrophysics Data System (ADS)
Meerburg, P. Daniel; Wijers, Ralph A. M. J.; van der Schaar, Jan Pieter
2012-03-01
We use the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) data to place constraints on oscillations supplementing an almost scale-invariant primordial power spectrum. Such oscillations are predicted by a variety of models, some of which amount to assuming that there is some non-trivial choice of the vacuum state at the onset of inflation. In this paper, we will explore data-driven constraints on two distinct models of initial state modifications. In both models, the frequency, phase and amplitude are degrees of freedom of the theory for which the theoretical bounds are rather weak: both the amplitude and frequency have allowed values ranging over several orders of magnitude. This requires many computationally expensive evaluations of the model cosmic microwave background (CMB) spectra and their goodness of fit, even in a Markov chain Monte Carlo (MCMC), normally the most efficient fitting method for such a problem. To search more efficiently, we first run a densely-spaced grid, with only three varying parameters: the frequency, the amplitude and the baryon density. We obtain the optimal frequency and run an MCMC at the best-fitting frequency, randomly varying all other relevant parameters. To reduce the computational time of each power spectrum computation, we adjust both comoving momentum integration and spline interpolation (in l) as a function of frequency and amplitude of the primordial power spectrum. Applying this to the WMAP7 data allows us to improve existing constraints on the presence of oscillations. We confirm earlier findings that certain frequencies can improve the fitting over a model without oscillations. For those frequencies we compute the posterior probability, allowing us to put some constraints on the primordial parameter space of both models.
1968-12-17
Apollo 8 crew members paused before the mission simulator during training for the first manned lunar orbital mission. Frank Borman, commander; James Lovell, Command Module (CM) pilot; and William Anders, Lunar Module (LM) pilot , were also the first humans to launch aboard the massive Saturn V space vehicle. Lift off occurred on December 21, 1968 and returned safely to Earth on December 27, 1968. The mission achieved operational experience and tested the Apollo command module systems, including communications, tracking, and life-support, in cis-lunar space and lunar orbit, and allowed evaluation of crew performance on a lunar orbiting mission. The crew photographed the lunar surface, both far side and near side, obtaining information on topography and landmarks as well as other scientific information necessary for future Apollo landings. All systems operated within allowable parameters and all objectives of the mission were achieved.
Theoretical study of the hyperfine parameters of OH
NASA Technical Reports Server (NTRS)
Chong, Delano P.; Langhoff, Stephen R.; Bauschlicher, Charles W., Jr.
1991-01-01
In the present study of the hyperfine parameters of O-17H as a function of the one- and n-particle spaces, all of the parameters except oxygen's spin density, b sub F(O), are sufficiently easily tractable to allow concentration on the computational requirements for accurate determination of b sub F(O). Full configuration-interaction (FCI) calculations in six Gaussian basis sets yield unambiguous results for (1) the effect of uncontracting the O s and p basis sets; (2) that of adding diffuse s and p functions; and (3) that of adding polarization functions to O. The size-extensive modified coupled-pair functional method yields b sub F values which are in fair agreement with FCI results.
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that will also preserve the corresponding physics being simulated in the model. In order to effectively simulate real world processes the models output data must be close to the observed measurements. To achieve this optimal simulation, input parameters are tuned until we have minimized the objective function, which is the error between the simulation model outputs and the observed measurements. We developed an auxiliary package, which serves as a python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, as well as sensitivity analysis while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows the users to combine analysis techniques, for example finding an approximate equilibrium with optimization then immediately explore the space around it. We used the interface to calibrate input parameters for the heat flow model, which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients input parameters. Results of parameter space explorations indicate that the objective function not always has a unique minimal value. We found that gradient-based optimization works the best for the objective functions with one minimum. Otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh based convergence in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters within 2% accuracy of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox could be used to perform analysis and optimization on a `black box' scientific model more efficiently than using just Dakota.
Laboratory development and testing of spacecraft diagnostics
NASA Astrophysics Data System (ADS)
Amatucci, William; Tejero, Erik; Blackwell, Dave; Walker, Dave; Gatling, George; Enloe, Lon; Gillman, Eric
2017-10-01
The Naval Research Laboratory's Space Chamber experiment is a large-scale laboratory device dedicated to the creation of large-volume plasmas with parameters scaled to realistic space plasmas. Such devices make valuable contributions to the investigation of space plasma phenomena under controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. However, in addition to investigations such as plasma wave and instability studies, such devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this talk, we will describe how the laboratory simulation of space plasmas made this development path possible. Work sponsored by the US Naval Research Laboratory Base Program.
Coupled Boltzmann computation of mixed axion neutralino dark matter in the SUSY DFSZ axion model
NASA Astrophysics Data System (ADS)
Bae, Kyu Jung; Baer, Howard; Lessa, Andre; Serce, Hasan
2014-10-01
The supersymmetrized DFSZ axion model is highly motivated not only because it offers solutions to both the gauge hierarchy and strong CP problems, but also because it provides a solution to the SUSY μ-problem which naturally allows for a Little Hierarchy. We compute the expected mixed axion-neutralino dark matter abundance for the SUSY DFSZ axion model in two benchmark cases—a natural SUSY model with a standard neutralino underabundance (SUA) and an mSUGRA/CMSSM model with a standard overabundance (SOA). Our computation implements coupled Boltzmann equations which track the radiation density along with neutralino, axion, axion CO (produced via coherent oscillations), saxion, saxion CO, axino and gravitino densities. In the SUSY DFSZ model, axions, axinos and saxions go through the process of freeze-in—in contrast to freeze-out or out-of-equilibrium production as in the SUSY KSVZ model—resulting in thermal yields which are largely independent of the re-heat temperature. We find the SUA case with suppressed saxion-axion couplings (ξ=0) only admits solutions for PQ breaking scale falesssim 6× 1012 GeV where the bulk of parameter space tends to be axion-dominated. For SUA with allowed saxion-axion couplings (ξ =1), then fa values up to ~ 1014 GeV are allowed. For the SOA case, almost all of SUSY DFSZ parameter space is disallowed by a combination of overproduction of dark matter, overproduction of dark radiation or violation of BBN constraints. An exception occurs at very large fa~ 1015-1016 GeV where large entropy dilution from CO-produced saxions leads to allowed models.
Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.
Palkowski, Marek; Bielecki, Wlodzimierz
2018-01-15
RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithms is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques are within the iteration space slicing framework - the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem at generating parallel tiled code is defining a proper tile size and tile dimension which impact parallelism degree and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (parameters are variables defining tile size). With this purpose, we first generate two nonparametric tiled codes with different fixed tile sizes but with the same code structure and then derive a general affine model, which describes all integer factors available in expressions of those codes. Using this model and known integer factors present in the mentioned expressions (they define the left-hand side of the model), we find unknown integers in this model for each integer factor available in the same fixed tiled code position and replace in this code expressions, including integer factors, with those including parameters. Then we use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover in a given search space the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results, received on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of RNA strands is bigger than 2500.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, Clay A.; Glass, Robert J.; Tyler, Scott W.
OAK - B135 We apply high resolution, full field light transmission techniques to study the onset and development of convection in simulated porous media (Hele-Shaw cells) and fractures. The light transmission technique allows quantitative measurement of the solute concentration fields in time thus allowing direct measurements of the mass flux of components. Experiments are first designed to test theoretical stability relations as a function of the solute concentrations, solute diffusivities and the medium's permeability. Structural evolution and convective transport as a function of dimensionless control parameters is then determined across the full range of parameter space. We also consider themore » application of lattice gas automata techniques to numerically model the onset and development of convection. (Gary Drew notified on 3/25/03 of copyrighted Material)« less
Non-standard interactions and neutrinos from dark matter annihilation in the Sun
NASA Astrophysics Data System (ADS)
Demidov, S. V.
2018-02-01
We perform an analysis of the influence of non-standard neutrino interactions (NSI) on neutrino signal from dark matter annihilations in the Sun. Taking experimentally allowed benchmark values for the matter NSI parameters we show that the evolution of such neutrinos with energies at GeV scale can be considerably modified. We simulate propagation of neutrinos from the Sun to the Earth for realistic dark matter annihilation channels and find that the matter NSI can result in at most 30% correction to the signal rate of muon track events at neutrino telescopes. Still present experimental bounds on dark matter from these searches are robust in the presence of NSI within considerable part of their allowed parameter space. At the same time electron neutrino flux from dark matter annihilation in the Sun can be changed by a factor of few.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parzen, George
It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6 x 6 matrix, R. R will be called the decoupling matrix. It will be shown that of the 36 elements of the 6 x 6 decoupling matrix R, only 12 elements are independent. This may be contrasted with the results for motion in 4- dimensional phase space, wheremore » R has 4 independent elements. A set of equations is given from which the 12 elements of R can be computed from the one period transfer matrix. This set of equations also allows the linear parameters, the β i,α i, i = 1, 3, for the uncoupled coordinates, to be computed from the one period transfer matrix. An alternative procedure for computing the linear parameters,β i,α i, i = 1, 3, and the 12 independent elements of the decoupling matrix R is also given which depends on computing the eigenvectors of the one period transfer matrix. These results can be used in a tracking program, where the one period transfer matrix can be computed by multiplying the transfer matrices of all the elements in a period, to compute the linear parameters α i and β i, i = 1, 3, and the elements of the decoupling matrix R. The procedure presented here for studying coupled motion in 6-dimensional phase space can also be applied to coupled motion in 4-dimensional phase space, where it may be a useful alternative procedure to the procedure presented by Edwards and Teng. In particular, it gives a simpler programing procedure for computing the beta functions and the emittances for coupled motion in 4-dimensional phase space.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parzen, G.
It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6 {times} 6 matrix, R. R will be called the decoupling matrix. It will be shown that of the 36 elements of the 6 {times} 6 decoupling matrix R, only 12 elements are independent. This may be contrasted with the results for motion in 4-dimensional phase space, where Rmore » has 4 independent elements. A set of equations is given from which the 12 elements of R can be computed from the one period transfer matrix. This set of equations also allows the linear parameters, {beta}{sub i}, {alpha}{sub i} = 1, 3, for the uncoupled coordinates, to be computed from the one period transfer matrix. An alternative procedure for computing the linear parameters, the {beta}{sub i}, {alpha}{sub i} i = 1, 3, and the 12 independent elements of the decoupling matrix R is also given which depends on computing the eigenvectors of the one period transfer matrix. These results can be used in a tracking program, where the one period transfer matrix can be computed by multiplying the transfer matrices of all the elements in a period, to compute the linear parameters {alpha}{sub i} and {beta}{sub i}, i = 1, 3, and the elements of the decoupling matrix R. The procedure presented here for studying coupled motion in 6-dimensional phase space can also be applied to coupled motion in 4-dimensional phase space, where it may be a useful alternative procedure to the procedure presented by Edwards and Teng. In particular, it gives a simpler programming procedure for computing the beta functions and the emittances for coupled motion in 4-dimensional phase space.« less
Model-based high-throughput design of ion exchange protein chromatography.
Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo
2016-08-12
This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine operating parameter space that allows for satisfactory purification of the protein of interest on the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding any unnecessary experimental work. We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, in order to demonstrate it has the ability to strongly accelerate the early phases of process development. Copyright © 2016 Elsevier B.V. All rights reserved.
Probing Majorana neutrino textures at DUNE
NASA Astrophysics Data System (ADS)
Bora, Kalpana; Borah, Debasish; Dutta, Debajyoti
2017-10-01
We study the possibility of probing different texture zero neutrino mass matrices at the long baseline neutrino experiment DUNE, particularly focusing on its sensitivity to the octant of atmospheric mixing angle θ23 and leptonic Dirac C P phase δcp. Assuming a diagonal charged lepton basis and Majorana nature of light neutrinos, we first classify the possible light neutrino mass matrices with one and two texture zeros and then numerically evaluate the parameter space which satisfies the texture zero conditions. Apart from using the latest global fit 3 σ values of neutrino oscillation parameters, we also use the latest bound on the sum of absolute neutrino masses (∑i |mi|) from the Planck mission data and the updated bound on effective neutrino mass Me e from neutrinoless double beta decay (0 ν β β ) experiments to find the allowed Majorana texture zero mass matrices. For the allowed texture zero mass matrices from all these constraints, we then feed the corresponding light neutrino parameter values satisfying the texture zero conditions into the numerical analysis in order to study the capability of DUNE to allow or exclude them once it starts taking data. We find that DUNE will be able to exclude some of these texture zero mass matrices which restrict (θ23-δcp) to a very specific range of values, depending on the values of the parameters that nature has chosen.
Dynamical thresholding of pancake models: a promising variant of the HDM picture
NASA Astrophysics Data System (ADS)
Buchert, Thomas
Variants of pancake models are considered which allow for the construction of a phenomenological link to the galaxy formation process. A control parameter space is introduced which defines different scenarios of galaxy formation. The sensibility of statistical measures of the small-scale structure with respect to this parameter freedom is demonstrated. This property of the galaxy formation model, together with the consequences of enlarging the box size of the simulation to a `fair sample scale', form the basis of arguments to support the possible revival of the standard `Hot-Dark-Matter' model.
Farina, Marco; Pappadopulo, Duccio; Rompineve, Fabrizio; ...
2017-01-23
Here, we propose a framework in which the QCD axion has an exponentially large coupling to photons, relying on the “clockwork” mechanism. We discuss the impact of present and future axion experiments on the parameter space of the model. In addition to the axion, the model predicts a large number of pseudoscalars which can be light and observable at the LHC. In the most favorable scenario, axion Dark Matter will give a signal in multiple axion detection experiments and the pseudo-scalars will be discovered at the LHC, allowing us to determine most of the parameters of the model.
Neutrino oscillation parameter sampling with MonteCUBES
NASA Astrophysics Data System (ADS)
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling. Program summaryProgram title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator) Catalogue identifier: AEFJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence No. of lines in distributed program, including test data, etc.: 69 634 No. of bytes in distributed program, including test data, etc.: 3 980 776 Distribution format: tar.gz Programming language: C Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed Operating system: 32 bit and 64 bit Linux RAM: Typically a few MBs Classification: 11.1 External routines: GLoBES [1,2] and routines/libraries used by GLoBES Subprograms used:Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439 Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those used in GLoBES [1,2]. Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software makes sure that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES. Additional comments: A Matlab GUI for interpretation of results is included in the distribution. Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours. References:P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333. P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187. S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
Scoping the parameter space for demo and the engineering test facility (ETF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, Wayne R.
1999-01-19
In our IFE development plan, we have set a goal of building an Engineering Test Facility (ETF) for a total cost of $2B and a Demo for $3B. In Mike Campbell' s presentation at Madison, we included a viewgraph with an example Demo that had 80 to 250 MWe of net power and showed a plausible argument that it could cost less than $3B. In this memo, I examine the design space for the Demo and then briefly for the ETF. Instead of attempting to estimate the costs of the drivers, I pose the question in a way to definemore » R&D goals: As a function of key design and performance parameters, how much can the driver cost if the total facility cost is limited to the specified goal? The design parameters examined for the Demo included target gain, driver energy, driver efficiency, and net power output. For the ETF; the design parameters are target gain, driver energy, and target yield. The resulting graphs of allowable driver cost determine the goals that the driver R&D programs must seek to meet.« less
Non-blind acoustic invisibility by dual layers of homogeneous single-negative media
NASA Astrophysics Data System (ADS)
Gao, He; Zhu, Yi-Fan; Fan, Xu-Dong; Liang, Bin; Yang, Jing; Cheng, Jian-Chun
2017-02-01
Non-blind invisibility cloaks allowing the concealed object to sense the outside world have great application potentials such as in high-precision sensing or underwater camouflage. However the existing designs based on coordinate transformation techniques need complicated spatially-varying negative index or intricate multi-layered configurations, substantially increasing the difficulty in practical realization. Here we report on the non-blind acoustic invisibility for a circular object in free space with simple distribution of cloak parameters. The mechanism is that, instead of utilizing the transformation acoustics technique, we develop the analytical formulae for fast prediction of the scattering from the object and then use an evolutionary optimization to retrieve the desired cloak parameters for minimizing the scattered field. In this way, it is proven possible to break through the fundamental limit of complementary condition that must be satisfied by the effective parameters of the components in transformation acoustics-based cloaks. Numerical results show that the resulting cloak produces a non-bflind invisibility as perfect as in previous designs, but only needs two layers with homogenous single-negative parameters. With full simplification in parameter distribution and broken symmetry in complementary relationship, our scheme opens new route to free-space non-blind invisibility, taking a significant step towards real-world application of cloaking devices.
Non-blind acoustic invisibility by dual layers of homogeneous single-negative media
Gao, He; Zhu, Yi-fan; Fan, Xu-dong; Liang, Bin; Yang, Jing; Cheng, Jian-Chun
2017-01-01
Non-blind invisibility cloaks allowing the concealed object to sense the outside world have great application potentials such as in high-precision sensing or underwater camouflage. However the existing designs based on coordinate transformation techniques need complicated spatially-varying negative index or intricate multi-layered configurations, substantially increasing the difficulty in practical realization. Here we report on the non-blind acoustic invisibility for a circular object in free space with simple distribution of cloak parameters. The mechanism is that, instead of utilizing the transformation acoustics technique, we develop the analytical formulae for fast prediction of the scattering from the object and then use an evolutionary optimization to retrieve the desired cloak parameters for minimizing the scattered field. In this way, it is proven possible to break through the fundamental limit of complementary condition that must be satisfied by the effective parameters of the components in transformation acoustics-based cloaks. Numerical results show that the resulting cloak produces a non-bflind invisibility as perfect as in previous designs, but only needs two layers with homogenous single-negative parameters. With full simplification in parameter distribution and broken symmetry in complementary relationship, our scheme opens new route to free-space non-blind invisibility, taking a significant step towards real-world application of cloaking devices. PMID:28195227
An annotation system for 3D fluid flow visualization
NASA Technical Reports Server (NTRS)
Loughlin, Maria M.; Hughes, John F.
1995-01-01
Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.
A Bayesian state-space formulation of dynamic occupancy models
Royle, J. Andrew; Kery, M.
2007-01-01
Species occurrence and its dynamic components, extinction and colonization probabilities, are focal quantities in biogeography and metapopulation biology, and for species conservation assessments. It has been increasingly appreciated that these parameters must be estimated separately from detection probability to avoid the biases induced by nondetection error. Hence, there is now considerable theoretical and practical interest in dynamic occupancy models that contain explicit representations of metapopulation dynamics such as extinction, colonization, and turnover as well as growth rates. We describe a hierarchical parameterization of these models that is analogous to the state-space formulation of models in time series, where the model is represented by two components, one for the partially observable occupancy process and another for the observations conditional on that process. This parameterization naturally allows estimation of all parameters of the conventional approach to occupancy models, but in addition, yields great flexibility and extensibility, e.g., to modeling heterogeneity or latent structure in model parameters. We also highlight the important distinction between population and finite sample inference; the latter yields much more precise estimates for the particular sample at hand. Finite sample estimates can easily be obtained using the state-space representation of the model but are difficult to obtain under the conventional approach of likelihood-based estimation. We use R and Win BUGS to apply the model to two examples. In a standard analysis for the European Crossbill in a large Swiss monitoring program, we fit a model with year-specific parameters. Estimates of the dynamic parameters varied greatly among years, highlighting the irruptive population dynamics of that species. In the second example, we analyze route occupancy of Cerulean Warblers in the North American Breeding Bird Survey (BBS) using a model allowing for site-specific heterogeneity in model parameters. The results indicate relatively low turnover and a stable distribution of Cerulean Warblers which is in contrast to analyses of counts of individuals from the same survey that indicate important declines. This discrepancy illustrates the inertia in occupancy relative to actual abundance. Furthermore, the model reveals a declining patch survival probability, and increasing turnover, toward the edge of the range of the species, which is consistent with metapopulation perspectives on the genesis of range edges. Given detection/non-detection data, dynamic occupancy models as described here have considerable potential for the study of distributions and range dynamics.
Behavioral Health and Performance Laboratory Standard Measures (BHP-SM)
NASA Technical Reports Server (NTRS)
Williams, Thomas J.; Cromwell, Ronita
2017-01-01
The Spaceflight Standard Measures is a NASA Johnson Space Center Human Research Project (HRP) project that proposes to collect a set of core measurements, representative of many of the human spaceflight risks, from astronauts before, during and after long-duration International Space Station (ISS) missions. The term "standard measures" is defined as a set of core measurements, including physiological, biochemical, psychosocial, cognitive, and functional, that are reliable, valid, and accepted in terrestrial science, are associated with a specific and measurable outcome known to occur as a consequence of spaceflight, that will be collected in a standardized fashion from all (or most) crewmembers. While such measures might be used to define standards of health and performance or readiness for flight, the prime intent in their collection is to allow longitudinal analysis of multiple parameters in order to answer a variety of operational, occupational, and research-based questions. These questions are generally at a high level, and the approach for this project is to populate the standard measures database with the smallest set of data necessary to indicate further detailed research is required. Also included as standard measures are parameters that are not outcome-based in and of-themselves, but provide ancillary information that supports interpretation of the outcome measures, e.g., nutritional assessment, vehicle environmental parameters, crew debriefs, etc. The project's main aim is to ensure that an optimized minimal set of measures is consistently captured from all ISS crewmembers until the end of Station in order to characterize the human in space. -This allows the HRP to identify, establish, and evaluate a common set of measures for use in spaceflight and analog research to: develop baselines, systematically characterize risk likelihood and consequences, and assess effectiveness of countermeasures that work for behavioral health and performance risk factors. -By standardizing the battery of measures on all crewmembers, it will allow the HRP to evaluate countermeasures that work for one physiological system and ensure another system is not negatively affected. -These measures, named "Standard Measures," will serve as a data repository and be available to other studies under data sharing agreements.
Nutritional Status Assessment (SMO 016E)
NASA Technical Reports Server (NTRS)
Smith, S. M.; Zwart, S. R.; Heer, M.; Coburn, S. P.; Booth, S. A.; Jones, J. A.; Lupton, J.
2007-01-01
It has not been possible to assess nutritional status of crew members on the ISS during flight because blood and urine could not be collected during ISS missions. Postflight observations of alterations in nutritional status for several nutrients are troubling, and we require the ability to monitor the status of these nutrients during flight to determine if there is a specific impetus or timeframe for these changes. In addition to the monitoring of crew nutritional status during flight, in-flight sample collection would allow better assessment of countermeasure effectiveness. SMO 016E is also designed to expand the current medical requirement for nutritional assessment (MR016L) to include additional normative markers for assessing crew health and countermeasure effectiveness. Additional markers of bone metabolism will be measured to better monitor bone health and the effectiveness of countermeasures to prevent bone resorption. New markers of oxidative damage will be measured to better assess the type of oxidative insults that occur during space flight. The array of nutritional assessment parameters will be expanded to include parameters that will allow us to better understand changes in folate and vitamin B6 status, and related cardiovascular risk factors during and after flight. Additionally, stress hormones and hormones that affect bone and muscle metabolism will also be measured. This additional assessment will allow us to better monitor the health of crew members and make more accurate recommendations for their rehabilitation. Several nutritional assessment parameters are altered at landing, but it is not known how long these changes persist. We extended the current protocol to include an additional postflight blood and urine sample collection 30 days after landing. Data are being collected before, during, and after flight. These data will provide a complete survey of how nutritional status and related systems are affected by space flight. Analyzing the data will help us to define nutritional requirements for long-duration missions. This expanded set of measurements will also aid in the identification of nutritional countermeasures to counteract, for example, the deleterious effects of microgravity on bone and muscle and the effects of space radiation.
Electroweak baryogenesis in two Higgs doublet models and B meson anomalies
NASA Astrophysics Data System (ADS)
Cline, James M.; Kainulainen, Kimmo; Trott, Michael
2011-11-01
Motivated by 3.9 σ evidence of a CP-violating phase beyond the standard model in the like-sign dimuon asymmetry reported by D∅, we examine the potential for two Higgs doublet models (2HDMs) to achieve successful electroweak baryogenesis (EWBG) while explaining the dimuon anomaly. Our emphasis is on the minimal flavour violating 2HDM, but our numerical scans of model parameter space include type I and type II models as special cases. We incorporate relevant particle physics constraints, including electroweak precision data, b → sγ, the neutron electric dipole moment, R b , and perturbative coupling bounds to constrain the model. Surprisingly, we find that a large enough baryon asymmetry is only consistently achieved in a small subset of parameter space in 2HDMs, regardless of trying to simultaneously account for any B physics anomaly. There is some tension between simultaneous explanation of the dimuon anomaly and baryogenesis, but using a Markov chain Monte Carlo we find several models within 1 σ of the central values. We point out shortcomings with previous studies that reached different conclusions. The restricted parameter space that allows for EWBG makes this scenario highly predictive for collider searches. We discuss the most promising signatures to pursue at the LHC for EWBG-compatible models.
Life Support Filtration System Trade Study for Deep Space Missions
NASA Technical Reports Server (NTRS)
Agui, Juan H.; Perry, Jay L.
2017-01-01
The National Aeronautics and Space Administrations (NASA) technical developments for highly reliable life support systems aim to maximize the viability of long duration deep space missions. Among the life support system functions, airborne particulate matter filtration is a significant driver of launch mass because of the large geometry required to provide adequate filtration performance and because of the number of replacement filters needed to a sustain a mission. A trade analysis incorporating various launch, operational and maintenance parameters was conducted to investigate the trade-offs between the various particulate matter filtration configurations. In addition to typical launch parameters such as mass, volume and power, the amount of crew time dedicated to system maintenance becomes an increasingly crucial factor for long duration missions. The trade analysis evaluated these parameters for conventional particulate matter filtration technologies and a new multi-stage particulate matter filtration system under development by NASAs Glenn Research Center. The multi-stage filtration system features modular components that allow for physical configuration flexibility. Specifically, the filtration system components can be configured in distributed, centralized, and hybrid physical layouts that can result in considerable mass savings compared to conventional particulate matter filtration technologies. The trade analysis results are presented and implications for future transit and surface missions are discussed.
Predicting the Consequences of MMOD Penetrations on the International Space Station
NASA Technical Reports Server (NTRS)
Hyde, James; Christiansen, E.; Lear, D.; Evans
2018-01-01
The threat from micrometeoroid and orbital debris (MMOD) impacts on space vehicles is often quantified in terms of the probability of no penetration (PNP). However, for large spacecraft, especially those with multiple compartments, a penetration may have a number of possible outcomes. The extent of the damage (diameter of hole, crack length or penetration depth), the location of the damage relative to critical equipment or crew, crew response, and even the time of day of the penetration are among the many factors that can affect the outcome. For the International Space Station (ISS), a Monte-Carlo style software code called Manned Spacecraft Crew Survivability (MSCSurv) is used to predict the probability of several outcomes of an MMOD penetration-broadly classified as loss of crew (LOC), crew evacuation (Evac), loss of escape vehicle (LEV), and nominal end of mission (NEOM). By generating large numbers of MMOD impacts (typically in the billions) and tracking the consequences, MSCSurv allows for the inclusion of a large number of parameters and models as well as enabling the consideration of uncertainties in the models and parameters. MSCSurv builds upon the results from NASA's Bumper software (which provides the probability of penetration and critical input data to MSCSurv) to allow analysts to estimate the probability of LOC, Evac, LEV, and NEOM. This paper briefly describes the overall methodology used by NASA to quantify LOC, Evac, LEV, and NEOM with particular emphasis on describing in broad terms how MSCSurv works and its capabilities and most significant models.
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1973-01-01
The derivation of an approximate error characteristic equation describing the transient system error response is given, along with a procedure for selecting adaptive gain parameters so as to relate to the transient error response. A detailed example of the application and implementation of these methods for a space shuttle type vehicle is included. An extension of the characteristic equation technique is used to provide an estimate of the magnitude of the maximum system error and an estimate of the time of occurrence of this maximum after a plant parameter disturbance. Techniques for relaxing certain stability requirements and the conditions under which this can be done and still guarantee asymptotic stability of the system error are discussed. Such conditions are possible because the Lyapunov methods used in the stability derivation allow for overconstraining a problem in the process of insuring stability.
Particle motion around magnetized black holes: Preston-Poisson space-time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konoplya, R. A.
We analyze the motion of massless and massive particles around black holes immersed in an asymptotically uniform magnetic field and surrounded by some mechanical structure, which provides the magnetic field. The space-time is described by the Preston-Poisson metric, which is the generalization of the well-known Ernst metric with a new parameter, tidal force, characterizing the surrounding structure. The Hamilton-Jacobi equations allow the separation of variables in the equatorial plane. The presence of a tidal force from the surroundings considerably changes the parameters of the test particle motion: it increases the radius of circular orbits of particles and increases the bindingmore » energy of massive particles going from a given circular orbit to the innermost stable orbit near the black hole. In addition, it increases the distance of the minimal approach, time delay, and bending angle for a ray of light propagating near the black hole.« less
Scale Space for Camera Invariant Features.
Puig, Luis; Guerrero, José J; Daniilidis, Kostas
2014-09-01
In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, is combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for the intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best-existing methods showing competitive results in all type of cameras: catadioptric, fisheye, and perspective.
NASA Technical Reports Server (NTRS)
Roosta, Ramin; Wang, Xinchen; Sadigursky, Michael; Tracton, Phil
2004-01-01
Field Programmable Gate Arrays (FPGA) have played increasingly important roles in military and aerospace applications. Xilinx SRAM-based FPGAs have been extensively used in commercial applications. They have been used less frequently in space flight applications due to their susceptibility to single-event upsets. Reliability of these devices in space applications is a concern that has not been addressed. The objective of this project is to design a fully programmable hardware/software platform that allows (but is not limited to) comprehensive static/dynamic burn-in test of Virtex-II 3000 FPGAs, at speed test and SEU test. Conventional methods test very few discrete AC parameters (primarily switching) of a given integrated circuit. This approach will test any possible configuration of the FPGA and any associated performance parameters. It allows complete or partial re-programming of the FPGA and verification of the program by using read back followed by dynamic test. Designers have full control over which functional elements of the FPGA to stress. They can completely simulate all possible types of configurations/functions. Another benefit of this platform is that it allows collecting information on elevation of the junction temperature as a function of gate utilization, operating frequency and functionality. A software tool has been implemented to demonstrate the various features of the system. The software consists of three major parts: the parallel interface driver, main system procedure and a graphical user interface (GUI).
Apollo 8 Commander Frank Borman Receives Presidential Call
NASA Technical Reports Server (NTRS)
1968-01-01
Apollo 8 Astronaut Frank Borman, commander of the first manned Saturn V space flight into Lunar orbit, accepted a phone call from the U.S. President Lyndon B. Johnson prior to launch. Borman, along with astronauts William Anders, Lunar Module (LM) pilot, and James Lovell, Command Module (CM) pilot, launched aboard the Apollo 8 mission on December 21, 1968 and returned safely to Earth on December 27, 1968. The mission achieved operational experience and tested the Apollo command module systems, including communications, tracking, and life-support, in cis-lunar space and lunar orbit, and allowed evaluation of crew performance on a lunar orbiting mission. The crew photographed the lunar surface, both far side and near side, obtaining information on topography and landmarks as well as other scientific information necessary for future Apollo landings. All systems operated within allowable parameters and all objectives of the mission were achieved.
Apollo 8 Astronaut James Lovell On Phone With President Johnson
NASA Technical Reports Server (NTRS)
1968-01-01
Apollo 8 Astronaut James Lovell, Command Module (CM) pilot of the first manned Saturn V space flight into Lunar orbit, accepted a phone call from the U.S. President Lyndon B. Johnson prior to launch. Lovell, along with astronauts William Anders, Lunar Module (LM) pilot, and Frank Borman, commander, launched aboard the Apollo 8 mission on December 21, 1968 and returned safely to Earth on December 27, 1968. The mission achieved operational experience and tested the Apollo command module systems, including communications, tracking, and life-support, in cis-lunar space and lunar orbit, and allowed evaluation of crew performance on a lunar orbiting mission. The crew photographed the lunar surface, both far side and near side, obtaining information on topography and landmarks as well as other scientific information necessary for future Apollo landings. All systems operated within allowable parameters and all objectives of the mission were achieved.
Apollo 8 Astronaut William Anders On Phone With President Johnson
NASA Technical Reports Server (NTRS)
1968-01-01
Apollo 8 Astronaut William Anders, Lunar Module (LM) pilot of the first manned Saturn V space flight into Lunar orbit, accepted a phone call from the U.S. President Lyndon B. Johnson prior to launch. Anders, along with astronauts James Lovell, Command Module (CM) pilot, and Frank Borman, commander, launched aboard the Apollo 8 mission on December 21, 1968 and returned safely to Earth on December 27, 1968. The mission achieved operational experience and tested the Apollo command module systems, including communications, tracking, and life-support, in cis-lunar space and lunar orbit, and allowed evaluation of crew performance on a lunar orbiting mission. The crew photographed the lunar surface, both far side and near side, obtaining information on topography and landmarks as well as other scientific information necessary for future Apollo landings. All systems operated within allowable parameters and all objectives of the mission were achieved.
NASA Astrophysics Data System (ADS)
Adamson, P.; An, F. P.; Anghel, I.; Aurisano, A.; Balantekin, A. B.; Band, H. R.; Barr, G.; Bishai, M.; Blake, A.; Blyth, S.; Bock, G. J.; Bogert, D.; Cao, D.; Cao, G. F.; Cao, J.; Cao, S. V.; Carroll, T. J.; Castromonte, C. M.; Cen, W. R.; Chan, Y. L.; Chang, J. F.; Chang, L. C.; Chang, Y.; Chen, H. S.; Chen, Q. Y.; Chen, R.; Chen, S. M.; Chen, Y.; Chen, Y. X.; Cheng, J.; Cheng, J.-H.; Cheng, Y. P.; Cheng, Z. K.; Cherwinka, J. J.; Childress, S.; Chu, M. C.; Chukanov, A.; Coelho, J. A. B.; Corwin, L.; Cronin-Hennessy, D.; Cummings, J. P.; de Arcos, J.; De Rijck, S.; Deng, Z. Y.; Devan, A. V.; Devenish, N. E.; Ding, X. F.; Ding, Y. Y.; Diwan, M. V.; Dolgareva, M.; Dove, J.; Dwyer, D. A.; Edwards, W. R.; Escobar, C. O.; Evans, J. J.; Falk, E.; Feldman, G. J.; Flanagan, W.; Frohne, M. V.; Gabrielyan, M.; Gallagher, H. R.; Germani, S.; Gill, R.; Gomes, R. A.; Gonchar, M.; Gong, G. H.; Gong, H.; Goodman, M. C.; Gouffon, P.; Graf, N.; Gran, R.; Grassi, M.; Grzelak, K.; Gu, W. Q.; Guan, M. Y.; Guo, L.; Guo, R. P.; Guo, X. H.; Guo, Z.; Habig, A.; Hackenburg, R. W.; Hahn, S. R.; Han, R.; Hans, S.; Hartnell, J.; Hatcher, R.; He, M.; Heeger, K. M.; Heng, Y. K.; Higuera, A.; Holin, A.; Hor, Y. K.; Hsiung, Y. B.; Hu, B. Z.; Hu, T.; Hu, W.; Huang, E. C.; Huang, H. X.; Huang, J.; Huang, X. T.; Huber, P.; Huo, W.; Hussain, G.; Hylen, J.; Irwin, G. M.; Isvan, Z.; Jaffe, D. E.; Jaffke, P.; James, C.; Jen, K. L.; Jensen, D.; Jetter, S.; Ji, X. L.; Ji, X. P.; Jiao, J. B.; Johnson, R. A.; de Jong, J. K.; Joshi, J.; Kafka, T.; Kang, L.; Kasahara, S. M. S.; Kettell, S. H.; Kohn, S.; Koizumi, G.; Kordosky, M.; Kramer, M.; Kreymer, A.; Kwan, K. K.; Kwok, M. W.; Kwok, T.; Lang, K.; Langford, T. J.; Lau, K.; Lebanowski, L.; Lee, J.; Lee, J. H. C.; Lei, R. T.; Leitner, R.; Leung, J. K. C.; Li, C.; Li, D. J.; Li, F.; Li, G. S.; Li, Q. J.; Li, S.; Li, S. C.; Li, W. D.; Li, X. N.; Li, Y. F.; Li, Z. B.; Liang, H.; Lin, C. J.; Lin, G. L.; Lin, S.; Lin, S. K.; Lin, Y.-C.; Ling, J. J.; Link, J. M.; Litchfield, P. J.; Littenberg, L.; Littlejohn, B. R.; Liu, D. W.; Liu, J. C.; Liu, J. L.; Loh, C. W.; Lu, C.; Lu, H. Q.; Lu, J. S.; Lucas, P.; Luk, K. B.; Lv, Z.; Ma, Q. M.; Ma, X. B.; Ma, X. Y.; Ma, Y. Q.; Malyshkin, Y.; Mann, W. A.; Marshak, M. L.; Martinez Caicedo, D. A.; Mayer, N.; McDonald, K. T.; McGivern, C.; McKeown, R. D.; Medeiros, M. M.; Mehdiyev, R.; Meier, J. R.; Messier, M. D.; Miller, W. H.; Mishra, S. R.; Mitchell, I.; Mooney, M.; Moore, C. D.; Mualem, L.; Musser, J.; Nakajima, Y.; Naples, D.; Napolitano, J.; Naumov, D.; Naumova, E.; Nelson, J. K.; Newman, H. B.; Ngai, H. Y.; Nichol, R. J.; Ning, Z.; Nowak, J. A.; O'Connor, J.; Ochoa-Ricoux, J. P.; Olshevskiy, A.; Orchanian, M.; Pahlka, R. B.; Paley, J.; Pan, H.-R.; Park, J.; Patterson, R. B.; Patton, S.; Pawloski, G.; Pec, V.; Peng, J. C.; Perch, A.; Pfützner, M. M.; Phan, D. D.; Phan-Budd, S.; Pinsky, L.; Plunkett, R. K.; Poonthottathil, N.; Pun, C. S. J.; Qi, F. Z.; Qi, M.; Qian, X.; Qiu, X.; Radovic, A.; Raper, N.; Rebel, B.; Ren, J.; Rosenfeld, C.; Rosero, R.; Roskovec, B.; Ruan, X. C.; Rubin, H. A.; Sail, P.; Sanchez, M. C.; Schneps, J.; Schreckenberger, A.; Schreiner, P.; Sharma, R.; Moed Sher, S.; Sousa, A.; Steiner, H.; Sun, G. X.; Sun, J. L.; Tagg, N.; Talaga, R. L.; Tang, W.; Taychenachev, D.; Thomas, J.; Thomson, M. A.; Tian, X.; Timmons, A.; Todd, J.; Tognini, S. C.; Toner, R.; Torretta, D.; Treskov, K.; Tsang, K. V.; Tull, C. E.; Tzanakos, G.; Urheim, J.; Vahle, P.; Viaux, N.; Viren, B.; Vorobel, V.; Wang, C. H.; Wang, M.; Wang, N. Y.; Wang, R. G.; Wang, W.; Wang, X.; Wang, Y. F.; Wang, Z.; Wang, Z. M.; Webb, R. C.; Weber, A.; Wei, H. Y.; Wen, L. J.; Whisnant, K.; White, C.; Whitehead, L.; Whitehead, L. H.; Wise, T.; Wojcicki, S. G.; Wong, H. L. H.; Wong, S. C. F.; Worcester, E.; Wu, C.-H.; Wu, Q.; Wu, W. J.; Xia, D. M.; Xia, J. K.; Xing, Z. Z.; Xu, J. L.; Xu, J. Y.; Xu, Y.; Xue, T.; Yang, C. G.; Yang, H.; Yang, L.; Yang, M. S.; Yang, M. T.; Ye, M.; Ye, Z.; Yeh, M.; Young, B. L.; Yu, Z. Y.; Zeng, S.; Zhan, L.; Zhang, C.; Zhang, H. H.; Zhang, J. W.; Zhang, Q. M.; Zhang, X. T.; Zhang, Y. M.; Zhang, Y. X.; Zhang, Z. J.; Zhang, Z. P.; Zhang, Z. Y.; Zhao, J.; Zhao, Q. W.; Zhao, Y. B.; Zhong, W. L.; Zhou, L.; Zhou, N.; Zhuang, H. L.; Zou, J. H.; Daya Bay Collaboration
2016-10-01
Searches for a light sterile neutrino have been performed independently by the MINOS and the Daya Bay experiments using the muon (anti)neutrino and electron antineutrino disappearance channels, respectively. In this Letter, results from both experiments are combined with those from the Bugey-3 reactor neutrino experiment to constrain oscillations into light sterile neutrinos. The three experiments are sensitive to complementary regions of parameter space, enabling the combined analysis to probe regions allowed by the Liquid Scintillator Neutrino Detector (LSND) and MiniBooNE experiments in a minimally extended four-neutrino flavor framework. Stringent limits on sin22 θμ e are set over 6 orders of magnitude in the sterile mass-squared splitting Δ m412. The sterile-neutrino mixing phase space allowed by the LSND and MiniBooNE experiments is excluded for Δ m412<0.8 eV2 at 95 % CLs .
Fuzzy Logic Approaches to Multi-Objective Decision-Making in Aerospace Applications
NASA Technical Reports Server (NTRS)
Hardy, Terry L.
1994-01-01
Fuzzy logic allows for the quantitative representation of multi-objective decision-making problems which have vague or fuzzy objectives and parameters. As such, fuzzy logic approaches are well-suited to situations where alternatives must be assessed by using criteria that are subjective and of unequal importance. This paper presents an overview of fuzzy logic and provides sample applications from the aerospace industry. Applications include an evaluation of vendor proposals, an analysis of future space vehicle options, and the selection of a future space propulsion system. On the basis of the results provided in this study, fuzzy logic provides a unique perspective on the decision-making process, allowing the evaluator to assess the degree to which each option meets the evaluation criteria. Future decision-making should take full advantage of fuzzy logic methods to complement existing approaches in the selection of alternatives.
NASA Astrophysics Data System (ADS)
Günther, Uwe; Zhuk, Alexander; Bezerra, Valdir B.; Romero, Carlos
2005-08-01
We study multi-dimensional gravitational models with scalar curvature nonlinearities of types R-1 and R4. It is assumed that the corresponding higher dimensional spacetime manifolds undergo a spontaneous compactification to manifolds with a warped product structure. Special attention has been paid to the stability of the extra-dimensional factor spaces. It is shown that for certain parameter regions the systems allow for a freezing stabilization of these spaces. In particular, we find for the R-1 model that configurations with stabilized extra dimensions do not provide a late-time acceleration (they are AdS), whereas the solution branch which allows for accelerated expansion (the dS branch) is incompatible with stabilized factor spaces. In the case of the R4 model, we obtain that the stability region in parameter space depends on the total dimension D = dim(M) of the higher dimensional spacetime M. For D > 8 the stability region consists of a single (absolutely stable) sector which is shielded from a conformal singularity (and an antigravity sector beyond it) by a potential barrier of infinite height and width. This sector is smoothly connected with the stability region of a curvature-linear model. For D < 8 an additional (metastable) sector exists which is separated from the conformal singularity by a potential barrier of finite height and width so that systems in this sector are prone to collapse into the conformal singularity. This second sector is not smoothly connected with the first (absolutely stable) one. Several limiting cases and the possibility of inflation are discussed for the R4 model.
NASA Astrophysics Data System (ADS)
Portier-Fozzani, F.; Noens, J.-C.
In this presentation, I will present different techniques for 3D coronal structures reconstructions. Multiscale vision model (MVM, collaboration with A. Bijaoui) based on wavelet decomposition were used to prepare data. With SOHO/EIT, geometrical constraints were added to be able to measure by stereovision loop size parameters. Thus from these parameters, while including information of several observation wavelenghts, it has been possible by using the CHIANTI code to derive temperature and density along and across the loops, and thus to determine loops physical properties. During the emergence of a new active region, a more sophisticated method, was made to measure the twist degree variations. Loops appear twisted and detwist as expand. The magnetic helicity conservation gives thus important criteria to derive the limit of the stability for a non forced phenomena. Sigmoids, twisted ARLs, sheared filament are related with flares and CMEs. In that case 3D measurement can say upon which level of twist the structure will become unstable. With basic geometrical measures, it has been seen that a new active region reconnected a sigmoide leading to a flare. Also, for CMEs, the measure of the filament ejection angle from stereo EUV images, and the following of temporal evolution from coronagraphic measurement such as done by HACO at the Pic Du Midi Observatory, gives possibility to determine if the CME is coming toward the Earth, and when eventually would be the impact with the magnetosphere. The input of new missions such as STEREO/SECCHI would allow us to better understood the coronal dynamic. Such joined observations GBO-space, used simultaneously together with 3D methods, will allow to develop efficiently forecasting for Space Weather.
Alignment limit of the NMSSM Higgs sector
Carena, Marcela; Haber, Howard E.; Low, Ian; ...
2016-02-17
The Next-to-Minimal Supersymmetric extension of the Standard Model (NMSSM) with a Higgs boson of mass 125 GeV can be compatible with stop masses of order of the electroweak scale, thereby reducing the degree of fine-tuning necessary to achieve electroweak symmetry breaking. Moreover, in an attractive region of the NMSSM parameter space, corresponding to the \\alignment limit" in which one of the neutral Higgs fields lies approximately in the same direction in field space as the doublet Higgs vacuum expectation value, the observed Higgs boson is predicted to have Standard- Model-like properties. We derive analytical expressions for the alignment conditions andmore » show that they point toward a more natural region of parameter space for electroweak symmetry breaking, while allowing for perturbativity of the theory up to the Planck scale. Additionally, the alignment limit in the NMSSM leads to a well defined spectrum in the Higgs and Higgsino sectors, and yields a rich and interesting Higgs boson phenomenology that can be tested at the LHC. Here, we discuss the most promising channels for discovery and present several benchmark points for further study.« less
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1984-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMT) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model for the number of these spurious photons which strike the photocathode of the multiplier phototube which in turn produce the unwanted photon noise are presented. The model is applicable to both galactic cosmic rays and charged particles trapped in the Earth's radiation belts. The model which was programmed allows for easy adaption to a wide range of particles and different parameters for the phototube of the multiplier. The probability density functions for photons noise caused by protons, alpha particles, and carbon nuclei were using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1986-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMT) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model for the number of these spurious photons which strike the photocathodes of the multiplier phototube which in turn produce the unwanted photon noise are presented. The model is applicable to both galactic cosmic rays and charged particles trapped in the earth's radiation belts. The model which was programmed allows for easy adaption to a wide range of particles and different parameters for the phototube of the multiplier. The probability density functions for photons noise caused by protons, alpha particles, and carbon nuclei were using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
Lomax, Terri L; Findlay, Kirk A; White, T J; Winner, William E
2003-06-01
Plants will play an essential role in providing life support for any long-term space exploration or habitation. We are evaluating the feasibility of an adaptable system for measuring the response of plants to any unique space condition and optimizing plant performance under those conditions. The proposed system is based on a unique combination of systems including the rapid advances in the field of plant genomics, microarray technology for measuring gene expression, bioinformatics, gene pathways and networks, physiological measurements in controlled environments, and advances in automation and robotics. The resulting flexible module for monitoring and optimizing plant responses will be able to be inserted as a cassette into a variety of platforms and missions for either experimental or life support purposes. The results from future plant functional genomics projects have great potential to be applied to those plant species most likely to be used in space environments. Eventually, it will be possible to use the plant genetic assessment and control system to optimize the performance of any plant in any space environment. In addition to allowing the effective control of environmental parameters for enhanced plant productivity and other life support functions, the proposed module will also allow the selection or engineering of plants to thrive in specific space environments. The proposed project will advance human exploration of space in the near- and mid-term future on the International Space Station and free-flying satellites and in the far-term for longer duration missions and eventual space habitation.
NASA Astrophysics Data System (ADS)
Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.
1998-04-01
The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
NASA Astrophysics Data System (ADS)
Smith, David; Schuldt, Carsten; Lorenz, Jessica; Tschirner, Teresa; Moebius-Winkler, Maximilian; Kaes, Josef; Glaser, Martin; Haendler, Tina; Schnauss, Joerg
2015-03-01
Biologically evolved materials are often used as inspiration in the development of new materials as well as examinations into the underlying physical principles governing their behavior. For instance, the biopolymer constituents of the highly dynamic cellular cytoskeleton such as actin have inspired a deep understanding of soft polymer-based materials. However, the molecular toolbox provided by biological systems has been evolutionarily optimized to carry out the necessary functions of cells, and the inability modify basic properties such as biopolymer stiffness hinders a meticulous examination of parameter space. Using actin as inspiration, we circumvent these limitations using model systems assembled from programmable materials such as DNA. Nanorods with comparable, but controllable dimensions and mechanical properties as actin can be constructed from small sets of specially designed DNA strands. In entangled gels, these allow us to systematically determine the dependence of network mechanical properties on parameters such as persistence length and crosslink strength. At higher concentrations in the presence of local attractive forces, we see a transition to highly-ordered bundled and ``aster'' phases similar to those previously characterized in systems of actin or microtubules.
The Gamma-Ray Burst ToolSHED is Open for Business
NASA Astrophysics Data System (ADS)
Giblin, Timothy W.; Hakkila, Jon; Haglin, David J.; Roiger, Richard J.
2004-09-01
The GRB ToolSHED, a Gamma-Ray Burst SHell for Expeditions in Data-Mining, is now online and available via a web browser to all in the scientific community. The ToolSHED is an online web utility that contains pre-processed burst attributes of the BATSE catalog and a suite of induction-based machine learning and statistical tools for classification and cluster analysis. Users create their own login account and study burst properties within user-defined multi-dimensional parameter spaces. Although new GRB attributes are periodically added to the database for user selection, the ToolSHED has a feature that allows users to upload their own burst attributes (e.g. spectral parameters, etc.) so that additional parameter spaces can be explored. A data visualization feature using GNUplot and web-based IDL has also been implemented to provide interactive plotting of user-selected session output. In an era in which GRB observations and attributes are becoming increasingly more complex, a utility such as the GRB ToolSHED may play an important role in deciphering GRB classes and understanding intrinsic burst properties.
Koski, Jason P; Riggleman, Robert A
2017-04-28
Block copolymers, due to their ability to self-assemble into periodic structures with long range order, are appealing candidates to control the ordering of functionalized nanoparticles where it is well-accepted that the spatial distribution of nanoparticles in a polymer matrix dictates the resulting material properties. The large parameter space associated with block copolymer nanocomposites makes theory and simulation tools appealing to guide experiments and effectively isolate parameters of interest. We demonstrate a method for performing field-theoretic simulations in a constant volume-constant interfacial tension ensemble (nVγT) that enables the determination of the equilibrium properties of block copolymer nanocomposites, including when the composites are placed under tensile or compressive loads. Our approach is compatible with the complex Langevin simulation framework, which allows us to go beyond the mean-field approximation. We validate our approach by comparing our nVγT approach with free energy calculations to determine the ideal domain spacing and modulus of a symmetric block copolymer melt. We analyze the effect of numerical and thermodynamic parameters on the efficiency of the nVγT ensemble and subsequently use our method to investigate the ideal domain spacing, modulus, and nanoparticle distribution of a lamellar forming block copolymer nanocomposite. We find that the nanoparticle distribution is directly linked to the resultant domain spacing and is dependent on polymer chain density, nanoparticle size, and nanoparticle chemistry. Furthermore, placing the system under tension or compression can qualitatively alter the nanoparticle distribution within the block copolymer.
Spinor Field Nonlinearity and Space-Time Geometry
NASA Astrophysics Data System (ADS)
Saha, Bijan
2018-03-01
Within the scope of Bianchi type VI,VI0,V, III, I, LRSBI and FRW cosmological models we have studied the role of nonlinear spinor field on the evolution of the Universe and the spinor field itself. It was found that due to the presence of non-trivial non-diagonal components of the energy-momentum tensor of the spinor field in the anisotropic space-time, there occur some severe restrictions both on the metric functions and on the components of the spinor field. In this report we have considered a polynomial nonlinearity which is a function of invariants constructed from the bilinear spinor forms. It is found that in case of a Bianchi type-VI space-time, depending of the sign of self-coupling constants, the model allows either late time acceleration or oscillatory mode of evolution. In case of a Bianchi VI 0 type space-time due to the specific behavior of the spinor field we have two different scenarios. In one case the invariants constructed from bilinear spinor forms become trivial, thus giving rise to a massless and linear spinor field Lagrangian. This case is equivalent to the vacuum solution of the Bianchi VI 0 type space-time. The second case allows non-vanishing massive and nonlinear terms and depending on the sign of coupling constants gives rise to accelerating mode of expansion or the one that after obtaining some maximum value contracts and ends in big crunch, consequently generating space-time singularity. In case of a Bianchi type-V model there occur two possibilities. In one case we found that the metric functions are similar to each other. In this case the Universe expands with acceleration if the self-coupling constant is taken to be a positive one, whereas a negative coupling constant gives rise to a cyclic or periodic solution. In the second case the spinor mass and the spinor field nonlinearity vanish and the Universe expands linearly in time. In case of a Bianchi type-III model the space-time remains locally rotationally symmetric all the time, though the isotropy of space-time can be attained for a large proportionality constant. As far as evolution is concerned, depending on the sign of coupling constant the model allows both accelerated and oscillatory mode of expansion. A negative coupling constant leads to an oscillatory mode of expansion, whereas a positive coupling constant generates expanding Universe with late time acceleration. Both deceleration parameter and EoS parameter in this case vary with time and are in agreement with modern concept of space-time evolution. In case of a Bianchi type-I space-time the non-diagonal components lead to three different possibilities. In case of a full BI space-time we find that the spinor field nonlinearity and the massive term vanish, hence the spinor field Lagrangian becomes massless and linear. In two other cases the space-time evolves into either LRSBI or FRW Universe. If we consider a locally rotationally symmetric BI( LRSBI) model, neither the mass term nor the spinor field nonlinearity vanishes. In this case depending on the sign of coupling constant we have either late time accelerated mode of expansion or oscillatory mode of evolution. In this case for an expanding Universe we have asymptotical isotropization. Finally, in case of a FRW model neither the mass term nor the spinor field nonlinearity vanishes. Like in LRSBI case we have either late time acceleration or cyclic mode of evolution. These findings allow us to conclude that the spinor field is very sensitive to the gravitational one.
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
How Complex, Probable, and Predictable is Genetically Driven Red Queen Chaos?
Duarte, Jorge; Rodrigues, Carla; Januário, Cristina; Martins, Nuno; Sardanyés, Josep
2015-12-01
Coevolution between two antagonistic species has been widely studied theoretically for both ecologically- and genetically-driven Red Queen dynamics. A typical outcome of these systems is an oscillatory behavior causing an endless series of one species adaptation and others counter-adaptation. More recently, a mathematical model combining a three-species food chain system with an adaptive dynamics approach revealed genetically driven chaotic Red Queen coevolution. In the present article, we analyze this mathematical model mainly focusing on the impact of species rates of evolution (mutation rates) in the dynamics. Firstly, we analytically proof the boundedness of the trajectories of the chaotic attractor. The complexity of the coupling between the dynamical variables is quantified using observability indices. By using symbolic dynamics theory, we quantify the complexity of genetically driven Red Queen chaos computing the topological entropy of existing one-dimensional iterated maps using Markov partitions. Co-dimensional two bifurcation diagrams are also built from the period ordering of the orbits of the maps. Then, we study the predictability of the Red Queen chaos, found in narrow regions of mutation rates. To extend the previous analyses, we also computed the likeliness of finding chaos in a given region of the parameter space varying other model parameters simultaneously. Such analyses allowed us to compute a mean predictability measure for the system in the explored region of the parameter space. We found that genetically driven Red Queen chaos, although being restricted to small regions of the analyzed parameter space, might be highly unpredictable.
Evaluation of powertrain solutions for future tactical truck vehicle systems
NASA Astrophysics Data System (ADS)
Pisu, Pierluigi; Cantemir, Codrin-Gruie; Dembski, Nicholas; Rizzoni, Giorgio; Serrao, Lorenzo; Josephson, John R.; Russell, James
2006-05-01
The article presents the results of a large scale design space exploration for the hybridization of two off-road vehicles, part of the Future Tactical Truck System (FTTS) family: Maneuver Sustainment Vehicle (MSV) and Utility Vehicle (UV). Series hybrid architectures are examined. The objective of the paper is to illustrate a novel design methodology that allows for the choice of the optimal values of several vehicle parameters. The methodology consists in an extensive design space exploration, which involves running a large number of computer simulations with systematically varied vehicle design parameters, where each variant is paced through several different mission profiles, and multiple attributes of performance are measured. The resulting designs are filtered to choose the design tradeoffs that better satisfy the performance and fuel economy requirements. At the end, few promising vehicle configuration designs will be selected that will need additional detailed investigation including neglected metrics like ride and drivability. Several powertrain architectures have been simulated. The design parameters include the number of axles in the vehicle (2 or 3), the number of electric motors per axle (1 or 2), the type of internal combustion engine, the type and quantity of energy storage system devices (batteries, electrochemical capacitors or both together). An energy management control strategy has also been developed to provide efficiency and performance. The control parameters are tunable and have been included into the design space exploration. The results show that the internal combustion engine and the energy storage system devices are extremely important for the vehicle performance.
Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.
2017-01-01
This paper presents a methodology for automated model order reduction (MOR) of flexible aircrafts to construct linear parameter-varying (LPV) reduced order models (ROM) for aeroservoelasticity (ASE) analysis and control synthesis in broad flight parameter space. The novelty includes utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and heuristics requirement to perform MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that X-56A ROM with less than one-seventh the number of states relative to the original model is able to accurately predict system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of the grid points in the parameter space where flight dynamics varies dramatically to enhance interpolation accuracy without over-burdening controller synthesis and onboard memory efforts downstream. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.
Muñoz, David J.; Miller, David A.W.; Sutherland, Chris; Grant, Evan H. Campbell
2016-01-01
The cryptic behavior and ecology of herpetofauna make estimating the impacts of environmental change on demography difficult; yet, the ability to measure demographic relationships is essential for elucidating mechanisms leading to the population declines reported for herpetofauna worldwide. Recently developed spatial capture–recapture (SCR) methods are well suited to standard herpetofauna monitoring approaches. Individually identifying animals and their locations allows accurate estimates of population densities and survival. Spatial capture–recapture methods also allow estimation of parameters describing space-use and movement, which generally are expensive or difficult to obtain using other methods. In this paper, we discuss the basic components of SCR models, the available software for conducting analyses, and the experimental designs based on common herpetological survey methods. We then apply SCR models to Red-backed Salamander (Plethodon cinereus), to determine differences in density, survival, dispersal, and space-use between adult male and female salamanders. By highlighting the capabilities of SCR, and its advantages compared to traditional methods, we hope to give herpetologists the resource they need to apply SCR in their own systems.
Task-based design of a synthetic-collimator SPECT system used for small animal imaging.
Lin, Alexander; Kupinski, Matthew A; Peterson, Todd E; Shokouhi, Sepideh; Johnson, Lindsay C
2018-05-07
In traditional multipinhole SPECT systems, image multiplexing - the overlapping of pinhole projection images - may occur on the detector, which can inhibit quality image reconstructions due to photon-origin uncertainty. One proposed system to mitigate the effects of multiplexing is the synthetic-collimator SPECT system. In this system, two detectors, a silicon detector and a germanium detector, are placed at different distances behind the multipinhole aperture, allowing for image detection to occur at different magnifications and photon energies, resulting in higher overall sensitivity while maintaining high resolution. The unwanted effects of multiplexing are reduced by utilizing the additional data collected from the front silicon detector. However, determining optimal system configurations for a given imaging task requires efficient parsing of the complex parameter space, to understand how pinhole spacings and the two detector distances influence system performance. In our simulation studies, we use the ensemble mean-squared error of the Wiener estimator (EMSE W ) as the figure of merit to determine optimum system parameters for the task of estimating the uptake of an 123 I-labeled radiotracer in three different regions of a computer-generated mouse brain phantom. The segmented phantom map is constructed by using data from the MRM NeAt database and allows for the reduction in dimensionality of the system matrix which improves the computational efficiency of scanning the system's parameter space. To contextualize our results, the Wiener estimator is also compared against a region of interest estimator using maximum-likelihood reconstructed data. Our results show that the synthetic-collimator SPECT system outperforms traditional multipinhole SPECT systems in this estimation task. We also find that image multiplexing plays an important role in the system design of the synthetic-collimator SPECT system, with optimal germanium detector distances occurring at maxima in the derivative of the percent multiplexing function. Furthermore, we report that improved task performance can be achieved by using an adaptive system design in which the germanium detector distance may vary with projection angle. Finally, in our comparative study, we find that the Wiener estimator outperforms the conventional region of interest estimator. Our work demonstrates how this optimization method has the potential to quickly and efficiently explore vast parameter spaces, providing insight into the behavior of competing factors, which are otherwise very difficult to calculate and study using other existing means. © 2018 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Heinkelmann, Robert; Dick, Galina; Nilsson, Tobias; Soja, Benedikt; Wickert, Jens; Zus, Florian; Schuh, Harald
2015-04-01
Observations from space-geodetic techniques are nowadays increasingly used to derive atmospheric information for various commercial and scientific applications. A prominent example is the operational use of GNSS data to improve global and regional weather forecasts, which was started in 2006. Atmosphere gradients describe the azimuthal asymmetry of zenith delays. Estimates of geodetic and other parameters significantly improve when atmosphere gradients are determined in addition. Here we assess the capability of several space geodetic techniques (GNSS, VLBI, DORIS) to determine atmosphere gradients of refractivity. For this purpose we implement and compare various strategies for gradient estimation, such as different values for the temporal resolution and the corresponding parameter constraints. Applying least squares estimation the gradients are usually deterministically modelled as constants or piece-wise linear functions. In our study we compare this approach with a stochastic approach modelling atmosphere gradients as random walk processes and applying a Kalman Filter for parameter estimation. The gradients, derived from space geodetic techniques are verified by comparison with those derived from Numerical Weather Models (NWM). These model data were generated using raytracing calculations based on European Centre for Medium-Range Weather Forecast (ECMWF) and National Centers for Environmental Prediction (NCEP) analyses with different spatial resolutions. The investigation of the differences between the ECMWF and NCEP gradients hereby in addition allow for an empirical assessment of the quality of model gradients and how suitable the NWM data are for verification. CONT14 (2014-05-06 until 2014-05-20) is the youngest two week long continuous VLBI campaign carried out by IVS (International VLBI Service for Geodesy and Astrometry). It presents the state-of-the-art VLBI performance in terms of number of stations and number of observations and presents thus an excellent test period for comparisons with other space geodetic techniques. During the VLBI campaign CONT14 the HOBART12 and HOBART26 (Hobart, Tasmania, Australia) VLBI antennas were involved that co-locate with each other. The investigation of the gradient estimate differences from these co-located antennas allows for a valuable empirical quality assessment. Another quality criterion for gradient estimates are the differences of parameters at the borders of adjacent 24h-sessions. Both are investigated in our study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bae, Kyu Jung; Baer, Howard; Serce, Hasan
The supersymmetrized DFSZ axion model is highly motivated not only because it offers solutions to both the gauge hierarchy and strong CP problems, but also because it provides a solution to the SUSY μ-problem which naturally allows for a Little Hierarchy. We compute the expected mixed axion-neutralino dark matter abundance for the SUSY DFSZ axion model in two benchmark cases—a natural SUSY model with a standard neutralino underabundance (SUA) and an mSUGRA/CMSSM model with a standard overabundance (SOA). Our computation implements coupled Boltzmann equations which track the radiation density along with neutralino, axion, axion CO (produced via coherent oscillations), saxion,more » saxion CO, axino and gravitino densities. In the SUSY DFSZ model, axions, axinos and saxions go through the process of freeze-in—in contrast to freeze-out or out-of-equilibrium production as in the SUSY KSVZ model—resulting in thermal yields which are largely independent of the re-heat temperature. We find the SUA case with suppressed saxion-axion couplings (ξ=0) only admits solutions for PQ breaking scale f{sub a}∼< 6× 10{sup 12} GeV where the bulk of parameter space tends to be axion-dominated. For SUA with allowed saxion-axion couplings (ξ =1), then f{sub a} values up to ∼ 10{sup 14} GeV are allowed. For the SOA case, almost all of SUSY DFSZ parameter space is disallowed by a combination of overproduction of dark matter, overproduction of dark radiation or violation of BBN constraints. An exception occurs at very large f{sub a}∼ 10{sup 15}–10{sup 16} GeV where large entropy dilution from CO-produced saxions leads to allowed models.« less
NASA Astrophysics Data System (ADS)
Mishchenko, I. A.; Galushko, V. P.; Taran, O. P.
2008-06-01
Research of potato crop productivity under simulated microgravity allows to identify the plants which can become potentially productive under such stress conditions and that might allow to identify the technological parameters of potato production in other space expeditions. One of the traditional practices of planting material treatment against the viruses are the species in vitro. The study of infectious process flow is conducted in the vitro potato in the conditions of clinorotation. The introduction into culture of the meristems from clinostated plants allowed to obtain the regenerants free from the PVX infection. The employment of simulated microgravity for plant remediation reduced the expenditures on the production of in vitro culture 4,5 times, as compared to termoteraphy.
NASA Astrophysics Data System (ADS)
McKague, Darren Shawn
2001-12-01
The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel-by-pixel that can then be summarized over large data sets to obtain the desired statistics. These methods can be quite computationally expensive, and typically don't provide errors on the statistics. A new method is developed to directly retrieve probability distributions of parameters from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions. The method can retrieve joint distributions of parameters that allows for the study of the connection between parameters. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two- dimensional retrieval property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance to retrieval parameter PDF transformation given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation parameter radiance space transformation. The impact of vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields. The beamfilling factors vary quite a bit depending upon the horizontal structure of the rain. The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)
Predictive Feature Selection for Genetic Policy Search
2014-05-22
inverted pendulum balancing problem (Gomez and Miikkulainen, 1999), where the agent must learn a policy in a continuous state space using discrete...algorithms to automate the process of training and/or designing NNs, mitigate these drawbacks and allow NNs to be easily applied to RL domains (Sher, 2012...racing simulator and the double inverted pendulum balance environments. It also includes parameter settings for all algorithms included in the study
Space Particle Hazard Specification, Forecasting, and Mitigation
2007-11-30
Automated FTP scripts permitted users to automatically update their global input parameter data set directly from the National Oceanic and...of CEASE capabilities. The angular field-of-view for CEASE is relatively large and will not allow for pitch angle resolved measurements. However... angular zones spanning 120° in the plane containing the magnetic field with an approximate 4° width in the direction perpendicular to the look-plane
Nanofiber Nerve Guide for Peripheral Nerve Repair and Regeneration
2016-04-01
faster regeneration and functional recovery. Peripheral nerve injury is a common complication of complex tissue trauma and often results in significant...having poor regeneration overall, the areas of regenerating nerve tissue could often be found in sections of the nerve guide where luminal spaces of...conducted in this Aim also provided important insight into the NGC design parameters necessary to allow for maximum nerve tissue ingrowth and regeneration
Parameter Study for Optimizing the Mass of a Space Nuclear Power System Radiation Shield
2002-03-01
long been selected as the best choice for neutron shielding of a SNPS [ 3 :24-30]. The low atomic number of both lithium and hydrogen allows...Integer :: missed(1:nBatches) Real(dp), Dimension(1: 3 ) :: r1, r2, omegaHat Real(dp) :: Radius1, Radius2, z1, z2, xi, omega , rFrac Real(dp) :: pAvg... 3 Motivation
Hilltop supernatural inflation and gravitino problem
NASA Astrophysics Data System (ADS)
Kohri, Kazunori; Lin, Chia-Min
2010-11-01
In this paper, we explore the parameter space of hilltop supernatural inflation model and show the regime within which there is no gravitino problem even if we consider both thermal and nonthermal production mechanisms. We make plots for the allowed reheating temperature as a function of gravitino mass by constraints from big-bang nucleosynthesis. We also plot the constraint when gravitino is assumed to be stable and plays the role of dark matter.
Defining Exercise Performance Metrics for Flight Hardware Development
NASA Technical Reports Server (NTRS)
Beyene, Nahon M.
2004-01-01
The space industry has prevailed over numerous design challenges in the spirit of exploration. Manned space flight entails creating products for use by humans and the Johnson Space Center has pioneered this effort as NASA's center for manned space flight. NASA Astronauts use a suite of flight exercise hardware to maintain strength for extravehicular activities and to minimize losses in muscle mass and bone mineral density. With a cycle ergometer, treadmill, and the Resistive Exercise Device available on the International Space Station (ISS), the Space Medicine community aspires to reproduce physical loading schemes that match exercise performance in Earth s gravity. The resistive exercise device presents the greatest challenge with the duty of accommodating 20 different exercises and many variations on the core set of exercises. This paper presents a methodology for capturing engineering parameters that can quantify proper resistive exercise performance techniques. For each specified exercise, the method provides engineering parameters on hand spacing, foot spacing, and positions of the point of load application at the starting point, midpoint, and end point of the exercise. As humans vary in height and fitness levels, the methodology presents values as ranges. In addition, this method shows engineers the proper load application regions on the human body. The methodology applies to resistive exercise in general and is in use for the current development of a Resistive Exercise Device. Exercise hardware systems must remain available for use and conducive to proper exercise performance as a contributor to mission success. The astronauts depend on exercise hardware to support extended stays aboard the ISS. Future plans towards exploration of Mars and beyond acknowledge the necessity of exercise. Continuous improvement in technology and our understanding of human health maintenance in space will allow us to support the exploration of Mars and the future of space exploration.
Qualification Testing of Laser Diode Pump Arrays for a Space-Based 2-micron Coherent Doppler Lidar
NASA Technical Reports Server (NTRS)
Amzajerdian, Farzin; Meadows, Byron L.; Baker, Nathaniel R.; Barnes, Bruce W.; Singh, Upendra N.; Kavaya, Michael J.
2007-01-01
The 2-micron thulium and holmium-based lasers being considered as the transmitter source for space-based coherent Doppler lidar require high power laser diode pump arrays operating in a long pulse regime of about 1 msec. Operating laser diode arrays over such long pulses drastically impact their useful lifetime due to the excessive localized heating and substantial pulse-to-pulse thermal cycling of their active regions. This paper describes the long pulse performance of laser diode arrays and their critical thermal characteristics. A viable approach is then offered that allows for determining the optimum operational parameters leading to the maximum attainable lifetime.
Required experimental accuracy to select between supersymmetrical models
NASA Astrophysics Data System (ADS)
Grellscheid, David
2004-03-01
We will present a method to decide a priori whether various supersymmetrical scenarios can be distinguished based on sparticle mass data alone. For each model, a scan over all free SUSY breaking parameters reveals the extent of that model's physically allowed region of sparticle-mass-space. Based on the geometrical configuration of these regions in mass-space, it is possible to obtain an estimate of the required accuracy of future sparticle mass measurements to distinguish between the models. We will illustrate this algorithm with an example. This talk is based on work done in collaboration with B C Allanach (LAPTH, Annecy) and F Quevedo (DAMTP, Cambridge).
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
This manual describes how to use the Emulation Simulation Computer Model (ESCM). Based on G189A, ESCM computes the transient performance of a Space Station atmospheric revitalization subsystem (ARS) with CO2 removal provided by a solid amine water desorbed subsystem called SAWD. Many performance parameters are computed some of which are cabin CO2 partial pressure, relative humidity, temperature, O2 partial pressure, and dew point. The program allows the user to simulate various possible combinations of man loading, metabolic profiles, cabin volumes and certain hypothesized failures that could occur.
Microwave magnetic field detection based on Cs vapor cell in free space
NASA Astrophysics Data System (ADS)
Liu, Xiaochi; Jiang, Zhiyuan; Qu, Jifeng; Hou, Dong; Huang, Xianhe; Sun, Fuyu
2018-06-01
In this study, we demonstrate the direct measurement of a microwave (MW) magnetic field through the detection of atomic Rabi resonances with Cs vapor cells in a free-space low-Q cavity. The line shape (amplitude and linewidth) of detected Rabi resonances is investigated versus several experimental parameters such as the laser intensity, cell buffer gas pressure, and cell length. The specially designed low-Q cavity creates a suitable MW environment allowing easy testing of different vapor cells with distinct properties. Obtained results are analyzed to optimize the performances of a MW magnetic field sensor based on the present atom-based detection technique.
Binzoni, Tiziano; Torricelli, Alessandro; Giust, Remo; Sanguinetti, Bruno; Bernhard, Paul; Spinelli, Lorenzo
2014-01-01
A bone tissue phantom prototype allowing to test, in general, optical flowmeters at large interoptode spacings, such as laser-Doppler flowmetry or diffuse correlation spectroscopy, has been developed by 3D-stereolithography technique. It has been demonstrated that complex tissue vascular systems of any geometrical shape can be conceived. Absorption coefficient, reduced scattering coefficient and refractive index of the optical phantom have been measured to ensure that the optical parameters reasonably reproduce real human bone tissue in vivo. An experimental demonstration of a possible use of the optical phantom, utilizing a laser-Doppler flowmeter, is also presented. PMID:25136496
Broadband impedance-matched electromagnetic structured ferrite composite in the megahertz range
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parke, L.; Hibbins, A. P.; Sambles, J. R.
2014-06-02
A high refractive-index structured ferrite composite is designed to experimentally demonstrate broadband impedance matching to free-space. It consists of an array of ferrite cubes that are anisotropically spaced, thereby allowing for independent control of the effective complex permeability and permittivity. Despite having a refractive index of 9.5, the array gives less than 1% reflection and over 90% transmission of normally incident radiation up to 70 MHz for one of the orthogonal linear polarisations lying in a symmetry plane of the array. This result presents a route to the design of MHz-frequency ferrite composites with bespoke electromagnetic parameters for antenna miniaturisation.
The acoustic environment of a sonoluminescing bubble
NASA Astrophysics Data System (ADS)
Holzfuss, Joachim; Rüggeberg, Matthias; Holt, R. Glynn
2000-07-01
A bubble is levitated in water in a cylindrical resonator which is driven by ultrasound. It has been shown that in a certain region of parameter space the bubble is emitting light pulses (sonoluminescence). One of the properties observed is the enormous spatial stability leaving the bubble "pinned" in space allowing it to emit light with a timing of picosecond accuracy. We argue that the observed stability is due to interactions of the bubble with the resonator. A shock wave emitted at collapse time together with a self generated complex sound field, which is experimentally mapped with high resolution, is responsible for the observed effects.
NASA Astrophysics Data System (ADS)
Fomina, E. V.; Kozhukhova, N. I.; Sverguzova, S. V.; Fomin, A. E.
2018-05-01
In this paper, the regression equations method for design of construction material was studied. Regression and polynomial equations representing the correlation between the studied parameters were proposed. The logic design and software interface of the regression equations method focused on parameter optimization to provide the energy saving effect at the stage of autoclave aerated concrete design considering the replacement of traditionally used quartz sand by coal mining by-product such as argillite. The mathematical model represented by a quadric polynomial for the design of experiment was obtained using calculated and experimental data. This allowed the estimation of relationship between the composition and final properties of the aerated concrete. The surface response graphically presented in a nomogram allowed the estimation of concrete properties in response to variation of composition within the x-space. The optimal range of argillite content was obtained leading to a reduction of raw materials demand, development of target plastic strength of aerated concrete as well as a reduction of curing time before autoclave treatment. Generally, this method allows the design of autoclave aerated concrete with required performance without additional resource and time costs.
Exchange interaction and tunneling-induced transparency in coupled quantum dots
NASA Astrophysics Data System (ADS)
Borges, H. S.; Alcalde, A. M.; Ulloa, Sergio E.
2014-11-01
We investigate the optical response of quantum dot molecules coherently driven by polarized laser light. Our description includes the splitting in excitonic levels caused by isotropic and anisotropic exchange interactions. We consider interdot transitions mediated by hole tunneling between states with the same total angular momentum and between bright and dark exciton states as allowed by spin-flip hopping between the dots in the molecule. Using realistic experimental parameters we demonstrate that the excitonic states coupled by tunneling exhibit a rich and controllable optical response. We show that through the appropriate control of an external electric field and light polarization, the tunneling coupling establishes an efficient destructive quantum interference path that creates a transparency window in the absorption spectra whenever states of appropriate symmetry are mixed by the carrier tunneling. We explore the relevant parameter space that allows probing this phenomenon in experiments. Controlled variation in applied field and laser detuning would allow the optical characterization of spin-preserving and spin-flip hopping amplitudes in such systems by measuring the width of the tunneling-induced transparency windows.
NASA Astrophysics Data System (ADS)
Denardini, Clezio Marcos; Dal Lago, Alisson; Mendes, Odim; Batista, Inez S.; SantAnna, Nilson; Gatto, Rubens; Takahashi, Hisao; Costa, D. Joaquim; Banik Padua, Marcelo; Campos Velho, Haroldo
2016-07-01
On August 2007 the National Institute for Space Research started a task force to develop and operate a space weather program, which is known by the acronyms Embrace that stands for the Portuguese statement "Estudo e Monitoramento BRAasileiro de Clima Espacial" Program (Brazilian Space Weather Study and Monitoring program). The mission of the Embrace/INPE program is to monitor the Solar-Terrestrial environment, the magnetosphere, the upper atmosphere and the ground induced currents to prevent effects on technological and economic activities. The Embrace/INPE system monitors the physical parameters of the Sun-Earth environment, such as Active Regions (AR) in the Sun and solar radiation by using radio telescope, Coronal Mass Ejection (CME) information by satellite and ground-based cosmic ray monitoring, geomagnetic activity by the magnetometer network, and ionospheric disturbance by ionospheric sounders and using data collected by four GPS receiver network, geomagnetic activity by a magnetometer network, and provides a forecasting for Total Electronic Content (TEC) - 24 hours ahead - using a version of the SUPIM model which assimilates the two latter data using nudging approach. Most of these physical parameters are daily published on the Brazilian space weather program web portal, related to the entire network sensors available. Regarding outreach, it has being published a daily bulletin in Portuguese and English with the status of the space weather environment on the Sun, the Interplanetary Medium and close to the Earth. Since December 2011, all these activities are carried out at the Embrace Headquarter, a building located at the INPE's main campus. Recently, a comprehensive data bank and an interface layer are under commissioning to allow an easy and direct access to all the space weather data collected by Embrace through the Embrace web Portal. The information being released encompasses data from: (a) the Embrace Digisonde Network (Embrace DigiNet) that monitors the ionospheric profiles in two equatorial sites and in two low latitude sites; (b) several solar radio telescopes to monitor solar activity (under development); (c) the matrix of the GNSS TEC map over South America; (d) the Embrace Airglow All-sky Imagers Network (Embrace GlowNet); and (d) the Embrace Magnetometer Network (Embrace Magnet), all of them in South America. Also, the system allows subscription to space weather alerts and reports. Contacting Author: C. M. Denardini (clezio.denardin@inpe.br)
Updated global 3+1 analysis of short-baseline neutrino oscillations
NASA Astrophysics Data System (ADS)
Gariazzo, S.; Giunti, C.; Laveder, M.; Li, Y. F.
2017-06-01
We present the results of an updated fit of short-baseline neutrino oscillation data in the framework of 3+1 active-sterile neutrino mixing. We first consider ν e and {\\overline{ν}}_e disappearance in the light of the Gallium and reactor anomalies. We discuss the implications of the recent measurement of the reactor {\\overline{ν}}_e spectrum in the NEOS experiment, which shifts the allowed regions of the parameter space towards smaller values of | U e4|2. The β-decay constraints of the Mainz and Troitsk experiments allow us to limit the oscillation length between about 2 cm and 7 m at 3 σ for neutrinos with an energy of 1 MeV. The corresponding oscillations can be discovered in a model-independent way in ongoing reactor and source experiments by measuring ν e and {\\overline{ν}}_e disappearance as a function of distance. We then consider the global fit of the data on short-baseline {}_{ν_{μ}}^{(-)}{\\to}_{ν_e}^{(-)} transitions in the light of the LSND anomaly, taking into account the constraints from {}_{ν_e}^{(-)} and {}_{ν_{μ}}^{(-)} disappearance experiments, including the recent data of the MINOS and IceCube experiments. The combination of the NEOS constraints on | U e4|2 and the MINOS and IceCube constraints on | U μ4|2 lead to an unacceptable appearance-disappearance tension which becomes tolerable only in a pragmatic fit which neglects the MiniBooNE low-energy anomaly. The minimization of the global χ 2 in the space of the four mixing parameters Δ m 41 2 , | U e4|2, | U μ4|2, and | U τ4|2 leads to three allowed regions with narrow Δ m 41 2 widths at Δ m 41 2 ≈ 1.7 (best-fit), 1.3 (at 2 σ), 2.4 (at 3 σ) eV2. The effective amplitude of short-baseline {}_{ν_{μ}}^{(-)}{\\to}_{ν_e}^{(-)} oscillations is limited by 0.00048 ≲ sin2 2 ϑ eμ ≲ 0.0020 at 3 σ. The restrictions of the allowed regions of the mixing parameters with respect to our previous global fits are mainly due to the NEOS constraints. We present a comparison of the allowed regions of the mixing parameters with the sensitivities of ongoing experiments, which show that it is likely that these experiments will determine in a definitive way if the reactor, Gallium and LSND anomalies are due to active-sterile neutrino oscillations or not.
Estimation of primordial spectrum with post-WMAP 3-year data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafieloo, Arman; Souradeep, Tarun
2008-07-15
In this paper we implement an improved (error-sensitive) Richardson-Lucy deconvolution algorithm on the measured angular power spectrum from the Wilkinson Microwave Anisotropy Probe (WMAP) 3 year data to determine the primordial power spectrum assuming different points in the cosmological parameter space for a flat {lambda}CDM cosmological model. We also present the preliminary results of the cosmological parameter estimation by assuming a free form of the primordial spectrum, for a reasonably large volume of the parameter space. The recovered spectrum for a considerably large number of the points in the cosmological parameter space has a likelihood far better than a 'bestmore » fit' power law spectrum up to {delta}{chi}{sub eff}{sup 2}{approx_equal}-30. We use discrete wavelet transform (DWT) for smoothing the raw recovered spectrum from the binned data. The results obtained here reconfirm and sharpen the conclusion drawn from our previous analysis of the WMAP 1st year data. A sharp cut off around the horizon scale and a bump after the horizon scale seem to be a common feature for all of these reconstructed primordial spectra. We have shown that although the WMAP 3 year data prefers a lower value of matter density for a power law form of the primordial spectrum, for a free form of the spectrum, we can get a very good likelihood to the data for higher values of matter density. We have also shown that even a flat cold dark matter model, allowing a free form of the primordial spectrum, can give a very high likelihood fit to the data. Theoretical interpretation of the results is open to the cosmology community. However, this work provides strong evidence that the data retains discriminatory power in the cosmological parameter space even when there is full freedom in choosing the primordial spectrum.« less
NASA Astrophysics Data System (ADS)
Tyler, R.
2017-12-01
Resonant tidal excitation of an atmosphere will arrive in predictable situations where there is a match in form and frequency between tidal forces and the atmosphere's eigenmodes of oscillation. The resonant response is typically several orders of magnitude more energetic than in non-resonant configurations involving only slight differences in parameters, and the behavior can be quite different because different oscillation modes are favored in each. The work presented provides first a generic description of these resonant states by demonstrating the behavior of solutions within the very large parameter space of potential scenarios. This generic description of the range of atmospheric tidal response scenarios is further used to create a taxonomy for organizing and understanding various tidally driven dynamic regimes. The resonances are easily identified by associated peaks in the power. But because these peaks may be relatively narrow, millions of solutions can be required to complete the description of the solution's dependence over the range of parameter values. (Construction of these large solution spaces is performed using a fast, semi-analytical method that solves the forced, dissipative, Laplace Tidal Equations subject to the constraint of dynamical consistency (through a separation constant) with solutions describing the vertical structure.) Filling in the solution space in this way is used not only to locate the parameter coordinates of resonant scenarios but also to study allowed migration paths through this space. It is suggested that resonant scenarios do not arrive through happenstance but rather because secular variations in parameters make the configuration move into the resonant scenario, with associated feedbacks either accelerating or halting the configuration migration. These results are then used to show strong support for the hypothesis by R. Lindzen that the regular banding (belts/zones/jets) on Jupiter and Saturn are driven by tides. The results also provide important, though less specific, support for a second hypothesis that inflated atmospheres inferred for a number of giant extra-solar planets are due to thermal or gravitational tides.
NASA Technical Reports Server (NTRS)
Troudet, Terry; Merrill, Walter C.
1989-01-01
The ability of feed-forward neural net architectures to learn continuous-valued mappings in the presence of noise is demonstrated in relation to parameter identification and real-time adaptive control applications. Factors and parameters influencing the learning performance of such nets in the presence of noise are identified. Their effects are discussed through a computer simulation of the Back-Error-Propagation algorithm by taking the example of the cart-pole system controlled by a nonlinear control law. Adequate sampling of the state space is found to be essential for canceling the effect of the statistical fluctuations and allowing learning to take place.
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
Continuous-time discrete-space models for animal movement
Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.
2015-01-01
The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.
Interpolation/extrapolation technique with application to hypervelocity impact of space debris
NASA Technical Reports Server (NTRS)
Rule, William K.
1992-01-01
A new technique for the interpolation/extrapolation of engineering data is described. The technique easily allows for the incorporation of additional independent variables, and the most suitable data in the data base is automatically used for each prediction. The technique provides diagnostics for assessing the reliability of the prediction. Two sets of predictions made for known 5-degree-of-freedom, 15-parameter functions using the new technique produced an average coefficient of determination of 0.949. Here, the technique is applied to the prediction of damage to the Space Station from hypervelocity impact of space debris. A new set of impact data is presented for this purpose. Reasonable predictions for bumper damage were obtained, but predictions of pressure wall and multilayer insulation damage were poor.
NASA Technical Reports Server (NTRS)
Lundquist, Ray A.; Leidecker, Henning
1998-01-01
The allowable operating currents of electrical wiring when used in the space vacuum environment is predominantly determined by the maximum operating temperature of the wire insulation. For Kapton insulated wire this value is 200 C. Guidelines provided in the Goddard Space Flight Center (GSFC) Preferred Parts List (PPL) limit the operating current of wire within vacuum to ensure the maximum insulation temperature is not exceeded. For 20 AWG wire, these operating parameters are: 3.7 amps per wire, bundle of 15 or more wires, 70 C environment, and vacuum of 10(exp -5) torr or less. To determine the behavior and temperature of electrical wire at different operating conditions, a thermal vacuum test was performed on a representative electrical harness of the Hubble Space Telescope (HST) power distribution system. This paper describes the test and the results.
TU-AB-BRC-05: Creation of a Monte Carlo TrueBeam Model by Reproducing Varian Phase Space Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Grady, K; Davis, S; Seuntjens, J
Purpose: To create a Varian TrueBeam 6 MV FFF Monte Carlo model using BEAMnrc/EGSnrc that accurately reproduces the Varian representative dataset, followed by tuning the model’s source parameters to accurately reproduce in-house measurements. Methods: A BEAMnrc TrueBeam model for 6 MV FFF has been created by modifying a validated 6 MV Varian CL21EX model. Geometric dimensions and materials were adjusted in a trial and error approach to match the fluence and spectra of TrueBeam phase spaces output by the Varian VirtuaLinac. Once the model’s phase space matched Varian’s counterpart using the default source parameters, it was validated to match 10more » × 10 cm{sup 2} Varian representative data obtained with the IBA CC13. The source parameters were then tuned to match in-house 5 × 5 cm{sup 2} PTW microDiamond measurements. All dose to water simulations included detector models to include the effects of volume averaging and the non-water equivalence of the chamber materials, allowing for more accurate source parameter selection. Results: The Varian phase space spectra and fluence were matched with excellent agreement. The in-house model’s PDD agreement with CC13 TrueBeam representative data was within 0.9% local percent difference beyond the first 3 mm. Profile agreement at 10 cm depth was within 0.9% local percent difference and 1.3 mm distance-to-agreement in the central axis and penumbra regions, respectively. Once the source parameters were tuned, PDD agreement with microDiamond measurements was within 0.9% local percent difference beyond 2 mm. The microDiamond profile agreement at 10 cm depth was within 0.6% local percent difference and 0.4 mm distance-to-agreement in the central axis and penumbra regions, respectively. Conclusion: An accurate in-house Monte Carlo model of the Varian TrueBeam was achieved independently of the Varian phase space solution and was tuned to in-house measurements. KO acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290).« less
ACES: Space shuttle flight software analysis expert system
NASA Technical Reports Server (NTRS)
Satterwhite, R. Scott
1990-01-01
The Analysis Criteria Evaluation System (ACES) is a knowledge based expert system that automates the final certification of the Space Shuttle onboard flight software. Guidance, navigation and control of the Space Shuttle through all its flight phases are accomplished by a complex onboard flight software system. This software is reconfigured for each flight to allow thousands of mission-specific parameters to be introduced and must therefore be thoroughly certified prior to each flight. This certification is performed in ground simulations by executing the software in the flight computers. Flight trajectories from liftoff to landing, including abort scenarios, are simulated and the results are stored for analysis. The current methodology of performing this analysis is repetitive and requires many man-hours. The ultimate goals of ACES are to capture the knowledge of the current experts and improve the quality and reduce the manpower required to certify the Space Shuttle onboard flight software.
Compact field color schlieren system for use in microgravity materials processing
NASA Technical Reports Server (NTRS)
Poteet, W. M.; Owen, R. B.
1986-01-01
A compact color schlieren system designed for field measurement of materials processing parameters has been built and tested in a microgravity environment. Improvements in the color filter design and a compact optical arrangement allowed the system described here to retain the traditional advantages of schlieren, such as simplicity, sensitivity, and ease of data interpretation. Testing was accomplished by successfully flying the instrument on a series of parabolic trajectories on the NASA KC-135 microgravity simulation aircraft. A variety of samples of interest in materials processing were examined. Although the present system was designed for aircraft use, the technique is well suited to space flight experimentation. A major goal of this effort was to accommodate the main optical system within a volume approximately equal to that of a Space Shuttle middeck locker. Future plans include the development of an automated space-qualified facility for use on the Shuttle and Space Station.
METEOSAT studies of clouds and radiation budget
NASA Technical Reports Server (NTRS)
Saunders, R. W.
1982-01-01
Radiation budget studies of the atmosphere/surface system from Meteosat, cloud parameter determination from space, and sea surface temperature measurements from TIROS N data are all described. This work was carried out on the interactive planetary image processing system (IPIPS), which allows interactive manipulationion of the image data in addition to the conventional computational tasks. The current hardware configuration of IPIPS is shown. The I(2)S is the principal interactive display allowing interaction via a trackball, four buttons under program control, or a touch tablet. Simple image processing operations such as contrast enhancing, pseudocoloring, histogram equalization, and multispectral combinations, can all be executed at the push of a button.
Enhanced di-Higgs boson production in the complex Higgs singlet model
Dawson, S.; Sullivan, M.
2018-01-31
Here, we consider the standard model (SM) extended by the addition of a complex scalar singlet, with no assumptions about additional symmetries of the potential. This model provides for resonant di-Higgs production of Higgs particles with different masses. We demonstrate that regions of parameter space allowed by precision electroweak measurements, experimental limits on single Higgs production, and perturbative unitarity allow for large di-Higgs production rates relative to the SM rates. In this scenario, the dominant production mechanism of the new scalar states is di-Higgs production. Results are presented formore » $$\\sqrt{s}$$ = 13, 27 and 100 TeV.« less
NASA Astrophysics Data System (ADS)
Khodasevich, M. A.; Sinitsyn, G. V.; Skorbanova, E. A.; Rogovaya, M. V.; Kambur, E. I.; Aseev, V. A.
2016-06-01
Analysis of multiparametric data on transmission spectra of 24 divins (Moldovan cognacs) in the 190-2600 nm range allows identification of outliers and their removal from a sample under study in the following consideration. The principal component analysis and classification tree with a single-rank predictor constructed in the 2D space of principal components allow classification of divin manufacturers. It is shown that the accuracy of syringaldehyde, ethyl acetate, vanillin, and gallic acid concentrations in divins calculated with the regression to latent structures depends on the sample volume and is 3, 6, 16, and 20%, respectively, which is acceptable for the application.
Enhanced di-Higgs boson production in the complex Higgs singlet model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, S.; Sullivan, M.
Here, we consider the standard model (SM) extended by the addition of a complex scalar singlet, with no assumptions about additional symmetries of the potential. This model provides for resonant di-Higgs production of Higgs particles with different masses. We demonstrate that regions of parameter space allowed by precision electroweak measurements, experimental limits on single Higgs production, and perturbative unitarity allow for large di-Higgs production rates relative to the SM rates. In this scenario, the dominant production mechanism of the new scalar states is di-Higgs production. Results are presented formore » $$\\sqrt{s}$$ = 13, 27 and 100 TeV.« less
Designing a space-based galaxy redshift survey to probe dark energy
NASA Astrophysics Data System (ADS)
Wang, Yun; Percival, Will; Cimatti, Andrea; Mukherjee, Pia; Guzzo, Luigi; Baugh, Carlton M.; Carbone, Carmelita; Franzetti, Paolo; Garilli, Bianca; Geach, James E.; Lacey, Cedric G.; Majerotto, Elisabetta; Orsi, Alvaro; Rosati, Piero; Samushia, Lado; Zamorani, Giovanni
2010-12-01
A space-based galaxy redshift survey would have enormous power in constraining dark energy and testing general relativity, provided that its parameters are suitably optimized. We study viable space-based galaxy redshift surveys, exploring the dependence of the Dark Energy Task Force (DETF) figure-of-merit (FoM) on redshift accuracy, redshift range, survey area, target selection and forecast method. Fitting formulae are provided for convenience. We also consider the dependence on the information used: the full galaxy power spectrum P(k), P(k) marginalized over its shape, or just the Baryon Acoustic Oscillations (BAO). We find that the inclusion of growth rate information (extracted using redshift space distortion and galaxy clustering amplitude measurements) leads to a factor of ~3 improvement in the FoM, assuming general relativity is not modified. This inclusion partially compensates for the loss of information when only the BAO are used to give geometrical constraints, rather than using the full P(k) as a standard ruler. We find that a space-based galaxy redshift survey covering ~20000deg2 over with σz/(1 + z) <= 0.001 exploits a redshift range that is only easily accessible from space, extends to sufficiently low redshifts to allow both a vast 3D map of the universe using a single tracer population, and overlaps with ground-based surveys to enable robust modelling of systematic effects. We argue that these parameters are close to their optimal values given current instrumental and practical constraints.
An Integrated Approach to Parameter Learning in Infinite-Dimensional Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, Zachary M.; Wendelberger, Joanne Roth
The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated processes, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, being able to navigate parameter-space becomes highly non-trivial, especially considering that accurate simulations can be expensive both in terms of time and money. Existing solutions include batch-parallel simulations,more » high-dimensional, derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way as well as viewing the associated principle components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and more quickly arrive at the desired parameter set.« less
Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-01-01
Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation. PMID:26150807
Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-01-01
Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation.
Effective theories of universal theories
Wells, James D.; Zhang, Zhengkang
2016-01-20
It is well-known but sometimes overlooked that constraints on the oblique parameters (most notably S and T parameters) are generally speaking only applicable to a special class of new physics scenarios known as universal theories. The oblique parameters should not be associated with Wilson coefficients in a particular operator basis in the effective field theory (EFT) framework, unless restrictions have been imposed on the EFT so that it describes universal theories. Here, we work out these restrictions, and present a detailed EFT analysis of universal theories. We find that at the dimension-6 level, universal theories are completely characterized by 16more » parameters. They are conveniently chosen to be: 5 oblique parameters that agree with the commonly-adopted ones, 4 anomalous triple-gauge couplings, 3 rescaling factors for the h 3, hff, hV V vertices, 3 parameters for hV V vertices absent in the Standard Model, and 1 four-fermion coupling of order yf 2. Furthermore, all these parameters are defined in an unambiguous and basis-independent way, allowing for consistent constraints on the universal theories parameter space from precision electroweak and Higgs data.« less
Effective theories of universal theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, James D.; Zhang, Zhengkang
It is well-known but sometimes overlooked that constraints on the oblique parameters (most notably S and T parameters) are generally speaking only applicable to a special class of new physics scenarios known as universal theories. The oblique parameters should not be associated with Wilson coefficients in a particular operator basis in the effective field theory (EFT) framework, unless restrictions have been imposed on the EFT so that it describes universal theories. Here, we work out these restrictions, and present a detailed EFT analysis of universal theories. We find that at the dimension-6 level, universal theories are completely characterized by 16more » parameters. They are conveniently chosen to be: 5 oblique parameters that agree with the commonly-adopted ones, 4 anomalous triple-gauge couplings, 3 rescaling factors for the h 3, hff, hV V vertices, 3 parameters for hV V vertices absent in the Standard Model, and 1 four-fermion coupling of order yf 2. Furthermore, all these parameters are defined in an unambiguous and basis-independent way, allowing for consistent constraints on the universal theories parameter space from precision electroweak and Higgs data.« less
Cost and Economics for Advanced Launch Vehicles
NASA Technical Reports Server (NTRS)
Whitfield, Jeff
1998-01-01
Market sensitivity and weight-based cost estimating relationships are key drivers in determining the financial viability of advanced space launch vehicle designs. Due to decreasing space transportation budgets and increasing foreign competition, it has become essential for financial assessments of prospective launch vehicles to be performed during the conceptual design phase. As part of this financial assessment, it is imperative to understand the relationship between market volatility, the uncertainty of weight estimates, and the economic viability of an advanced space launch vehicle program. This paper reports the results of a study that evaluated the economic risk inherent in market variability and the uncertainty of developing weight estimates for an advanced space launch vehicle program. The purpose of this study was to determine the sensitivity of a business case for advanced space flight design with respect to the changing nature of market conditions and the complexity of determining accurate weight estimations during the conceptual design phase. The expected uncertainty associated with these two factors drives the economic risk of the overall program. The study incorporates Monte Carlo simulation techniques to determine the probability of attaining specific levels of economic performance when the market and weight parameters are allowed to vary. This structured approach toward uncertainties allows for the assessment of risks associated with a launch vehicle program's economic performance. This results in the determination of the value of the additional risk placed on the project by these two factors.
Using state-space models to predict the abundance of juvenile and adult sea lice on Atlantic salmon.
Elghafghuf, Adel; Vanderstichel, Raphael; St-Hilaire, Sophie; Stryhn, Henrik
2018-04-11
Sea lice are marine parasites affecting salmon farms, and are considered one of the most costly pests of the salmon aquaculture industry. Infestations of sea lice on farms significantly increase opportunities for the parasite to spread in the surrounding ecosystem, making control of this pest a challenging issue for salmon producers. The complexity of controlling sea lice on salmon farms requires frequent monitoring of the abundance of different sea lice stages over time. Industry-based data sets of counts of lice are amenable to multivariate time-series data analyses. In this study, two sets of multivariate autoregressive state-space models were applied to Chilean sea lice data from six Atlantic salmon production cycles on five isolated farms (at least 20 km seaway distance away from other known active farms), to evaluate the utility of these models for predicting sea lice abundance over time on farms. The models were constructed with different parameter configurations, and the analysis demonstrated large heterogeneity between production cycles for the autoregressive parameter, the effects of chemotherapeutant bath treatments, and the process-error variance. A model allowing for different parameters across production cycles had the best fit and the smallest overall prediction errors. However, pooling information across cycles for the drift and observation error parameters did not substantially affect model performance, thus reducing the number of necessary parameters in the model. Bath treatments had strong but variable effects for reducing sea lice burdens, and these effects were stronger for adult lice than juvenile lice. Our multivariate state-space models were able to handle different sea lice stages and provide predictions for sea lice abundance with reasonable accuracy up to five weeks out. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
The modern trends in space electromagnetic instrumentation
NASA Astrophysics Data System (ADS)
Korepanov, V.
The future trends of the experimental plasma physical development in outer space demands more and more exact and sophisticated scientific instrumentation. Moreover, the situation is complicated by constant reducing of financial support of scientific research, even in leading countries. This resulted in the development of mini, micro and nanosatellites with low price and short preparation time. Consequently, it provoked the creation of new generation of scientific instruments with reduced weight and power consumption but increased level of metrological parameters. The recent state of the development of electromagnetic (EM) sensors for microsatellites is reported. The set of EM sensors produced at LCISR includes following devices. Flux-gate magnetometers (FGM). The reduction of new of satellite versions FGM weight as well as power consumption was achieved not only due to the use of new electronic components but also because the development of new operation modes. To this the scientific and technological study allowed to decrease FGM noise and now typical figure is about 10 picotesla rms at 1 Hz and the record one is below 1 picotesla. Also because of satellite weight reduction the possibility was studied to use FGM only for satellite attitude control. The magnetic orientation and stabilization system was developed and new FGM for orientation was created. It uses industrial components and special measures are taken to increase its reliability. Search-coil magnetometers (SCM). The super-light version of SCM was created as the result of intensive scientific and technological research. These new SCMs can have about six decades operational frequency band noise with upper limit ~ 1 MHz and noise level of few femtotesla with total weight about 75 grams. Electric probes (EP). The study of operation condition of EP immersed in space plasma allowed to find the possibilities to decrease the EP weight conserving the same noise factor. Two types of EP operating from DC and from 0,1 Hz are created and successfully tested. Wave probe (WP). The WP is another kind of instrument which combines in one sensor three independent sensors: SCM, EP and split Langmuir probe. Such a combination allowed to create a principally new instru ment U wave probe. The developed theory confirms that WP can directly measure vector components in space plasma. All these space sensors are described and their experimentally obtained parameters are presented. This work was partially supported by INTAS grant 2000-465.
NASA Astrophysics Data System (ADS)
Balin Talamba, D.; Higy, C.; Joerin, C.; Musy, A.
The paper presents an application concerning the hydrological modelling for the Haute-Mentue catchment, located in western Switzerland. A simplified version of Topmodel, developed in a Labview programming environment, was applied in the aim of modelling the hydrological processes on this catchment. Previous researches car- ried out in this region outlined the importance of the environmental tracers in studying the hydrological behaviour and an important knowledge has been accumulated dur- ing this period concerning the mechanisms responsible for runoff generation. In con- formity with the theoretical constraints, Topmodel was applied for an Haute-Mentue sub-catchment where tracing experiments showed constantly low contributions of the soil water during the flood events. The model was applied for two humid periods in 1998. First, the model calibration was done in order to provide the best estimations for the total runoff. Instead, the simulated components (groundwater and rapid flow) showed far deviations from the reality indicated by the tracing experiments. Thus, a new calibration was performed including additional information given by the environ- mental tracing. The calibration of the model was done by using simulated annealing (SA) techniques, which are easy to implement and statistically allow for converging to a global minimum. The only problem is that the method is time and computer consum- ing. To improve this, a version of SA was used which is known as very fast-simulated annealing (VFSA). The principles are the same as for the SA technique. The random search is guided by certain probability distribution and the acceptance criterion is the same as for SA but the VFSA allows for better taking into account the ranges of vari- ation of each parameter. Practice with Topmodel showed that the energy function has different sensitivities along different dimensions of the parameter space. The VFSA algorithm allows differentiated search in relation with the sensitivity of the param- eters. The environmental tracing was used in the aim of constraining the parameter space in order to better simulate the hydrological behaviour of the catchment. VFSA outlined issues for characterising the significance of Topmodel input parameters as well as their uncertainty for the hydrological modelling.
A Test of General Relativity with MESSENGER Mission Data
NASA Astrophysics Data System (ADS)
Genova, A.; Mazarico, E.; Goossens, S. J.; Lemoine, F. G.; Neumann, G. A.; Nicholas, J. B.; Rowlands, D. D.; Smith, D. E.; Zuber, M. T.; Solomon, S. C.
2016-12-01
The MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft initiated collection of scientific data from the innermost planet during its first flyby of Mercury in January 2008. After two additional Mercury flybys, MESSENGER was inserted into orbit around Mercury on 18 March 2011 and operated for more than four Earth years through 30 April 2015. Data acquired during the flyby and orbital phases have provided crucial information on the formation and evolution of Mercury. The Mercury Laser Altimeter (MLA) and the radio science system, for example, obtained geodetic observations of the topography, gravity field, orientation, and tides of Mercury, which helped constrain its surface and deep interior structure. X-band radio tracking data collected by the NASA Deep Space Network (DSN) allowed the determination of Mercury's gravity field to spherical harmonic degree and order 100, as well as refinement of the planet's obliquity and estimation of the tidal Love number k2. These geophysical parameters are derived from the range-rate observables that measure precisely the motion of the spacecraft in orbit around the planet. However, the DSN stations acquired two other kinds of radio tracking data, range and delta-differential one-way ranging, which also provided precise measurements of Mercury's ephemeris. The proximity of Mercury's orbit to the Sun leads to a significant perihelion precession, which was used by Einstein as confirmation of general relativity (GR) because of its inconsistency with the effects predicted from classical Newtonian theory. MESSENGER data allow the estimation of the GR parameterized post-Newtonian (PPN) coefficients γ and β. Furthermore, determination of Mercury's orbit also allows estimation of the gravitational parameter (GM) and the flattening (J2) of the Sun. We modified our orbit determination software, NASA GSFC's GEODYN II, to enable simultaneous orbit integration of both MESSENGER and the planet Mercury. The combined estimation of both orbits leads to a more accurate estimation of Mercury's gravity field, orientation, and tides. Results for these geophysical parameters, GM and J2 for the Sun, and the PPN parameters constitute updates for all of these quantities.
Using cystoscopy to segment bladder tumors with a multivariate approach in different color spaces.
Freitas, Nuno R; Vieira, Pedro M; Lima, Estevao; Lima, Carlos S
2017-07-01
Nowadays the diagnosis of bladder lesions relies upon cystoscopy examination and depends on the interpreter's experience. State of the art of bladder tumor identification are based on 3D reconstruction, using CT images (Virtual Cystoscopy) or images where the structures are exalted with the use of pigmentation, but none uses white light cystoscopy images. An initial attempt to automatically identify tumoral tissue was already developed by the authors and this paper will develop this idea. Traditional cystoscopy images processing has a huge potential to improve early tumor detection and allows a more effective treatment. In this paper is described a multivariate approach to do segmentation of bladder cystoscopy images, that will be used to automatically detect and improve physician diagnose. Each region can be assumed as a normal distribution with specific parameters, leading to the assumption that the distribution of intensities is a Gaussian Mixture Model (GMM). Region of high grade and low grade tumors, usually appears with higher intensity than normal regions. This paper proposes a Maximum a Posteriori (MAP) approach based on pixel intensities read simultaneously in different color channels from RGB, HSV and CIELab color spaces. The Expectation-Maximization (EM) algorithm is used to estimate the best multivariate GMM parameters. Experimental results show that the proposed method does bladder tumor segmentation into two classes in a more efficient way in RGB even in cases where the tumor shape is not well defined. Results also show that the elimination of component L from CIELab color space does not allow definition of the tumor shape.
The HelCat dual-source plasma device.
Lynn, Alan G; Gilmore, Mark; Watts, Christopher; Herrea, Janis; Kelly, Ralph; Will, Steve; Xie, Shuangwei; Yan, Lincan; Zhang, Yue
2009-10-01
The HelCat (Helicon-Cathode) device has been constructed to support a broad range of basic plasma science experiments relevant to the areas of solar physics, laboratory astrophysics, plasma nonlinear dynamics, and turbulence. These research topics require a relatively large plasma source capable of operating over a broad region of parameter space with a plasma duration up to at least several milliseconds. To achieve these parameters a novel dual-source system was developed utilizing both helicon and thermionic cathode sources. Plasma parameters of n(e) approximately 0.5-50 x 10(18) m(-3) and T(e) approximately 3-12 eV allow access to a wide range of collisionalities important to the research. The HelCat device and initial characterization of plasma behavior during dual-source operation are described.
Optics Program Simplifies Analysis and Design
NASA Technical Reports Server (NTRS)
2007-01-01
Engineers at Goddard Space Flight Center partnered with software experts at Mide Technology Corporation, of Medford, Massachusetts, through a Small Business Innovation Research (SBIR) contract to design the Disturbance-Optics-Controls-Structures (DOCS) Toolbox, a software suite for performing integrated modeling for multidisciplinary analysis and design. The DOCS Toolbox integrates various discipline models into a coupled process math model that can then predict system performance as a function of subsystem design parameters. The system can be optimized for performance; design parameters can be traded; parameter uncertainties can be propagated through the math model to develop error bounds on system predictions; and the model can be updated, based on component, subsystem, or system level data. The Toolbox also allows the definition of process parameters as explicit functions of the coupled model and includes a number of functions that analyze the coupled system model and provide for redesign. The product is being sold commercially by Nightsky Systems Inc., of Raleigh, North Carolina, a spinoff company that was formed by Mide specifically to market the DOCS Toolbox. Commercial applications include use by any contractors developing large space-based optical systems, including Lockheed Martin Corporation, The Boeing Company, and Northrup Grumman Corporation, as well as companies providing technical audit services, like General Dynamics Corporation
Science with the space-based interferometer LISA. V. Extreme mass-ratio inspirals
NASA Astrophysics Data System (ADS)
Babak, Stanislav; Gair, Jonathan; Sesana, Alberto; Barausse, Enrico; Sopuerta, Carlos F.; Berry, Christopher P. L.; Berti, Emanuele; Amaro-Seoane, Pau; Petiteau, Antoine; Klein, Antoine
2017-05-01
The space-based Laser Interferometer Space Antenna (LISA) will be able to observe the gravitational-wave signals from systems comprised of a massive black hole and a stellar-mass compact object. These systems are known as extreme-mass-ratio inspirals (EMRIs) and are expected to complete ˜1 04- 1 05 cycles in band, thus allowing exquisite measurements of their parameters. In this work, we attempt to quantify the astrophysical uncertainties affecting the predictions for the number of EMRIs detectable by LISA, and find that competing astrophysical assumptions produce a variance of about three orders of magnitude in the expected intrinsic EMRI rate. However, we find that irrespective of the astrophysical model, at least a few EMRIs per year should be detectable by the LISA mission, with up to a few thousands per year under the most optimistic astrophysical assumptions. We also investigate the precision with which LISA will be able to extract the parameters of these sources. We find that typical fractional statistical errors with which the intrinsic parameters (redshifted masses, massive black hole spin and orbital eccentricity) can be recovered are ˜10-6- 10-4 . Luminosity distance (which is required to infer true masses) is inferred to about 10% precision and sky position is localized to a few square degrees, while tests of the multipolar structure of the Kerr metric can be performed to percent-level precision or better.
Parameter retrieval of chiral metamaterials based on the state-space approach.
Zarifi, Davoud; Soleimani, Mohammad; Abdolali, Ali
2013-08-01
This paper deals with the introduction of an approach for the electromagnetic characterization of homogeneous chiral layers. The proposed method is based on the state-space approach and properties of a 4×4 state transition matrix. Based on this, first, the forward problem analysis through the state-space method is reviewed and properties of the state transition matrix of a chiral layer are presented and proved as two theorems. The formulation of a proposed electromagnetic characterization method is then presented. In this method, scattering data for a linearly polarized plane wave incident normally on a homogeneous chiral slab are combined with properties of a state transition matrix and provide a powerful characterization method. The main difference with respect to other well-established retrieval procedures based on the use of the scattering parameters relies on the direct computation of the transfer matrix of the slab as opposed to the conventional calculation of the propagation constant and impedance of the modes supported by the medium. The proposed approach allows avoiding nonlinearity of the problem but requires getting enough equations to fulfill the task which was provided by considering some properties of the state transition matrix. To demonstrate the applicability and validity of the method, the constitutive parameters of two well-known dispersive chiral metamaterial structures at microwave frequencies are retrieved. The results show that the proposed method is robust and reliable.
NASA Astrophysics Data System (ADS)
Walker, W.; Ardebili, H.
2014-12-01
Lithium-ion batteries (LIBs) are replacing the Nickel-Hydrogen batteries used on the International Space Station (ISS). Knowing that LIB efficiency and survivability are greatly influenced by temperature, this study focuses on the thermo-electrochemical analysis of LIBs in space orbit. Current finite element modeling software allows for advanced simulation of the thermo-electrochemical processes; however the heat transfer simulation capabilities of said software suites do not allow for the extreme complexities of orbital-space environments like those experienced by the ISS. In this study, we have coupled the existing thermo-electrochemical models representing heat generation in LIBs during discharge cycles with specialized orbital-thermal software, Thermal Desktop (TD). Our model's parameters were obtained from a previous thermo-electrochemical model of a 185 Amp-Hour (Ah) LIB with 1-3 C (C) discharge cycles for both forced and natural convection environments at 300 K. Our TD model successfully simulates the temperature vs. depth-of-discharge (DOD) profiles and temperature ranges for all discharge and convection variations with minimal deviation through the programming of FORTRAN logic representing each variable as a function of relationship to DOD. Multiple parametrics were considered in a second and third set of cases whose results display vital data in advancing our understanding of accurate thermal modeling of LIBs.
NASA Technical Reports Server (NTRS)
Schmidt, Phillip; Garg, Sanjay
1991-01-01
A framework for a decentralized hierarchical controller partitioning structure is developed. This structure allows for the design of separate airframe and propulsion controllers which, when assembled, will meet the overall design criterion for the integrated airframe/propulsion system. An algorithm based on parameter optimization of the state-space representation for the subsystem controllers is described. The algorithm is currently being applied to an integrated flight propulsion control design example.
Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis
Beniwal, Ankit; Lewicki, Marek; Wells, James D.; ...
2017-08-23
We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.
Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis
NASA Astrophysics Data System (ADS)
Beniwal, Ankit; Lewicki, Marek; Wells, James D.; White, Martin; Williams, Anthony G.
2017-08-01
We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. We discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.
Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beniwal, Ankit; Lewicki, Marek; Wells, James D.
We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.
Multidisciplinary Analysis of the NEXUS Precursor Space Telescope
NASA Astrophysics Data System (ADS)
de Weck, Olivier L.; Miller, David W.; Mosier, Gary E.
2002-12-01
A multidisciplinary analysis is demonstrated for the NEXUS space telescope precursor mission. This mission was originally designed as an in-space technology testbed for the Next Generation Space Telescope (NGST). One of the main challenges is to achieve a very tight pointing accuracy with a sub-pixel line-of-sight (LOS) jitter budget and a root-mean-square (RMS) wavefront error smaller than λ/50 despite the presence of electronic and mechanical disturbances sources. The analysis starts with the assessment of the performance for an initial design, which turns out not to meet the requirements. Twentyfive design parameters from structures, optics, dynamics and controls are then computed in a sensitivity and isoperformance analysis, in search of better designs. Isoperformance allows finding an acceptable design that is well "balanced" and does not place undue burden on a single subsystem. An error budget analysis shows the contributions of individual disturbance sources. This paper might be helpful in analyzing similar, innovative space telescope systems in the future.
Strength of Zerodur® for mirror applications
NASA Astrophysics Data System (ADS)
Béhar-Lafenêtre, S.; Cornillon, Laurence; Ait-Zaid, Sonia
2015-09-01
Zerodur® is a well-known glass-ceramic used for optical components because of its unequalled dimensional stability under thermal environment. In particular it has been used since decades in Thales Alenia Space's optical payloads for space telescopes, especially for mirrors. The drawback of Zerodur® is however its quite low strength, but the relatively small size of mirrors in the past had made it unnecessary to further investigate this aspect, although elementary tests have always shown higher failure strength. As performance of space telescopes is increasing, the size of mirrors increases accordingly, and an optimization of the design is necessary, mainly for mass saving. Therefore the question of the effective strength of Zerodur® has become a real issue. Thales Alenia Space has investigated the application of the Weibull law and associated size effects on Zerodur® in 2014, under CNES funding, through a thorough test campaign with a high number of samples (300) of various types. The purpose was to accurately determine the parameters of the Weibull law for Zerodur® when machined in the same conditions as mirrors. The proposed paper will discuss the obtained results, in the light of the Weibull theory. The applicability of the 2-parameter and 3-parameter (with threshold strength) laws will be compared. The expected size effect has not been evidenced therefore some investigations are led to determine the reasons of this result, from the test implementation quality to the data post-processing methodology. However this test campaign has already provided enough data to safely increase the allowable value for mirrors sizing.
Population Synthesis of Radio & Gamma-Ray Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Frederick, Sara; Gonthier, P. L.; Harding, A. K.
2014-01-01
In recent years, the number of known gamma-ray millisecond pulsars (MSPs) in the Galactic disk has risen substantially thanks to confirmed detections by Fermi Gamma-ray Space Telescope (Fermi). We have developed a new population synthesis of gamma-ray and radio MSPs in the galaxy which uses Markov Chain Monte Carlo techniques to explore the large and small worlds of the model parameter space and allows for comparisons of the simulated and detected MSP distributions. The simulation employs empirical radio and gamma-ray luminosity models that are dependent upon the pulsar period and period derivative with freely varying exponents. Parameters associated with the birth distributions are also free to vary. The computer code adjusts the magnitudes of the model luminosities to reproduce the number of MSPs detected by a group of ten radio surveys, thus normalizing the simulation and predicting the MSP birth rates in the Galaxy. Computing many Markov chains leads to preferred sets of model parameters that are further explored through two statistical methods. Marginalized plots define confidence regions in the model parameter space using maximum likelihood methods. A secondary set of confidence regions is determined in parallel using Kuiper statistics calculated from comparisons of cumulative distributions. These two techniques provide feedback to affirm the results and to check for consistency. Radio flux and dispersion measure constraints have been imposed on the simulated gamma-ray distributions in order to reproduce realistic detection conditions. The simulated and detected distributions agree well for both sets of radio and gamma-ray pulsar characteristics, as evidenced by our various comparisons.
Using Natural Language to Enable Mission Managers to Control Multiple Heterogeneous UAVs
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Puig-Navarro, Javier; Mehdi, S. Bilal; Mcquarry, A. Kyle
2016-01-01
The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for commercial activities. This research is developing methods beyond classical control-stick pilot inputs, to allow operators to manage complex missions without in-depth vehicle expertise. These missions may entail several heterogeneous UAVs flying coordinated patterns or flying multiple trajectories deconflicted in time or space to predefined locations. This paper describes the functionality and preliminary usability measures of an interface that allows an operator to define a mission using speech inputs. With a defined and simple vocabulary, operators can input the vast majority of mission parameters using simple, intuitive voice commands. Although the operator interface is simple, it is based upon autonomous algorithms that allow the mission to proceed with minimal input from the operator. This paper also describes these underlying algorithms that allow an operator to manage several UAVs.
Changes of Space Debris Orbits After LDR Operation
NASA Astrophysics Data System (ADS)
Wnuk, E.; Golebiewska, J.; Jacquelard, C.; Haag, H.
2013-09-01
A lot of technical studies are currently developing concepts of active removal of space debris to protect space assets from on orbit collision. For small objects, such concepts include the use of ground-based lasers to remove or reduce the momentum of the objects thereby lowering their orbit in order to facilitate their decay by re-entry into the Earth's atmosphere. The concept of the Laser Debris Removal (LDR) system is the main subject of the CLEANSPACE project. One of the CLEANSPACE objectives is to define a global architecture (including surveillance, identification and tracking) for an innovative ground-based laser solution, which can remove hazardous medium debris around selected space assets. The CLEANSPACE project is realized by a European consortium in the frame of the European Commission Seventh Framework Programme (FP7), Space topic. The use of sequence of laser operations to remove space debris, needs very precise predictions of future space debris orbital positions, on a level even better than 1 meter. Orbit determination, tracking (radar, optical and laser) and orbit prediction have to be performed with accuracy much better than so far. For that, the applied prediction tools have to take into account all perturbation factors that influence object orbit. The expected object's trajectory after the LDR operation is a lowering of its perigee. To prevent the debris with this new trajectory to collide with another object, a precise trajectory prediction after the LDR sequence is therefore the main task allowing also to estimate re-entry parameters. The LDR laser pulses change the debris object velocity v. The future orbit and re-entry parameters of the space debris after the LDR engagement can be calculated if the resulting ?v vector is known with the sufficient accuracy. The value of the ?v may be estimated from the parameters of the LDR station and from the characteristics of the orbital debris. However, usually due to the poor knowledge of the debris object's size, mass, spin and chemical composition the value and the direction of the vector ?v cannot be estimated with the high accuracy. Therefore, a high precise tracking of the debris will be necessary immediately before the engagement of the LDR and also during this engagement. By extending this tracking and ranging for a few seconds after engagement, the necessary data to evaluate the orbital modification can be produced in the same way as it is done for the catalogue generation. In our paper we discuss the object's orbit changes due to LDR operation for different locations of LDR station and different parameters of the laser energy and telescope diameter. We estimate the future orbit and re-entry parameters taking into account the influence of all important perturbation factors on the space debris orbital motion after LDR.
Hernández, Oscar E; Zurek, Eduardo E
2013-05-15
We present a software tool called SENB, which allows the geometric and biophysical neuronal properties in a simple computational model of a Hodgkin-Huxley (HH) axon to be changed. The aim of this work is to develop a didactic and easy-to-use computational tool in the NEURON simulation environment, which allows graphical visualization of both the passive and active conduction parameters and the geometric characteristics of a cylindrical axon with HH properties. The SENB software offers several advantages for teaching and learning electrophysiology. First, SENB offers ease and flexibility in determining the number of stimuli. Second, SENB allows immediate and simultaneous visualization, in the same window and time frame, of the evolution of the electrophysiological variables. Third, SENB calculates parameters such as time and space constants, stimuli frequency, cellular area and volume, sodium and potassium equilibrium potentials, and propagation velocity of the action potentials. Furthermore, it allows the user to see all this information immediately in the main window. Finally, with just one click SENB can save an image of the main window as evidence. The SENB software is didactic and versatile, and can be used to improve and facilitate the teaching and learning of the underlying mechanisms in the electrical activity of an axon using the biophysical properties of the squid giant axon.
Global dynamics for switching systems and their extensions by linear differential equations
NASA Astrophysics Data System (ADS)
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-01
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
Global dynamics for switching systems and their extensions by linear differential equations.
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-15
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi
2011-08-01
Mapping algorithm which means which core should be linked to which router is one of the key issues in the design flow of network-on-chip. To achieve an application-specific NoC design procedure that minimizes the communication cost and improves the fault tolerant property, first a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is presented. This algorithm allows the designers to identify the set of most promising solutions in a large design space, which has low communication costs while yielding optimum communication costs in some cases. Another evaluated parameter, vulnerability index, is then considered as a principle of estimating the fault-tolerance property in all produced mappings. Finally, in order to yield a mapping which considers trade-offs between these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
Nonlocal Gravity and Structure in the Universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dodelson, Scott; Park, Sohyun
2014-08-26
The observed acceleration of the Universe can be explained by modifying general relativity. One such attempt is the nonlocal model of Deser and Woodard. Here we fix the background cosmology using results from the Planck satellite and examine the predictions of nonlocal gravity for the evolution of structure in the universe, confronting the model with three tests: gravitational lensing, redshift space distortions, and the estimator of gravitymore » $$E_G$$. Current data favor general relativity (GR) over nonlocal gravity: fixing primordial cosmology with the best fit parameters from Planck leads to weak lensing results favoring GR by 5.9 sigma; redshift space distortions measurements of the growth rate preferring GR by 7.8 sigma; and the single measurement of $$E_G$$ favoring GR, but by less than 1-sigma. The significance holds up even after the parameters are allowed to vary within Planck limits. The larger lesson is that a successful modified gravity model will likely have to suppress the growth of structure compared to general relativity.« less
Zhu, Yifan; Hu, Jie; Fan, Xudong; Yang, Jing; Liang, Bin; Zhu, Xuefeng; Cheng, Jianchun
2018-04-24
The fine manipulation of sound fields is critical in acoustics yet is restricted by the coupled amplitude and phase modulations in existing wave-steering metamaterials. Commonly, unavoidable losses make it difficult to control coupling, thereby limiting device performance. Here we show the possibility of tailoring the loss in metamaterials to realize fine control of sound in three-dimensional (3D) space. Quantitative studies on the parameter dependence of reflection amplitude and phase identify quasi-decoupled points in the structural parameter space, allowing arbitrary amplitude-phase combinations for reflected sound. We further demonstrate the significance of our approach for sound manipulation by producing self-bending beams, multifocal focusing, and a single-plane two-dimensional hologram, as well as a multi-plane 3D hologram with quality better than the previous phase-controlled approach. Our work provides a route for harnessing sound via engineering the loss, enabling promising device applications in acoustics and related fields.
Variation in the modal parameters of space structures
NASA Technical Reports Server (NTRS)
Crawley, Edward F.; Barlow, Mark S.; Van Schoor, Marthinus C.; Bicos, Andrew S.
1992-01-01
An analytic and experimental study of gravity and suspension influences on space structural test articles is presented. A modular test article including deployable, erectable, and rotary modules was assembled in three one- and two-dimensional structures. The two deployable modules utilized cable diagonal bracing rather than rigid cross members; within a bay of one of the deployable modules, the cable preload was adjustable. A friction lock was used on the alpha joint to either allow or prohibit rotary motion. Suspension systems with plunge fundamentals of 1, 2, and 5 Hz were used for ground testing to evaluate the influences of suspension stiffness. Assembly and reassembly testing was performed, as was testing on two separate shipsets at two test sites. Trends and statistical variances in modal parameters are presented as a function of force amplitude, joint preload, reassembly, shipset and suspension. Linear finite element modeling of each structure provided analytical results for 0-g unsuspended and 1-g suspended models, which are correlated with the analytical model.
NASA Astrophysics Data System (ADS)
Parviainen, Hannu
2015-10-01
PyLDTk automates the calculation of custom stellar limb darkening (LD) profiles and model-specific limb darkening coefficients (LDC) using the library of PHOENIX-generated specific intensity spectra by Husser et al. (2013). It facilitates exoplanet transit light curve modeling, especially transmission spectroscopy where the modeling is carried out for custom narrow passbands. PyLDTk construct model-specific priors on the limb darkening coefficients prior to the transit light curve modeling. It can also be directly integrated into the log posterior computation of any pre-existing transit modeling code with minimal modifications to constrain the LD model parameter space directly by the LD profile, allowing for the marginalization over the whole parameter space that can explain the profile without the need to approximate this constraint by a prior distribution. This is useful when using a high-order limb darkening model where the coefficients are often correlated, and the priors estimated from the tabulated values usually fail to include these correlations.
NASA Technical Reports Server (NTRS)
Bozyan, Elizabeth P.; Hemenway, Paul D.; Argue, A. Noel
1990-01-01
Observations of a set of 89 extragalactic objects (EGOs) will be made with the Hubble Space Telescope Fine Guidance Sensors and Planetary Camera in order to link the HIPPARCOS Instrumental System to an extragalactic coordinate system. Most of the sources chosen for observation contain compact radio sources and stellarlike nuclei; 65 percent are optical variables beyond a 0.2 mag limit. To ensure proper exposure times, accurate mean magnitudes are necessary. In many cases, the average magnitudes listed in the literature were not adequate. The literature was searched for all relevant photometric information for the EGOs, and photometric parameters were derived, including mean magnitude, maximum range, and timescale of variability. This paper presents the results of that search and the parameters derived. The results will allow exposure times to be estimated such that an observed magnitude different from the tabular magnitude by 0.5 mag in either direction will not degrade the astrometric centering ability on a Planetary Camera CCD frame.
NASA Astrophysics Data System (ADS)
Doroshin, Anton V.
2018-06-01
In this work the chaos in dynamical systems is considered as a positive aspect of dynamical behavior which can be applied to change systems dynamical parameters and, moreover, to change systems qualitative properties. From this point of view, the chaos can be characterized as a hub for the system dynamical regimes, because it allows to interconnect separated zones of the phase space of the system, and to fulfill the jump into the desirable phase space zone. The concretized aim of this part of the research is to focus on developing the attitude control method for magnetized gyrostat-satellites, which uses the passage through the intentionally generated heteroclinic chaos. The attitude dynamics of the satellite/spacecraft in this case represents the series of transitions from the initial dynamical regime into the chaotic heteroclinic regime with the subsequent exit to the final target dynamical regime with desirable parameters of the attitude dynamics.
Application of neogeographic tools for geochemistry
NASA Astrophysics Data System (ADS)
Zhilin, Denis
2010-05-01
Neogeography is a usage of geographical tools for utilization by a non-expert group of users. It have been rapidly developing last ten years and is founded on (a) availability of Global Positioning System (GPS) receivers, that allows to obtain very precise geographical position (b) services, that allows linking geographical position with satellite images, GoogleEarth for example and (c) programs as GPS Track Maker or OziExplorer, that allows linking geographical coordinates with other raster images (for example, maps). However, the possibilities of neogeographic approach are much wider. It allows linking different parameters with geographical coordinates on the one hand and space image or map - on the other. If it is easy to measure a parameter, a great database could be collected for a very small time. The results can be presented in very different ways. One can plot a parameter versus the distance from a particular point (for example, a source of a substance), make two-dimension distribution of parameter of put the results onto a map or space image. In the case of chemical parameters it can help finding the source of pollution, trace the influence of pollution, reveal geochemical processes and patterns. The main advantage of neogeograpic approach is the employment of non-experts in collecting data. Now non-experts can easily measure electrical conductivity and pH of natural waters, concentration of different gases in the atmosphere, solar irradiation, radioactivity and so on. If the results are obtained (for example, by students of secondary schools) and shared, experts can proceed them and make significant conclusions. An interface of sharing the results (http://maps.sch192.ru/) was elaborated by V. Ilyin. Within the interface a user can load *.csv file with coordinates, type of parameter and the value of parameter in a particular point. The points are marked on the GoogleEarth map with the color corresponding the value of the parameter. The color scale can be edited manually. We would like to show some results of practical and scientific importance, obtained by non-experts. At 2006 our secondary school students investigated the distribution of snow salinity around Kosygina Street in Moscow. One can conclude that the distribution of salinity is reproducible and that the street influences the snow up to 150 meters. Another example obtained by our students is the distribution of electrical conductivity of swamp water showing extreme irregularity of this parameter within the small area (about 0.5x0.5 km) the electrical conductivity varied from 22 to 77 uS with no regularity. It points out the key role of local processes in swamp water chemistry. The third example (maps of electrical conductivity and pH of water on a large area) one can see at http://fenevo.narod.ru/maps/ec-maps.htm and http://fenevo.narod.ru/maps/ph-maps.htm. Basing on the map one can conclude mechanisms of formation of water mineralization in the area. Availability of GPS receivers and systems for easy measuring of chemical parameters can lead to neogeochemical revolution as GPS receivers have led to neogeographical. A great number of non-experts can share their geochemical results, forming huge amount of available geochemical data. It will help to falsify and visualize concepts of geochemistry and environmental chemistry and, maybe, develop new ones. Geophysical and biological data could be shared as well with the same advantages for corresponding sciences.
A robust momentum management and attitude control system for the space station
NASA Technical Reports Server (NTRS)
Speyer, J. L.; Rhee, Ihnseok
1991-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very assurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
A Multiphase Model for the Intracluster Medium
NASA Technical Reports Server (NTRS)
Nagai, Daisuke; Sulkanen, Martin E.; Evrard, August E.
1999-01-01
Constraints on the clustered mass density of the universe derived from the observed population mean intracluster gas fraction of x-ray clusters may be biased by reliance on a single-phase assumption for the thermodynamic structure of the intracluster medium (ICM). We propose a descriptive model for multiphase structure in which a spherically symmetric ICM contains isobaric density perturbations with a radially dependent variance. Fixing the x-ray emission and emission weighted temperature, we explore two independently observable signatures of the model in the parameter space. For bremsstrahlung dominated emission, the central Sunyaev-Zel'dovich (SZ) decrement in the multiphase case is increased over the single-phase case and multiphase x-ray spectra in the range 0.1-20 keV are flatter in the continuum and exhibit stronger low energy emission lines than their single-phase counterpart. We quantify these effects for a fiducial 10e8 K cluster and demonstrate how the combination of SZ and x-ray spectroscopy can be used to identify a preferred location in the plane of the model parameter space. From these parameters the correct value of mean intracluster gas fraction in the multiphase model results, allowing an unbiased estimate of clustered mass density to he recovered.
NASA Astrophysics Data System (ADS)
Alterman, B. L.; Klein, K. G.; Verscharen, D.; Stevens, M. L.; Kasper, J. C.
2017-12-01
Long duration, in situ data sets enable large-scale statistical analysis of free-energy-driven instabilities in the solar wind. The plasma beta and temperature anisotropy plane provides a well-defined parameter space in which a single-fluid plasma's stability can be represented. Because this reduced parameter space can only represent instability thresholds due to the free energy of one ion species - typically the bulk protons - the true impact of instabilities on the solar wind is under estimated. Nyquist's instability criterion allows us to systematically account for other sources of free energy including beams, drifts, and additional temperature anisotropies. Utilizing over 20 years of Wind Faraday cup and magnetic field observations, we have resolved the bulk parameters for three ion populations: the bulk protons, beam protons, and alpha particles. Applying Nyquist's criterion, we calculate the number of linearly growing modes supported by each spectrum and provide a more nuanced consideration of solar wind stability. Using collisional age measurements, we predict the stability of the solar wind close to the sun. Accounting for the free-energy from the three most common ion populations in the solar wind, our approach provides a more complete characterization of solar wind stability.
Robust momentum management and attitude control system for the Space Station
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1992-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
Nonstandard neutrino interactions in supernovae
NASA Astrophysics Data System (ADS)
Stapleford, Charles J.; Väänänen, Daavid J.; Kneller, James P.; McLaughlin, Gail C.; Shapiro, Brandon T.
2016-11-01
Nonstandard interactions (NSI) of neutrinos with matter can significantly alter neutrino flavor evolution in supernovae with the potential to impact explosion dynamics, nucleosynthesis, and the neutrinos signal. In this paper, we explore, both numerically and analytically, the landscape of neutrino flavor transformation effects in supernovae due to NSI and find a new, heretofore unseen transformation processes can occur. These new transformations can take place with NSI strengths well below current experimental limits. Within a broad swath of NSI parameter space, we observe symmetric and standard matter-neutrino resonances for supernovae neutrinos, a transformation effect previously only seen in compact object merger scenarios; in another region of the parameter space we find the NSI can induce neutrino collective effects in scenarios where none would appear with only the standard case of neutrino oscillation physics; and in a third region the NSI can lead to the disappearance of the high density Mikheyev-Smirnov-Wolfenstein resonance. Using a variety of analytical tools, we are able to describe quantitatively the numerical results allowing us to partition the NSI parameter according to the transformation processes observed. Our results indicate nonstandard interactions of supernova neutrinos provide a sensitive probe of beyond the Standard Model physics complementary to present and future terrestrial experiments.
Digit replacement: A generic map for nonlinear dynamical systems.
García-Morales, Vladimir
2016-09-01
A simple discontinuous map is proposed as a generic model for nonlinear dynamical systems. The orbit of the map admits exact solutions for wide regions in parameter space and the method employed (digit manipulation) allows the mathematical design of useful signals, such as regular or aperiodic oscillations with specific waveforms, the construction of complex attractors with nontrivial properties as well as the coexistence of different basins of attraction in phase space with different qualitative properties. A detailed analysis of the dynamical behavior of the map suggests how the latter can be used in the modeling of complex nonlinear dynamics including, e.g., aperiodic nonchaotic attractors and the hierarchical deposition of grains of different sizes on a surface.
Imaging Beyond What Man Can See
NASA Technical Reports Server (NTRS)
May, George; Mitchell, Brian
2004-01-01
Three lightweight, portable hyperspectral sensor systems have been built that capture energy from 200 to 1700 nanometers (ultravio1et to shortwave infrared). The sensors incorporate a line scanning technique that requires no relative movement between the target and the sensor. This unique capability, combined with portability, opens up new uses of hyperspectral imaging for laboratory and field environments. Each system has a GUI-based software package that allows the user to communicate with the imaging device for setting spatial resolution, spectral bands and other parameters. NASA's Space Partnership Development has sponsored these innovative developments and their application to human problems on Earth and in space. Hyperspectral datasets have been captured and analyzed in numerous areas including precision agriculture, food safety, biomedical imaging, and forensics. Discussion on research results will include realtime detection of food contaminants, molds and toxin research on corn, identifying counterfeit documents, non-invasive wound monitoring and aircraft applications. Future research will include development of a thermal infrared hyperspectral sensor that will support natural resource applications on Earth and thermal analyses during long duration space flight. This paper incorporates a variety of disciplines and imaging technologies that have been linked together to allow the expansion of remote sensing across both traditional and non-traditional boundaries.
Plants and somatic embryos in space: what have we learned?
NASA Technical Reports Server (NTRS)
Krikorian, A. D.
1998-01-01
Space provides a unique environment that can affect the interplay between cell cycle controls and environment and can thus modify the processes of cell division, development and growth. It is proposed that the chromosomal and nuclear abnormalities frequently encountered in cells of various plants exposed to space are due to a combination of factors including the biological status of the systems and the way in which they are grown, exposed to, and ultimately, the way in which they experience multiple stresses. The extent to which space-specific changes become manifest is dependent on the extent of pre-existing stresses in the system. This has become evident in a variety of plant species grown in space but has been particularly amenable to study using in vitro systems, especially in developing embryoids. The following observations allow us to harmonize disparate results from a variety of space experiments:- (a) the more completely developed a system, the less likely it is to show cell stress during growth; the less morphologically complex, the greater the vulnerability; (b) the size/"packaging" of the genome (karyotype) are significant experimental variables; plants with larger genomes (e.g. polyploids) seem to be more space-stress tolerant; (c) a single space-associated stress is inadequate to produce a significant adverse response unless the stress is severe, or a biological parameter necessary to 'amplify' it exists. On this view, an appropriate "stress match" with other non-equilibrium determinants, much like a 'tug of war', can result in genomic variations in space. All this emphasizes that fastidiously-controlled growing environments must be devised if one is to resolve the matter of direct versus indirect effects of space. Better understanding of the novel physico-chemical equilibrium phenomena associated with space will allow those interested in space cell and developmental biology to pick and choose procedures best suited to their exploitation for specific objectives.
Improvements in Space Surveillance Processing for Wide Field of View Optical Sensors
NASA Astrophysics Data System (ADS)
Sydney, P.; Wetterer, C.
2014-09-01
For more than a decade, an autonomous satellite tracking system at the Air Force Maui Optical and Supercomputing (AMOS) observatory has been generating routine astrometric measurements of Earth-orbiting Resident Space Objects (RSOs) using small commercial telescopes and sensors. Recent work has focused on developing an improved processing system, enhancing measurement performance and response while supporting other sensor systems and missions. This paper will outline improved techniques in scheduling, detection, astrometric and photometric measurements, and catalog maintenance. The processing system now integrates with Special Perturbation (SP) based astrodynamics algorithms, allowing covariance-based scheduling and more precise orbital estimates and object identification. A merit-based scheduling algorithm provides a global optimization framework to support diverse collection tasks and missions. The detection algorithms support a range of target tracking and camera acquisition rates. New comprehensive star catalogs allow for more precise astrometric and photometric calibrations including differential photometry for monitoring environmental changes. This paper will also examine measurement performance with varying tracking rates and acquisition parameters.
Adamson, P.; An, F. P.; Anghel, I.; ...
2016-10-07
Searches for a light sterile neutrino have been performed independently by the MINOS and the Daya Bay experiments using the muon (anti)neutrino and electron antineutrino disappearance channels, respectively. In this Letter, results from both experiments are combined with those from the Bugey-3 reactor neutrino experiment to constrain oscillations into light sterile neutrinos. The three experiments are sensitive to complementary regions of parameter space, enabling the combined analysis to probe regions allowed by the Liquid Scintillator Neutrino Detector (LSND) and MiniBooNE experiments in a minimally extended four-neutrino flavor framework. Here, stringent limits on sin 22θ μe are set over 6 ordersmore » of magnitude in the sterile mass-squared splitting Δm 2 41. The sterile-neutrino mixing phase space allowed by the LSND and MiniBooNE experiments is excluded for Δm 2 41 < 0.8 eV 2 at 95% CL s.« less
Adamson, P; An, F P; Anghel, I; Aurisano, A; Balantekin, A B; Band, H R; Barr, G; Bishai, M; Blake, A; Blyth, S; Bock, G J; Bogert, D; Cao, D; Cao, G F; Cao, J; Cao, S V; Carroll, T J; Castromonte, C M; Cen, W R; Chan, Y L; Chang, J F; Chang, L C; Chang, Y; Chen, H S; Chen, Q Y; Chen, R; Chen, S M; Chen, Y; Chen, Y X; Cheng, J; Cheng, J-H; Cheng, Y P; Cheng, Z K; Cherwinka, J J; Childress, S; Chu, M C; Chukanov, A; Coelho, J A B; Corwin, L; Cronin-Hennessy, D; Cummings, J P; de Arcos, J; De Rijck, S; Deng, Z Y; Devan, A V; Devenish, N E; Ding, X F; Ding, Y Y; Diwan, M V; Dolgareva, M; Dove, J; Dwyer, D A; Edwards, W R; Escobar, C O; Evans, J J; Falk, E; Feldman, G J; Flanagan, W; Frohne, M V; Gabrielyan, M; Gallagher, H R; Germani, S; Gill, R; Gomes, R A; Gonchar, M; Gong, G H; Gong, H; Goodman, M C; Gouffon, P; Graf, N; Gran, R; Grassi, M; Grzelak, K; Gu, W Q; Guan, M Y; Guo, L; Guo, R P; Guo, X H; Guo, Z; Habig, A; Hackenburg, R W; Hahn, S R; Han, R; Hans, S; Hartnell, J; Hatcher, R; He, M; Heeger, K M; Heng, Y K; Higuera, A; Holin, A; Hor, Y K; Hsiung, Y B; Hu, B Z; Hu, T; Hu, W; Huang, E C; Huang, H X; Huang, J; Huang, X T; Huber, P; Huo, W; Hussain, G; Hylen, J; Irwin, G M; Isvan, Z; Jaffe, D E; Jaffke, P; James, C; Jen, K L; Jensen, D; Jetter, S; Ji, X L; Ji, X P; Jiao, J B; Johnson, R A; de Jong, J K; Joshi, J; Kafka, T; Kang, L; Kasahara, S M S; Kettell, S H; Kohn, S; Koizumi, G; Kordosky, M; Kramer, M; Kreymer, A; Kwan, K K; Kwok, M W; Kwok, T; Lang, K; Langford, T J; Lau, K; Lebanowski, L; Lee, J; Lee, J H C; Lei, R T; Leitner, R; Leung, J K C; Li, C; Li, D J; Li, F; Li, G S; Li, Q J; Li, S; Li, S C; Li, W D; Li, X N; Li, Y F; Li, Z B; Liang, H; Lin, C J; Lin, G L; Lin, S; Lin, S K; Lin, Y-C; Ling, J J; Link, J M; Litchfield, P J; Littenberg, L; Littlejohn, B R; Liu, D W; Liu, J C; Liu, J L; Loh, C W; Lu, C; Lu, H Q; Lu, J S; Lucas, P; Luk, K B; Lv, Z; Ma, Q M; Ma, X B; Ma, X Y; Ma, Y Q; Malyshkin, Y; Mann, W A; Marshak, M L; Martinez Caicedo, D A; Mayer, N; McDonald, K T; McGivern, C; McKeown, R D; Medeiros, M M; Mehdiyev, R; Meier, J R; Messier, M D; Miller, W H; Mishra, S R; Mitchell, I; Mooney, M; Moore, C D; Mualem, L; Musser, J; Nakajima, Y; Naples, D; Napolitano, J; Naumov, D; Naumova, E; Nelson, J K; Newman, H B; Ngai, H Y; Nichol, R J; Ning, Z; Nowak, J A; O'Connor, J; Ochoa-Ricoux, J P; Olshevskiy, A; Orchanian, M; Pahlka, R B; Paley, J; Pan, H-R; Park, J; Patterson, R B; Patton, S; Pawloski, G; Pec, V; Peng, J C; Perch, A; Pfützner, M M; Phan, D D; Phan-Budd, S; Pinsky, L; Plunkett, R K; Poonthottathil, N; Pun, C S J; Qi, F Z; Qi, M; Qian, X; Qiu, X; Radovic, A; Raper, N; Rebel, B; Ren, J; Rosenfeld, C; Rosero, R; Roskovec, B; Ruan, X C; Rubin, H A; Sail, P; Sanchez, M C; Schneps, J; Schreckenberger, A; Schreiner, P; Sharma, R; Moed Sher, S; Sousa, A; Steiner, H; Sun, G X; Sun, J L; Tagg, N; Talaga, R L; Tang, W; Taychenachev, D; Thomas, J; Thomson, M A; Tian, X; Timmons, A; Todd, J; Tognini, S C; Toner, R; Torretta, D; Treskov, K; Tsang, K V; Tull, C E; Tzanakos, G; Urheim, J; Vahle, P; Viaux, N; Viren, B; Vorobel, V; Wang, C H; Wang, M; Wang, N Y; Wang, R G; Wang, W; Wang, X; Wang, Y F; Wang, Z; Wang, Z M; Webb, R C; Weber, A; Wei, H Y; Wen, L J; Whisnant, K; White, C; Whitehead, L; Whitehead, L H; Wise, T; Wojcicki, S G; Wong, H L H; Wong, S C F; Worcester, E; Wu, C-H; Wu, Q; Wu, W J; Xia, D M; Xia, J K; Xing, Z Z; Xu, J L; Xu, J Y; Xu, Y; Xue, T; Yang, C G; Yang, H; Yang, L; Yang, M S; Yang, M T; Ye, M; Ye, Z; Yeh, M; Young, B L; Yu, Z Y; Zeng, S; Zhan, L; Zhang, C; Zhang, H H; Zhang, J W; Zhang, Q M; Zhang, X T; Zhang, Y M; Zhang, Y X; Zhang, Z J; Zhang, Z P; Zhang, Z Y; Zhao, J; Zhao, Q W; Zhao, Y B; Zhong, W L; Zhou, L; Zhou, N; Zhuang, H L; Zou, J H
2016-10-07
Searches for a light sterile neutrino have been performed independently by the MINOS and the Daya Bay experiments using the muon (anti)neutrino and electron antineutrino disappearance channels, respectively. In this Letter, results from both experiments are combined with those from the Bugey-3 reactor neutrino experiment to constrain oscillations into light sterile neutrinos. The three experiments are sensitive to complementary regions of parameter space, enabling the combined analysis to probe regions allowed by the Liquid Scintillator Neutrino Detector (LSND) and MiniBooNE experiments in a minimally extended four-neutrino flavor framework. Stringent limits on sin^{2}2θ_{μe} are set over 6 orders of magnitude in the sterile mass-squared splitting Δm_{41}^{2}. The sterile-neutrino mixing phase space allowed by the LSND and MiniBooNE experiments is excluded for Δm_{41}^{2}<0.8 eV^{2} at 95% CL_{s}.
The dynamics of a space station tethered refueling facility
NASA Technical Reports Server (NTRS)
Abbott, P.; Rudolph, L. K.; Fester, D. A.
1986-01-01
The fluid stored in a tethered orbital refueling facility is settled at the bottom of the storage tanks by gravity-gradient forces. The fluid motions (slosh) induced by outside disturbances must be limited to ensure the tank outlet is not uncovered during a fluid transfer. The dynamics of a LO2/LH2 TORF attached to the space station have been analyzed to identify design parameters necessary to limit fluid motion. Using the worst case disturbance of a shuttle docking at the space station, the fluid motion was found to be a function of tether length and allowable facility swing angle. Acceptable fluid behavior occurs for tether lengths of at least 1000 ft. To ensure motions induced by separate disturbances do not add to unacceptable values, a slosh damping coefficient of 5 percent is recommended.
NASA Technical Reports Server (NTRS)
Lundquist, Ray A.; Leidecker, Henning
1999-01-01
The allowable operating currents of electrical wiring when used in the space vacuum environment is predominantly determined by the maximum operating temperature of the wire insulation. For Kapton insulated wire this value is 200 degree C. Guidelines provided in the Goddard Space Flight Center (GSFC) Preferred Parts List (PPL) limit the operating current of wire within vacuum to ensure the maximum insulation temperature is not exceeded. For 20 AWG wire, these operating parameters are: (1) 3.7 amps per wire (2) bundle of 15 or more wires (3) 70 C environment (4) vacuum of 10(exp -5) torr or less To determine the behavior and temperature of electrical wire at different operating conditions, a thermal vacuum test was performed on a representative electrical harness of the Hubble Space Telescope (HST) power distribution system. This paper describes the test and the results.
NASA Technical Reports Server (NTRS)
Lundquist, Ray A.; Leidecker, Henning
1998-01-01
The allowable operating currents of electrical wiring when used in the space vacuum environment is predominantly determined by the maximum operating temperature of the wire insulation. For Kapton insulated wire this value is 200 C. Guidelines provided in the Goddard Space Flight Center (GSFC) Preferred Parts List (PPL) limit the operating current of wire within vacuum to ensure the maximum insulation temperature is not exceeded. For 20 AWG wire, these operating parameters are: (1) 3.7 amps per wire; (2) bundle of 15 or more wires; (3) 70 C environment: and (4) vacuum of 10(exp -5) torr or less. To determine the behavior and temperature of electrical wire at different operating conditions, a thermal vacuum test was performed on a representative electrical harness of the Hubble Space Telescope (HST) power distribution system. This paper describes the test and the results.
Two statistics for evaluating parameter identifiability and error reduction
Doherty, John; Hunt, Randall J.
2009-01-01
Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. ?? 2009 Elsevier B.V.
PASP Plus: An experiment to measure space-environment effects on photovoltaic power subsystems
NASA Technical Reports Server (NTRS)
Guidice, Donald A.
1992-01-01
The Photovoltaic Array Space Power Plus Diagnostic experiment (PASP Plus) was accepted as part of the APEX Mission payload aboard a Pegastar satellite to be orbited by a Pegasus launch vehicle in late 1992. The mission's elliptical orbit will allow us to investigate both space plasma and space radiation effects. PASP Plus will have eleven types of solar arrays and a full complement of environmental and interactions diagnostic sensors. Measurements of space-plasma interactions on the various solar arrays will be made at large negative voltages (to investigate arcing parameters) and at large positive voltages (to investigate leakage currents) by biasing the arrays to various levels up to -500 and +500 volts. The long-term deterioration in solar array performance caused by exposure to space radiation will also be investigated; radiation dosage will be measured by an electron/proton dosimeter included in the environmental sensor complement. Experimental results from PASP Plus will help establish cause-and-effect relationships and lead to improved design guidelines and test standards for new-technology solar arrays.
Navigation Architecture For A Space Mobile Network
NASA Technical Reports Server (NTRS)
Valdez, Jennifer E.; Ashman, Benjamin; Gramling, Cheryl; Heckler, Gregory W.; Carpenter, Russell
2016-01-01
The Tracking and Data Relay Satellite System (TDRSS) Augmentation Service for Satellites (TASS) is a proposed beacon service to provide a global, space-based GPS augmentation service based on the NASA Global Differential GPS (GDGPS) System. The TASS signal will be tied to the GPS time system and usable as an additional ranging and Doppler radiometric source. Additionally, it will provide data vital to autonomous navigation in the near Earth regime, including space weather information, TDRS ephemerides, Earth Orientation Parameters (EOP), and forward commanding capability. TASS benefits include enhancing situational awareness, enabling increased autonomy, and providing near real-time command access for user platforms. As NASA Headquarters Space Communication and Navigation Office (SCaN) begins to move away from a centralized network architecture and towards a Space Mobile Network (SMN) that allows for user initiated services, autonomous navigation will be a key part of such a system. This paper explores how a TASS beacon service enables the Space Mobile Networking paradigm, what a typical user platform would require, and provides an in-depth analysis of several navigation scenarios and operations concepts.
Low-Cost Detection of Thin Film Stress during Fabrication
NASA Technical Reports Server (NTRS)
Nabors, Sammy A.
2015-01-01
NASA's Marshall Space Flight Center has developed a simple, cost-effective optical method for thin film stress measurements during growth and/or subsequent annealing processes. Stress arising in thin film fabrication presents production challenges for electronic devices, sensors, and optical coatings; it can lead to substrate distortion and deformation, impacting the performance of thin film products. NASA's technique measures in-situ stress using a simple, noncontact fiber optic probe in the thin film vacuum deposition chamber. This enables real-time monitoring of stress during the fabrication process and allows for efficient control of deposition process parameters. By modifying process parameters in real time during fabrication, thin film stress can be optimized or controlled, improving thin film product performance.
Transit Photometry of Recently Discovered Hot Jupiters
NASA Astrophysics Data System (ADS)
McCloat, Sean Peter
The University of North Dakota Space Studies Internet Observatory was used to observe the transits of hot Jupiter exoplanets. Targets for this research were selected from the list of currently confirmed exoplanets using the following criteria: radius > 0.5 Rjup, discovered since 2011, orbiting stars with apparent magnitude > 13. Eleven transits were observed distributed across nine targets with the goal of performing differential photometry for parameter refinement and transit timing variation analysis if data quality allowed. Data quality was ultimately insufficient for robust parameter refinement, but tentative calculations of mid-transit times were made of three of the observed transits. Mid-transit times for WASP-103b and WASP-48b were consistent with predictions and the existing database.
Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio
2011-12-01
The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marekova, Elisaveta
Series of relatively large earthquakes in different regions of the Earth are studied. The regions chooses are of a high seismic activity and has a good contemporary network for recording of the seismic events along them. The main purpose of this investigation is the attempt to describe analytically the seismic process in the space and time. We are considering the statistical distributions the distances and the times between consecutive earthquakes (so called pair analysis). Studies conducted on approximating the statistical distribution of the parameters of consecutive seismic events indicate the existence of characteristic functions that describe them best. Such amore » mathematical description allows the distributions of the examined parameters to be compared to other model distributions.« less
Phenotypic models of evolution and development: geometry as destiny.
François, Paul; Siggia, Eric D
2012-12-01
Quantitative models of development that consider all relevant genes typically are difficult to fit to embryonic data alone and have many redundant parameters. Computational evolution supplies models of phenotype with relatively few variables and parameters that allows the patterning dynamics to be reduced to a geometrical picture for how the state of a cell moves. The clock and wavefront model, that defines the phenotype of somitogenesis, can be represented as a sequence of two discrete dynamical transitions (bifurcations). The expression-time to space map for Hox genes and the posterior dominance rule are phenotypes that naturally follow from computational evolution without considering the genetics of Hox regulation. Copyright © 2012 Elsevier Ltd. All rights reserved.
High-mass diffraction in the QCD dipole picture
NASA Astrophysics Data System (ADS)
Bialas, A.; Navelet, H.; Peschanski, R.
1998-05-01
Using the QCD dipole picture of the BFKL pomeron, the cross-section of single diffractive dissociation of virtual photons at high energy and large diffractively excited masses is calculated. The calculation takes into account the full impact-parameter phase-space and thus allows to obtain an exact value of the triple BFKL Pomeron vertex. It appears large enough to compensate the perturbative 6-gluon coupling factor (α/π)3 thus suggesting a rather appreciable diffractive cross-section.
Application of propagation predictions to Earth/space telecommunications system design
NASA Technical Reports Server (NTRS)
1981-01-01
The corresponding between a given propagation phenomenon and system performance is considered. Propagation data are related to system performance parameters, allowing the systems engineer to perform the analyses determining how well requirements are met by a given system design, and enabling the systems engineer to modify that design if necessary. The various ways of specifying performance criteria for different kinds of systems are discussed, and a general procedure for system design is presented and demonstrated.
The Allowed Parameter Space of a Long-lived Neutron Star as the Merger Remnant of GW170817
NASA Astrophysics Data System (ADS)
Ai, Shunke; Gao, He; Dai, Zi-Gao; Wu, Xue-Feng; Li, Ang; Zhang, Bing; Li, Mu-Zi
2018-06-01
Due to the limited sensitivity of the current gravitational wave (GW) detectors, the central remnant of the binary neutron star (NS) merger associated with GW170817 remains an open question. In view of the relatively large total mass, it is generally proposed that the merger of GW170817 would lead to a short-lived hypermassive NS or directly produce a black hole (BH). There is no clear evidence to support or rule out a long-lived NS as the merger remnant. Here, we utilize the GW and electromagnetic (EM) signals to comprehensively investigate the parameter space that allows a long-lived NS to survive as the merger remnant of GW170817. We find that for some stiff equations of state, the merger of GW170817 could, in principle, lead to a massive NS, which has a millisecond spin period. The post-merger GW signal could hardly constrain the ellipticity of the NS. If the ellipticity reaches 10‑3, in order to be compatible with the multi-band EM observations, the dipole magnetic field of the NS (B p ) is constrained to the magnetar level of ∼1014 G. If the ellipticity is smaller than 10‑4, B p is constrained to the level of ∼109–1011 G. These conclusions weakly depend on the adoption of the NS equation of state.
NASA Astrophysics Data System (ADS)
Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.
2018-02-01
Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability positron production and high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because choice of ADMM parameters can greatly influence convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung to liver slices of XCAT phantom. We simulated low true coincidence count-rates with high random fractions corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new methods with standard reconstruction algorithms and NEG-ML and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.
Optimal correction and design parameter search by modern methods of rigorous global optimization
NASA Astrophysics Data System (ADS)
Makino, K.; Berz, M.
2011-07-01
Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle optics for the computation of aberrations allow the determination of particularly sharp underestimators for large regions. As a consequence, the subsequent progressive pruning of the allowed search space as part of the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of optimization. The methods are particularly powerful if executed in interplay with genetic optimizers generating their new populations within the currently active unpruned space. Their current best guess provides rigorous upper bounds of the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes or chromaticities, etc. in storage ring lattices.
Reusable Launch Vehicle Tank/Intertank Sizing Trade Study
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Myers, David E.; Martin, Carl J.
2000-01-01
A tank and intertank sizing tool that includes effects of major design drivers, and which allows parametric studies to be performed, has been developed and calibrated against independent representative results. Although additional design features, such as bulkheads and field joints, are not currently included in the process, the improved level of fidelity has allowed parametric studies to be performed which have resulted in understanding of key tank and intertank design drivers, design sensitivities, and definition of preferred design spaces. The sizing results demonstrated that there were many interactions between the configuration parameters of internal/external payload, vehicle fineness ratio (half body angle), fuel arrangement (LOX-forward/LOX-aft), number of tanks, and tank shape/arrangement (number of lobes).
3+1 and 3+2 sterile neutrino fits
NASA Astrophysics Data System (ADS)
Giunti, Carlo; Laveder, Marco
2011-10-01
We present the results of fits of short-baseline neutrino-oscillation data in 3+1 and 3+2 neutrino-mixing schemes. In spite of the presence of a tension in the interpretation of the data, 3+1 neutrino mixing is attractive for its simplicity and for the natural correspondence of one new entity (a sterile neutrino) with a new effect (short-baseline oscillations). The allowed regions in the oscillation parameter space can be tested in near-future experiments. In the framework of 3+2 neutrino mixing, there is less tension in the interpretation of the data, at the price of introducing a second sterile neutrino. Moreover, the improvement of the parameter goodness of fit is mainly a statistical effect due to an increase in the number of parameters. The CP violation in short-baseline experiments allowed in 3+2 neutrino mixing can explain the positive ν¯μ→ν¯e signal and the negative νμ→νe measurement in the MiniBooNE experiment. For the CP-violating phase, we obtained two minima of the marginal χ2 close to the two values where CP violation is maximal.
A Stochastic Fractional Dynamics Model of Space-time Variability of Rain
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Travis, James E.
2013-01-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and times scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment.
Charting the parameter space of the global 21-cm signal
NASA Astrophysics Data System (ADS)
Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan; Lotem, Matan
2017-12-01
The early star-forming Universe is still poorly constrained, with the properties of high-redshift stars, the first heating sources and reionization highly uncertain. This leaves observers planning 21-cm experiments with little theoretical guidance. In this work, we explore the possible range of high-redshift parameters including the star formation efficiency and the minimal mass of star-forming haloes; the efficiency, spectral energy distribution and redshift evolution of the first X-ray sources; and the history of reionization. These parameters are only weakly constrained by available observations, mainly the optical depth to the cosmic microwave background. We use realistic semi-numerical simulations to produce the global 21-cm signal over the redshift range z = 6-40 for each of 193 different combinations of the astrophysical parameters spanning the allowed range. We show that the expected signal fills a large parameter space, but with a fixed general shape for the global 21-cm curve. Even with our wide selection of models, we still find clear correlations between the key features of the global 21-cm signal and underlying astrophysical properties of the high-redshift Universe, namely the Ly α intensity, the X-ray heating rate and the production rate of ionizing photons. These correlations can be used to directly link future measurements of the global 21-cm signal to astrophysical quantities in a mostly model-independent way. We identify additional correlations that can be used as consistency checks.
NASA Astrophysics Data System (ADS)
Potvin-Trottier, Laurent; Chen, Lingfeng; Horwitz, Alan Rick; Wiseman, Paul W.
2013-08-01
We introduce a new generalized theoretical framework for image correlation spectroscopy (ICS). Using this framework, we extend the ICS method in time-frequency (ν, nu) space to map molecular flow of fluorescently tagged proteins in individual living cells. Even in the presence of a dominant immobile population of fluorescent molecules, nu-space ICS (nICS) provides an unbiased velocity measurement, as well as the diffusion coefficient of the flow, without requiring filtering. We also develop and characterize a tunable frequency-filter for spatio-temporal ICS (STICS) that allows quantification of the density, the diffusion coefficient and the velocity of biased diffusion. We show that the techniques are accurate over a wide range of parameter space in computer simulation. We then characterize the retrograde flow of adhesion proteins (α6- and αLβ2-GFP integrins and mCherry-paxillin) in CHO.B2 cells plated on laminin and intercellular adhesion molecule 1 (ICAM-1) ligands respectively. STICS with a tunable frequency filter, in conjunction with nICS, measures two new transport parameters, the density and transport bias coefficient (a measure of the diffusive character of a flow/biased diffusion), showing that molecular flow in this cell system has a significant diffusive component. Our results suggest that the integrin-ligand interaction, along with the internal myosin-motor generated force, varies for different integrin-ligand pairs, consistent with previous results.
The cosmological analysis of X-ray cluster surveys. III. 4D X-ray observable diagrams
NASA Astrophysics Data System (ADS)
Pierre, M.; Valotti, A.; Faccioli, L.; Clerc, N.; Gastaud, R.; Koulouridis, E.; Pacaud, F.
2017-11-01
Context. Despite compelling theoretical arguments, the use of clusters as cosmological probes is, in practice, frequently questioned because of the many uncertainties surrounding cluster-mass estimates. Aims: Our aim is to develop a fully self-consistent cosmological approach of X-ray cluster surveys, exclusively based on observable quantities rather than masses. This procedure is justified given the possibility to directly derive the cluster properties via ab initio modelling, either analytically or by using hydrodynamical simulations. In this third paper, we evaluate the method on cluster toy-catalogues. Methods: We model the population of detected clusters in the count-rate - hardness-ratio - angular size - redshift space and compare the corresponding four-dimensional diagram with theoretical predictions. The best cosmology+physics parameter configuration is determined using a simple minimisation procedure; errors on the parameters are estimated by averaging the results from ten independent survey realisations. The method allows a simultaneous fit of the cosmological parameters of the cluster evolutionary physics and of the selection effects. Results: When using information from the X-ray survey alone plus redshifts, this approach is shown to be as accurate as the modelling of the mass function for the cosmological parameters and to perform better for the cluster physics, for a similar level of assumptions on the scaling relations. It enables the identification of degenerate combinations of parameter values. Conclusions: Given the considerably shorter computer times involved for running the minimisation procedure in the observed parameter space, this method appears to clearly outperform traditional mass-based approaches when X-ray survey data alone are available.
Necessary conditions for tumbling in the rotational motion
NASA Astrophysics Data System (ADS)
Carrera, Danny H. Z.; Weber, Hans I.
2012-11-01
The goal of this work is the investigation of the necessary conditions for the possible existence of tumbling in rotational motion of rigid bodies. In a stable spinning satellite, tumbling may occur by sufficient strong action of external impulses, when the conical movement characteristic of the stable attitude is de-characterized. For this purpose a methodology is chosen to simplify the study of rotational motions with great amplitude, for example free bodies in space, allowing an extension of the analysis to non-conservative systems. In the case of a satellite in space, the projection of the angular velocity along the principal axes of inertia must be known, defining completely the initial conditions of motion for stability investigations. In this paper, the coordinate systems are established according to the initial condition in order to allow a simple analytical work on the equations of motion. Also it will be proposed the definition of a parameter, calling it tumbling coefficient, to measure the intensity of the tumbling and the amplitude of the motion when crossing limits of stability in the concept of Lyapunov. Tumbling in the motion of bodies in space is not possible when this coefficient is positive. Magnus Triangle representation will be used to represent the geometry of the body, establishing regions of stability/instability for possible initial conditions of motion. In the study of nonconservative systems for an oblate body, one sufficient condition will be enough to assure damped motion, and this condition is checked for a motion damped by viscous torques. This paper seeks to highlight the physical understanding of the phenomena and the influence of various parameters that are important in the process.
Extending the modeling of the anisotropic galaxy power spectrum to k = 0.4 hMpc-1
NASA Astrophysics Data System (ADS)
Hand, Nick; Seljak, Uroš; Beutler, Florian; Vlah, Zvonimir
2017-10-01
We present a model for the redshift-space power spectrum of galaxies and demonstrate its accuracy in describing the monopole, quadrupole, and hexadecapole of the galaxy density field down to scales of k = 0.4 hMpc-1. The model describes the clustering of galaxies in the context of a halo model and the clustering of the underlying halos in redshift space using a combination of Eulerian perturbation theory and N-body simulations. The modeling of redshift-space distortions is done using the so-called distribution function approach. The final model has 13 free parameters, and each parameter is physically motivated rather than a nuisance parameter, which allows the use of well-motivated priors. We account for the Finger-of-God effect from centrals and both isolated and non-isolated satellites rather than using a single velocity dispersion to describe the combined effect. We test and validate the accuracy of the model on several sets of high-fidelity N-body simulations, as well as realistic mock catalogs designed to simulate the BOSS DR12 CMASS data set. The suite of simulations covers a range of cosmologies and galaxy bias models, providing a rigorous test of the level of theoretical systematics present in the model. The level of bias in the recovered values of f σ8 is found to be small. When including scales to k = 0.4 hMpc-1, we find 15-30% gains in the statistical precision of f σ8 relative to k = 0.2 hMpc-1 and a roughly 10-15% improvement for the perpendicular Alcock-Paczynski parameter α⊥. Using the BOSS DR12 CMASS mocks as a benchmark for comparison, we estimate an uncertainty on f σ8 that is ~10-20% larger than other similar Fourier-space RSD models in the literature that use k <= 0.2 hMpc-1, suggesting that these models likely have a too-limited parametrization.
Energy and momentum analysis of the deployment dynamics of nets in space
NASA Astrophysics Data System (ADS)
Botta, Eleonora M.; Sharf, Inna; Misra, Arun K.
2017-11-01
In this paper, the deployment dynamics of nets in space is investigated through a combination of analysis and numerical simulations. The considered net is deployed by ejecting several corner masses and thanks to momentum and energy transfer from those to the innermost threads of the net. In this study, the net is modeled with a lumped-parameter approach, and assumed to be symmetrical, subject to symmetrical initial conditions, and initially slack. The work-energy and momentum conservation principles are employed to carry out centroidal analysis of the net, by conceptually partitioning the net into a system of corner masses and the net proper and applying the aforementioned principles to the corresponding centers of mass. The analysis provides bounds on the values that the velocity of the center of mass of the corner masses and the velocity of the center of mass of the net proper can individually attain, as well as relationships between these and different energy contributions. The analytical results allow to identify key parameters characterizing the deployment dynamics of nets in space, which include the ratio between the mass of the corner masses and the total mass, the initial linear momentum, and the direction of the initial velocity vectors. Numerical tools are employed to validate and interpret further the analytical observations. Comparison of deployment results with and without initial velocity of the net proper suggests that more complete and lasting deployment can be achieved if the corner masses alone are ejected. A sensitivity study is performed for the key parameters identified from the energy/momentum analysis, and the outcome establishes that more lasting deployment and safer capture (i.e., characterized by higher traveled distance) can be achieved by employing reasonably lightweight corner masses, moderate shooting angles, and low shooting velocities. A comparison with current literature on tether-nets for space debris capture confirms overall agreement on the importance and effect of the relevant inertial and ejection parameters on the deployment dynamics.
Dai, Sheng-Yun; Xu, Bing; Zhang, Yi; Li, Jian-Yu; Sun, Fei; Shi, Xin-Yuan; Qiao, Yan-Jiang
2016-09-01
Coptis chinensis (Huanglian) is a commonly used traditional Chinese medicine (TCM) herb and alkaloids are the most important chemical constituents in it. In the present study, an isocratic reverse phase high performance liquid chromatography (RP-HPLC) method allowing the separation of six alkaloids in Huanglian was for the first time developed under the quality by design (QbD) principles. First, five chromatographic parameters were identified to construct a Plackett-Burman experimental design. The critical resolution, analysis time, and peak width were responses modeled by multivariate linear regression. The results showed that the percentage of acetonitrile, concentration of sodium dodecyl sulfate, and concentration of potassium phosphate monobasic were statistically significant parameters (P < 0.05). Then, the Box-Behnken experimental design was applied to further evaluate the interactions between the three parameters on selected responses. Full quadratic models were built and used to establish the analytical design space. Moreover, the reliability of design space was estimated by the Bayesian posterior predictive distribution. The optimal separation was predicted at 40% acetonitrile, 1.7 g·mL(-1) of sodium dodecyl sulfate and 0.03 mol·mL(-1) of potassium phosphate monobasic. Finally, the accuracy profile methodology was used to validate the established HPLC method. The results demonstrated that the QbD concept could be efficiently used to develop a robust RP-HPLC analytical method for Huanglian. Copyright © 2016 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.
Space Life Sciences at NASA: Spaceflight Health Policy and Standards
NASA Technical Reports Server (NTRS)
Davis, Jeffrey R.; House, Nancy G.
2006-01-01
In January 2005, the President proposed a new initiative, the Vision for Space Exploration. To accomplish the goals within the vision for space exploration, physicians and researchers at Johnson Space Center are establishing spaceflight health standards. These standards include fitness for duty criteria (FFD), permissible exposure limits (PELs), and permissible outcome limits (POLs). POLs delineate an acceptable maximum decrement or change in a physiological or behavioral parameter, as the result of exposure to the space environment. For example cardiovascular fitness for duty standards might be a measurable clinical parameter minimum that allows successful performance of all required duties. An example of a permissible exposure limit for radiation might be the quantifiable limit of exposure over a given length of time (e.g. life time radiation exposure). An example of a permissible outcome limit might be the length of microgravity exposure that would minimize bone loss. The purpose of spaceflight health standards is to promote operational and vehicle design requirements, aid in medical decision making during space missions, and guide the development of countermeasures. Standards will be based on scientific and clinical evidence including research findings, lessons learned from previous space missions, studies conducted in space analog environments, current standards of medical practices, risk management data, and expert recommendations. To focus the research community on the needs for exploration missions, NASA has developed the Bioastronautics Roadmap. The Bioastronautics Roadmap, NASA's approach to identification of risks to human space flight, revised baseline was released in February 2005. This document was reviewed by the Institute of Medicine in November 2004 and the final report was received in October 2005. The roadmap defines the most important research and operational needs that will be used to set policy, standards (define acceptable risk), and implement an overall Risk Management and Analysis process. Currently NASA is drafting spaceflight health standards for neurosensory alterations, space radiation exposure, behavioral health, muscle atrophy, cardiovascular fitness, immunological compromise, bone demineralization, and nutrition.
NASA Astrophysics Data System (ADS)
Marie, S.; Irving, J. D.; Looms, M. C.; Nielsen, L.; Holliger, K.
2011-12-01
Geophysical methods such as ground-penetrating radar (GPR) can provide valuable information on the hydrological properties of the vadose zone. In particular, there is evidence to suggest that the stochastic inversion of such data may allow for significant reductions in uncertainty regarding subsurface van-Genuchten-Mualem (VGM) parameters, which characterize unsaturated hydrodynamic behaviour as defined by the combination of the water retention and hydraulic conductivity functions. A significant challenge associated with the use of geophysical methods in a hydrological context is that they generally exhibit an indirect and/or weak sensitivity to the hydraulic parameters of interest. A novel and increasingly popular means of addressing this issue involves the acquisition of geophysical data in a time-lapse fashion while changes occur in the hydrological condition of the probed subsurface region. Another significant challenge when attempting to use geophysical data for the estimation of subsurface hydrological properties is the inherent non-linearity and non-uniqueness of the corresponding inverse problems. Stochastic inversion approaches have the advantage of providing a comprehensive exploration of the model space, which makes them ideally suited for addressing such issues. In this work, we present the stochastic inversion of time-lapse zero-offset-profile (ZOP) crosshole GPR traveltime data, collected during a forced infiltration experiment at the Arreneas field site in Denmark, in order to estimate subsurface VGM parameters and their corresponding uncertainties. We do this using a Bayesian Markov-chain-Monte-Carlo (MCMC) inversion approach. We find that the Bayesian-MCMC methodology indeed allows for a substantial refinement in the inferred posterior parameter distributions of the VGM parameters as compared to the corresponding priors. To further understand the potential impact on capturing the underlying hydrological behaviour, we also explore how the posterior VGM parameter distributions affect the hydrodynamic characteristics. In doing so, we find clear evidence that the approach pursued in this study allows for effective characterization of the hydrological behaviour of the probed subsurface region.
DECIPHERING THERMAL PHASE CURVES OF DRY, TIDALLY LOCKED TERRESTRIAL PLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koll, Daniel D. B.; Abbot, Dorian S., E-mail: dkoll@uchicago.edu
2015-03-20
Next-generation space telescopes will allow us to characterize terrestrial exoplanets. To do so effectively it will be crucial to make use of all available data. We investigate which atmospheric properties can, and cannot, be inferred from the broadband thermal phase curve of a dry and tidally locked terrestrial planet. First, we use dimensional analysis to show that phase curves are controlled by six nondimensional parameters. Second, we use an idealized general circulation model to explore the relative sensitivity of phase curves to these parameters. We find that the feature of phase curves most sensitive to atmospheric parameters is the peak-to-troughmore » amplitude. Moreover, except for hot and rapidly rotating planets, the phase amplitude is primarily sensitive to only two nondimensional parameters: (1) the ratio of dynamical to radiative timescales and (2) the longwave optical depth at the surface. As an application of this technique, we show how phase curve measurements can be combined with transit or emission spectroscopy to yield a new constraint for the surface pressure and atmospheric mass of terrestrial planets. We estimate that a single broadband phase curve, measured over half an orbit with the James Webb Space Telescope, could meaningfully constrain the atmospheric mass of a nearby super-Earth. Such constraints will be important for studying the atmospheric evolution of terrestrial exoplanets as well as characterizing the surface conditions on potentially habitable planets.« less
Duchêne, Gaëtan; Peeters, Frank; Peeters, André; Duprez, Thierry
2017-08-01
To compare the sensitivity and early temporal changes of diffusion parameters obtained from diffusion tensor imaging (DTI), diffusional kurtosis imaging (DKI), q-space analysis (QSA) and bi-exponential modelling in hyperacute stroke patients. A single investigational acquisition allowing the four diffusion analyses was performed on seven hyperacute stroke patients with a 3T system. The percentage change between ipsi- and contralateral regions were compared at admission and 24 h later. Two out of the seven patients were imaged every 6 h during this period. Kurtoses from both DKI and QSA were the most sensitive of the tested diffusion parameters in the few hours following ischemia. An early increase-maximum-decrease pattern of evolution was highlighted during the 24-h period for all parameters proportional to diffusion coefficients. A similar pattern was observed for both kurtoses in only one of two patients. Our comparison was performed using identical diffusion encoding timings and on patients in the same stage of their condition. Although preliminary, our findings confirm those of previous studies that showed enhanced sensitivity of kurtosis. A fine time mapping of diffusion metrics in hyperacute stroke patients was presented which advocates for further investigations on larger animal or human cohorts.
NASA Astrophysics Data System (ADS)
Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel
2016-12-01
On its way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques become more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content in the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and does, of course, not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
Schweiner, Frank; Laturner, Jeanine; Main, Jörg; Wunner, Günter
2017-11-01
Until now only for specific crossovers between Poissonian statistics (P), the statistics of a Gaussian orthogonal ensemble (GOE), or the statistics of a Gaussian unitary ensemble (GUE) have analytical formulas for the level spacing distribution function been derived within random matrix theory. We investigate arbitrary crossovers in the triangle between all three statistics. To this aim we propose an according formula for the level spacing distribution function depending on two parameters. Comparing the behavior of our formula for the special cases of P→GUE, P→GOE, and GOE→GUE with the results from random matrix theory, we prove that these crossovers are described reasonably. Recent investigations by F. Schweiner et al. [Phys. Rev. E 95, 062205 (2017)2470-004510.1103/PhysRevE.95.062205] have shown that the Hamiltonian of magnetoexcitons in cubic semiconductors can exhibit all three statistics in dependence on the system parameters. Evaluating the numerical results for magnetoexcitons in dependence on the excitation energy and on a parameter connected with the cubic valence band structure and comparing the results with the formula proposed allows us to distinguish between regular and chaotic behavior as well as between existent or broken antiunitary symmetries. Increasing one of the two parameters, transitions between different crossovers, e.g., from the P→GOE to the P→GUE crossover, are observed and discussed.
NASA Astrophysics Data System (ADS)
Xu, Z.; Mace, G. G.; Posselt, D. J.
2017-12-01
As we begin to contemplate the next generation atmospheric observing systems, it will be critically important that we are able to make informed decisions regarding the trade space between scientific capability and the need to keep complexity and cost within definable limits. To explore this trade space as it pertains to understanding key cloud and precipitation processes, we are developing a Markov Chain Monte Carlo (MCMC) algorithm suite that allows us to arbitrarily define the specifications of candidate observing systems and then explore how the uncertainties in key retrieved geophysical parameters respond to that observing system. MCMC algorithms produce a more complete posterior solution space, and allow for an objective examination of information contained in measurements. In our initial implementation, MCMC experiments are performed to retrieve vertical profiles of cloud and precipitation properties from a spectrum of active and passive measurements collected by aircraft during the ACE Radiation Definition Experiments (RADEX). Focusing on shallow cumulus clouds observed during the Integrated Precipitation and Hydrology EXperiment (IPHEX), observing systems in this study we consider W and Ka-band radar reflectivity, path-integrated attenuation at those frequencies, 31 and 94 GHz brightness temperatures as well as visible and near-infrared reflectance. By varying the sensitivity and uncertainty of these measurements, we quantify the capacity of various combinations of observations to characterize the physical properties of clouds and precipitation.
Frame covariant nonminimal multifield inflation
NASA Astrophysics Data System (ADS)
Karamitsos, Sotirios; Pilaftsis, Apostolos
2018-02-01
We introduce a frame-covariant formalism for inflation of scalar-curvature theories by adopting a differential geometric approach which treats the scalar fields as coordinates living on a field-space manifold. This ensures that our description of inflation is both conformally and reparameterization covariant. Our formulation gives rise to extensions of the usual Hubble and potential slow-roll parameters to generalized fully frame-covariant forms, which allow us to provide manifestly frame-invariant predictions for cosmological observables, such as the tensor-to-scalar ratio r, the spectral indices nR and nT, their runnings αR and αT, the non-Gaussianity parameter fNL, and the isocurvature fraction βiso. We examine the role of the field space curvature in the generation and transfer of isocurvature modes, and we investigate the effect of boundary conditions for the scalar fields at the end of inflation on the observable inflationary quantities. We explore the stability of the trajectories with respect to the boundary conditions by using a suitable sensitivity parameter. To illustrate our approach, we first analyze a simple minimal two-field scenario before studying a more realistic nonminimal model inspired by Higgs inflation. We find that isocurvature effects are greatly enhanced in the latter scenario and must be taken into account for certain values in the parameter space such that the model is properly normalized to the observed scalar power spectrum PR. Finally, we outline how our frame-covariant approach may be extended beyond the tree-level approximation through the Vilkovisky-De Witt formalism, which we generalize to take into account conformal transformations, thereby leading to a fully frame-invariant effective action at the one-loop level.
Upper limits to submillimetre-range forces from extra space-time dimensions.
Long, Joshua C; Chan, Hilton W; Churnside, Allison B; Gulbis, Eric A; Varney, Michael C M; Price, John C
2003-02-27
String theory is the most promising approach to the long-sought unified description of the four forces of nature and the elementary particles, but direct evidence supporting it is lacking. The theory requires six extra spatial dimensions beyond the three that we observe; it is usually supposed that these extra dimensions are curled up into small spaces. This 'compactification' induces 'moduli' fields, which describe the size and shape of the compact dimensions at each point in space-time. These moduli fields generate forces with strengths comparable to gravity, which according to some recent predictions might be detected on length scales of about 100 microm. Here we report a search for gravitational-strength forces using planar oscillators separated by a gap of 108 micro m. No new forces are observed, ruling out a substantial portion of the previously allowed parameter space for the strange and gluon moduli forces, and setting a new upper limit on the range of the string dilaton and radion forces.
NASA Technical Reports Server (NTRS)
Bula, R. J.
1997-01-01
The ASTROCULTURE(trademark) plant growth unit flown as part on the STS-63 mission in February 1995, represented the first time plants were flown in microgravity in a enclosed controlled environment plant growth facility. In addition to control of the major environmental parameters, nutrients were provided to the plants with the ZEOPONICS system developed by NASA Johnson Space Center scientists. Two plant species were included in this space experiment, dwarf wheat (Triticum aestivum) and a unique mustard called "Wisconsin Fast Plants" (Brassica rapa). Extensive post-flight analyses have been performed on the plant material and it has been concluded that plant growth and development was normal during the period the plants were in the microgravity environment of space. However, adequate plant growth and development control data were not available for direct comparisons of plant responses to the microgravity environment with those of plants grown at 1 g. Such data would allow for a more complete interpretation of the extent that microgravity affects plant growth and development.
Measuring the Microlensing Parallax from Various Space Observatories
NASA Astrophysics Data System (ADS)
Bachelet, E.; Hinse, T. C.; Street, R.
2018-05-01
A few observational methods allow the measurement of the mass and distance of the lens-star for a microlensing event. A first estimate can be obtained by measuring the microlensing parallax effect produced by either the motion of the Earth (annual parallax) or the contemporaneous observation of the lensing event from two (or more) observatories (space or terrestrial parallax) sufficiently separated from each other. Further developing ideas originally outlined by Gould as well as Mogavero & Beaulieu, we review the possibility of measuring systematically the microlensing parallax using a telescope based on the Moon surface and other space-based observing platforms, including the upcoming WFIRST space-telescope. We first generalize the Fisher matrix formulation and present results demonstrating the advantage for each observing scenario. We conclude by outlining the limitation of the Fisher matrix analysis when submitted to a practical data modeling process. By considering a lunar-based parallax observation, we find that parameter correlations introduce a significant loss in detection efficiency of the probed lunar parallax effect.
NASA Astrophysics Data System (ADS)
Tarnopolski, Mariusz
2018-01-01
The Chirikov standard map and the 2D Froeschlé map are investigated. A few thousand values of the Hurst exponent (HE) and the maximal Lyapunov exponent (mLE) are plotted in a mixed space of the nonlinear parameter versus the initial condition. Both characteristic exponents reveal remarkably similar structures in this space. A tight correlation between the HEs and mLEs is found, with the Spearman rank ρ = 0 . 83 and ρ = 0 . 75 for the Chirikov and 2D Froeschlé maps, respectively. Based on this relation, a machine learning (ML) procedure, using the nearest neighbor algorithm, is performed to reproduce the HE distribution based on the mLE distribution alone. A few thousand HE and mLE values from the mixed spaces were used for training, and then using 2 - 2 . 4 × 105 mLEs, the HEs were retrieved. The ML procedure allowed to reproduce the structure of the mixed spaces in great detail.
NASA Astrophysics Data System (ADS)
Ryblewski, Radoslaw; Strickland, Michael
2015-07-01
We compute dilepton production from the deconfined phase of the quark-gluon plasma using leading-order (3 +1 )-dimensional anisotropic hydrodynamics. The anisotropic hydrodynamics equations employed describe the full spatiotemporal evolution of the transverse temperature, spheroidal momentum-space anisotropy parameter, and the associated three-dimensional collective flow of the matter. The momentum-space anisotropy is also taken into account in the computation of the dilepton production rate, allowing for a self-consistent description of dilepton production from the quark-gluon plasma. For our final results, we present predictions for high-energy dilepton yields as a function of invariant mass, transverse momentum, and pair rapidity. We demonstrate that high-energy dilepton production is extremely sensitive to the assumed level of initial momentum-space anisotropy of the quark-gluon plasma. As a result, it may be possible to experimentally constrain the early-time momentum-space anisotropy of the quark-gluon plasma generated in relativistic heavy-ion collisions using high-energy dilepton yields.
Aeolus End-To-End Simulator and Wind Retrieval Algorithms up to Level 1B
NASA Astrophysics Data System (ADS)
Reitebuch, Oliver; Marksteiner, Uwe; Rompel, Marc; Meringer, Markus; Schmidt, Karsten; Huber, Dorit; Nikolaus, Ines; Dabas, Alain; Marshall, Jonathan; de Bruin, Frank; Kanitz, Thomas; Straume, Anne-Grete
2018-04-01
The first wind lidar in space ALADIN will be deployed on ESÁs Aeolus mission. In order to assess the performance of ALADIN and to optimize the wind retrieval and calibration algorithms an end-to-end simulator was developed. This allows realistic simulations of data downlinked by Aeolus. Together with operational processors this setup is used to assess random and systematic error sources and perform sensitivity studies about the influence of atmospheric and instrument parameters.
Lerman, Gilad M; Levy, Uriel
2007-08-01
We study the tight-focusing properties of spatially variant vector optical fields with elliptical symmetry of linear polarization. We found the eccentricity of the incident polarized light to be an important parameter providing an additional degree of freedom assisting in controlling the field properties at the focus and allowing matching of the field distribution at the focus to the specific application. Applications of these space-variant polarized beams vary from lithography and optical storage to particle beam trapping and material processing.
R (D(*)) anomalies in light of a nonminimal universal extra dimension
NASA Astrophysics Data System (ADS)
Biswas, Aritra; Shaw, Avirup; Patra, Sunando Kumar
2018-02-01
We estimate contributions from Kaluza-Klein excitations of gauge bosons and physical charge scalar for the explanation of the lepton flavor universality violating excess in the ratios R (D ) and R (D*) in 5 dimensional universal extra dimensional scenario with nonvanishing boundary localized terms. This model is conventionally known as nonminimal universal extra dimensional model. We obtain the allowed parameter space in accordance with constraints coming from Bc→τ ν decay, as well as those from the electroweak precision tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akkelin, S.V.; Sinyukov, Yu.M.
A method allowing analysis of the overpopulation of phase space in heavy ion collisions in a model-independent way is proposed within the hydrodynamic approach. It makes it possible to extract a chemical potential of thermal pions at freeze-out, irrespective of the form of freeze-out (isothermal) hypersurface in Minkowski space and transverse flows on it. The contributions of resonance (with masses up to 2 GeV) decays to spectra, interferometry volumes, and phase-space densities are calculated and discussed in detail. The estimates of average phase-space densities and chemical potentials of thermal pions are obtained for SPS and RHIC energies. They demonstrate thatmore » multibosonic phenomena at those energies might be considered as a correction factor rather than as a significant physical effect. The analysis of the evolution of the pion average phase-space density in chemically frozen hadron systems shows that it is almost constant or slightly increases with time while the particle density and phase-space density at each space point decreases rapidly during the system's expansion. We found that, unlike the particle density, the average phase-space density has no direct link to the freeze-out criterion and final thermodynamic parameters, being connected rather to the initial phase-space density of hadronic matter formed in relativistic nucleus-nucleus collisions.« less
International Space Station USOS Crew Quarters On-orbit vs Design Performance Comparison
NASA Technical Reports Server (NTRS)
Broyan, James Lee, Jr.; Borrego, Melissa Ann; Bahr, Juergen F.
2008-01-01
The International Space Station (ISS) United States Operational Segment (USOS) received the first two permanent ISS Crew Quarters (CQ) on Utility Logistics Flight Two (ULF2) in November 2008. Up to four CQs can be installed into the Node 2 element to increase the ISS crewmember size to six. The CQs provide private crewmember space with enhanced acoustic noise mitigation, integrated radiation reduction material, communication equipment, redundant electrical systems, and redundant caution and warning systems. The racksized CQ is a system with multiple crewmember restraints, adjustable lighting, controllable ventilation, and interfaces that allow each crewmember to personalize their CQ workspace. The deployment and initial operational checkout during integration of the ISS CQ to the Node is described. Additionally, the comparison of on-orbit to original design performance is outlined for the following key operational parameters: interior acoustic performance, air flow rate, temperature rise, and crewmember feedback on provisioning and restraint layout.
Leaking in history space: A way to analyze systems subjected to arbitrary driving
NASA Astrophysics Data System (ADS)
Kaszás, Bálint; Feudel, Ulrike; Tél, Tamás
2018-03-01
Our aim is to unfold phase space structures underlying systems with a drift in their parameters. Such systems are non-autonomous and belong to the class of non-periodically driven systems where the traditional theory of chaos (based e.g., on periodic orbits) does not hold. We demonstrate that even such systems possess an underlying topological horseshoe-like structure at least for a finite period of time. This result is based on a specifically developed method which allows to compute the corresponding time-dependent stable and unstable foliations. These structures can be made visible by prescribing a certain type of history for an ensemble of trajectories in phase space and by analyzing the trajectories fulfilling this constraint. The process can be considered as a leaking in history space—a generalization of traditional leaking, a method that has become widespread in traditional chaotic systems, to leaks depending on time.
Bootstrapping conformal field theories with the extremal functional method.
El-Showk, Sheer; Paulos, Miguel F
2013-12-13
The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future.
Attitude determination for high-accuracy submicroradian jitter pointing on space-based platforms
NASA Astrophysics Data System (ADS)
Gupta, Avanindra A.; van Houten, Charles N.; Germann, Lawrence M.
1990-10-01
A description of the requirement definition process is given for a new wideband attitude determination subsystem (ADS) for image motion compensation (IMC) systems. The subsystem consists of either lateral accelerometers functioning in differential pairs or gas-bearing gyros for high-frequency sensors using CCD-based star trackers for low-frequency sensors. To minimize error the sensor signals are combined so that the mixing filter does not allow phase distortion. The two ADS models are introduced in an IMC simulation to predict measurement error, correction capability, and residual image jitter for a variety of system parameters. The IMC three-axis testbed is utilized to simulate an incoming beam in inertial space. Results demonstrate that both mechanical and electronic IMC meet the requirements of image stabilization for space-based observation at submicroradian-jitter levels. Currently available technology may be employed to implement IMC systems.
Overview of computational control research at UT Austin
NASA Technical Reports Server (NTRS)
Bong, Wie
1989-01-01
An overview of current research activities at UT Austin is presented to discuss certain technical issues in the following areas: (1) Computer-Aided Nonlinear Control Design: In this project, the describing function method is employed for the nonlinear control analysis and design of a flexible spacecraft equipped with pulse modulated reaction jets. INCA program has been enhanced to allow the numerical calculation of describing functions as well as the nonlinear limit cycle analysis capability in the frequency domain; (2) Robust Linear Quadratic Gaussian (LQG) Compensator Synthesis: Robust control design techniques and software tools are developed for flexible space structures with parameter uncertainty. In particular, an interactive, robust multivariable control design capability is being developed for INCA program; and (3) LQR-Based Autonomous Control System for the Space Station: In this project, real time implementation of LQR-based autonomous control system is investigated for the space station with time-varying inertias and with significant multibody dynamic interactions.
Noncommutative products of Euclidean spaces
NASA Astrophysics Data System (ADS)
Dubois-Violette, Michel; Landi, Giovanni
2018-05-01
We present natural families of coordinate algebras on noncommutative products of Euclidean spaces R^{N_1} × _R R^{N_2} . These coordinate algebras are quadratic ones associated with an R -matrix which is involutive and satisfies the Yang-Baxter equations. As a consequence, they enjoy a list of nice properties, being regular of finite global dimension. Notably, we have eight-dimensional noncommutative euclidean spaces R4 × _R R4 . Among these, particularly well behaved ones have deformation parameter u \\in S^2 . Quotients include seven spheres S7_u as well as noncommutative quaternionic tori TH_u = S^3 × _u S^3 . There is invariance for an action of {{SU}}(2) × {{SU}}(2) on the torus TH_u in parallel with the action of U(1) × U(1) on a `complex' noncommutative torus T^2_θ which allows one to construct quaternionic toric noncommutative manifolds. Additional classes of solutions are disjoint from the classical case.
Description of and preliminary tests results for the Joint Damping Experiment (JDX)
NASA Technical Reports Server (NTRS)
Bingham, Jeffrey G.; Folkman, Steven L.
1995-01-01
An effort is currently underway to develop an experiment titled joint Damping E_periment (JDX) to fly on the Space Shuttle as Get Away Special Payload G-726. This project is funded by NASA's IN-Space Technology Experiments Program and is scheduled to fly in July 1995 on STS-69. JDX will measure the influence of gravity on the structural damping of a three bay truss having clearance fit pinned joints. Structural damping is an important parameter in the dynamics of space structures. Future space structures will require more precise knowledge of structural damping than is currently available. The mission objectives are to develop a small-scale shuttle flight experiment that allows researchers to: (1) characterize the influence of gravity and joint gaps on structural damping and dynamic behavior of a small-scale truss model, and (2) evaluate the applicability of low-g aircraft test results for predicting on-orbit behavior. Completing the above objectives will allow a better understanding and/or prediction of structural damping occurring in a pin jointed truss. Predicting damping in joints is quite difficult. One of the important variables influencing joint damping is gravity. Previous work has shown that gravity loads can influence damping in a pin jointed truss structure. Flying this experiment as a GAS payload will allow testing in a microgravity environment. The on-orbit data (in micro-gravity) will be compared with ground test results. These data will be used to help develop improved models to predict damping due to pinned joints. Ground and low-g aircraft testing of this experiment has been completed. This paper describes the experiment and presents results of both ground and low-g aircraft tests which demonstrate that damping of the truss is dramatically influenced by gravity.
An analysis of the massless planet approximation in transit light curve models
NASA Astrophysics Data System (ADS)
Millholland, Sarah; Ruch, Gerry
2015-08-01
Many extrasolar planet transit light curve models use the approximation of a massless planet. They approximate the planet as orbiting elliptically with the host star at the orbit’s focus instead of depicting the planet and star as both orbiting around a common center of mass. This approximation should generally be very good because the transit is a small fraction of the full-phase curve and the planet to stellar mass ratio is typically very small. However, to fully examine the legitimacy of this approximation, it is useful to perform a robust, all-parameter space-encompassing statistical comparison between the massless planet model and the more accurate model.Towards this goal, we establish two questions: (1) In what parameter domain is the approximation invalid? (2) If characterizing an exoplanetary system in this domain, what is the error of the parameter estimates when using the simplified model? We first address question (1). Given each parameter vector in a finite space, we can generate the simplified and more complete model curves. Associated with these model curves is a measure of the deviation between them, such as the root mean square (RMS). We use Gibbs sampling to generate a sample that is distributed according to the RMS surface. The high-density regions in the sample correspond to a large deviation between the models. To determine the domains of these high-density areas, we first employ the Ordering Points to Identify the Clustering Structure (OPTICS) algorithm. We then characterize the subclusters by performing the Patient Rule Induction Method (PRIM) on the transformed Principal Component spaces of each cluster. This process yields descriptors of the parameter domains with large discrepancies between the models.To consider question (2), we start by generating synthetic transit curve observations in the domains specified by the above analysis. We then derive the best-fit parameters of these synthetic light curves according to each model and examine the quality of agreement between the estimated parameters. Taken as a whole, these steps allow for a thorough analysis of the validity of the massless planet approximation.
The potential of a GAS can with payload G-169
NASA Technical Reports Server (NTRS)
Tamir, David
1988-01-01
The feasibility of using welding for the construction, expansion and emergency repair of space based structures is discussed and the advantages of gas tungsten arc welding (GTAW) over other welding techniques are briefly examined. The objective and design concept for the G-169 Get Away Special payload are described. The G-169 experiment will allow the comparison of a space GTA welded joint with a terrestrial GTA welded joint with all parameters held constant except for gravitational forces. Specifically, a bead-on-plate weld around the perimeter of a 2 inch diameter stainless steel pipe section will be performed. The use of Learjet microgravity simulation for the G-169 and other Get Away Special experiments is also addressed.
Construction of CASCI-type wave functions for very large active spaces.
Boguslawski, Katharina; Marti, Konrad H; Reiher, Markus
2011-06-14
We present a procedure to construct a configuration-interaction expansion containing arbitrary excitations from an underlying full-configuration-interaction-type wave function defined for a very large active space. Our procedure is based on the density-matrix renormalization group (DMRG) algorithm that provides the necessary information in terms of the eigenstates of the reduced density matrices to calculate the coefficient of any basis state in the many-particle Hilbert space. Since the dimension of the Hilbert space scales binomially with the size of the active space, a sophisticated Monte Carlo sampling routine is employed. This sampling algorithm can also construct such configuration-interaction-type wave functions from any other type of tensor network states. The configuration-interaction information obtained serves several purposes. It yields a qualitatively correct description of the molecule's electronic structure, it allows us to analyze DMRG wave functions converged for the same molecular system but with different parameter sets (e.g., different numbers of active-system (block) states), and it can be considered a balanced reference for the application of a subsequent standard multi-reference configuration-interaction method.
Unsaturated flow characterization utilizing water content data collected within the capillary fringe
Baehr, Arthur; Reilly, Timothy J.
2014-01-01
An analysis is presented to determine unsaturated zone hydraulic parameters based on detailed water content profiles, which can be readily acquired during hydrological investigations. Core samples taken through the unsaturated zone allow for the acquisition of gravimetrically determined water content data as a function of elevation at 3 inch intervals. This dense spacing of data provides several measurements of the water content within the capillary fringe, which are utilized to determine capillary pressure function parameters via least-squares calibration. The water content data collected above the capillary fringe are used to calculate dimensionless flow as a function of elevation providing a snapshot characterization of flow through the unsaturated zone. The water content at a flow stagnation point provides an in situ estimate of specific yield. In situ determinations of capillary pressure function parameters utilizing this method, together with particle-size distributions, can provide a valuable supplement to data libraries of unsaturated zone hydraulic parameters. The method is illustrated using data collected from plots within an agricultural research facility in Wisconsin.
Determination of Littlest Higgs Model Parameters at the ILC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conley, John A.; Hewett, JoAnne; Le, My Phuong
2005-07-27
We examine the effects of the extended gauge sector of the Littlest Higgs model in high energy e{sup +}e{sup -} collisions. We find that the search reach in e{sup +}e{sup -} {yields} f{bar f} at a {radical}s = 500 GeV International Linear Collider covers essentially the entire parameter region where the Littlest Higgs model is relevant to the gauge hierarchy problem. In addition, we show that this channel provides an accurate determination of the fundamental model parameters, to the precision of a few percent, provided that the LHC measures the mass of the heavy neutral gauge .eld. Additionally, we showmore » that the couplings of the extra gauge bosons to the light Higgs can be observed from the process e{sup +}e{sup -} {yields} Zh for a significant region of the parameter space. This allows for confirmation of the structure of the cancellation of the Higgs mass quadratic divergence and would verify the little Higgs mechanism.« less
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1992-01-01
The aspect of controller design for improving the ride quality of aircraft in terms of damping ratio and natural frequency specifications on the short period dynamics is addressed. The controller is designed to be robust with respect to uncertainties in the real parameters of the control design model such as uncertainties in the dimensional stability derivatives, imperfections in actuator/sensor locations and possibly variations in flight conditions, etc. The design is based on a new robust root clustering theory developed by the author by extending the nominal root clustering theory of Gutman and Jury to perturbed matrices. The proposed methodology allows to get an explicit relationship between the parameters of the root clustering region and the uncertainty radius of the parameter space. The current literature available for robust stability becomes a special case of this unified theory. The bounds derived on the parameter perturbation for robust root clustering are then used in selecting the robust controller.
Linear system identification via backward-time observer models
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh Q.
1992-01-01
Presented here is an algorithm to compute the Markov parameters of a backward-time observer for a backward-time model from experimental input and output data. The backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) for the backward-time system identification. The identified backward-time system Markov parameters are used in the Eigensystem Realization Algorithm to identify a backward-time state-space model, which can be easily converted to the usual forward-time representation. If one reverses time in the model to be identified, what were damped true system modes become modes with negative damping, growing as the reversed time increases. On the other hand, the noise modes in the identification still maintain the property that they are stable. The shift from positive damping to negative damping of the true system modes allows one to distinguish these modes from noise modes. Experimental results are given to illustrate when and to what extent this concept works.
Ring rolling process simulation for geometry optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Ring Rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during the deformation in order to ensure the correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain anular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of main roll) on geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters which allows to minimize the percentage error of each obtained dimension with respect to its nominal value. The software allows to find the relationship between input and output parameters applying Response Surface Methodology (RSM), by using the exact values of output parameters in the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied. At the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of standard deviations of the dimensions obtained for the final ring.
NASA Astrophysics Data System (ADS)
Djallel Dilmi, Mohamed; Mallet, Cécile; Barthes, Laurent; Chazottes, Aymeric
2017-04-01
Rain time series records are generally studied using rainfall rate or accumulation parameters, which are estimated for a fixed duration (typically 1 min, 1 h or 1 day). In this study we use the concept of rain events
. The aim of the first part of this paper is to establish a parsimonious characterization of rain events, using a minimal set of variables selected among those normally used for the characterization of these events. A methodology is proposed, based on the combined use of a genetic algorithm (GA) and self-organizing maps (SOMs). It can be advantageous to use an SOM, since it allows a high-dimensional data space to be mapped onto a two-dimensional space while preserving, in an unsupervised manner, most of the information contained in the initial space topology. The 2-D maps obtained in this way allow the relationships between variables to be determined and redundant variables to be removed, thus leading to a minimal subset of variables. We verify that such 2-D maps make it possible to determine the characteristics of all events, on the basis of only five features (the event duration, the peak rain rate, the rain event depth, the standard deviation of the rain rate event and the absolute rain rate variation of the order of 0.5). From this minimal subset of variables, hierarchical cluster analyses were carried out. We show that clustering into two classes allows the conventional convective and stratiform classes to be determined, whereas classification into five classes allows this convective-stratiform classification to be further refined. Finally, our study made it possible to reveal the presence of some specific relationships between these five classes and the microphysics of their associated rain events.
24 CFR 982.624 - Manufactured home space rental: Utility allowance schedule.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Manufactured home space rental... Special Housing Types Manufactured Home Space Rental § 982.624 Manufactured home space rental: Utility allowance schedule. The PHA must establish utility allowances for manufactured home space rental. For the...
24 CFR 982.624 - Manufactured home space rental: Utility allowance schedule.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Manufactured home space rental... Special Housing Types Manufactured Home Space Rental § 982.624 Manufactured home space rental: Utility allowance schedule. The PHA must establish utility allowances for manufactured home space rental. For the...
24 CFR 982.624 - Manufactured home space rental: Utility allowance schedule.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false Manufactured home space rental... Special Housing Types Manufactured Home Space Rental § 982.624 Manufactured home space rental: Utility allowance schedule. The PHA must establish utility allowances for manufactured home space rental. For the...
2013-01-01
Background We present a software tool called SENB, which allows the geometric and biophysical neuronal properties in a simple computational model of a Hodgkin-Huxley (HH) axon to be changed. The aim of this work is to develop a didactic and easy-to-use computational tool in the NEURON simulation environment, which allows graphical visualization of both the passive and active conduction parameters and the geometric characteristics of a cylindrical axon with HH properties. Results The SENB software offers several advantages for teaching and learning electrophysiology. First, SENB offers ease and flexibility in determining the number of stimuli. Second, SENB allows immediate and simultaneous visualization, in the same window and time frame, of the evolution of the electrophysiological variables. Third, SENB calculates parameters such as time and space constants, stimuli frequency, cellular area and volume, sodium and potassium equilibrium potentials, and propagation velocity of the action potentials. Furthermore, it allows the user to see all this information immediately in the main window. Finally, with just one click SENB can save an image of the main window as evidence. Conclusions The SENB software is didactic and versatile, and can be used to improve and facilitate the teaching and learning of the underlying mechanisms in the electrical activity of an axon using the biophysical properties of the squid giant axon. PMID:23675833
NASA Astrophysics Data System (ADS)
He, Yan; Wright, Kevin; Kouachi, Said; Chien, Chih-Chun
2018-02-01
One-dimensional superlattices with periodic spatial modulations of onsite potentials or tunneling coefficients can exhibit a variety of properties associated with topology or symmetry. Recent developments of ring-shaped optical lattices allow a systematic study of those properties in superlattices with or without boundaries. While superlattices with additional modulating parameters are shown to have quantized topological invariants in the augmented parameter space, we also found localized or zero-energy states associated with symmetries of the Hamiltonians. Probing those states in ultracold atoms is possible by utilizing recently proposed methods analyzing particle depletion or the local density of states. Moreover, we summarize feasible realizations of configurable optical superlattices using currently available techniques.
Multidimensional Skyrme-density-functional study of the spontaneous fission of 238U
Sadhukhan, J.; Mazurek, K.; Dobaczewski, J.; ...
2015-01-01
We determined the spontaneous fission lifetime of 238U by a minimization of the action integral in a three-dimensional space of collective variables. Apart from the mass-distribution multipole moments Q 20 (elongation) and Q 30 (left–right asymmetry), we also considered the pairing-fluctuation parameter λ 2 as a collective coordinate. The collective potential was obtained self-consistently using the Skyrme energy density functional SkM*. The inertia tensor was obtained within the nonperturbative cranking approximation to the adiabatic time-dependent Hartree–Fock–Bogoliubov approach. As a result, the pairing-fluctuation parameter λ 2 allowed us to control the pairing gap along the fission path, which significantly changed themore » spontaneous fission lifetime.« less
Distributed reacceleration of cosmic rays
NASA Technical Reports Server (NTRS)
Wandel, Amri; Eichler, David; Letaw, John R.; Silberberg, Rein; Tsao, C. H.
1985-01-01
A model is developed in which cosmic rays, in addition to their initial acceleration by a strong shock, are continuously reaccelerated while propagating through the Galaxy. The equations describing this acceleration scheme are solved analytically and numerically. Solutions for the spectra of primary and secondary cosmic rays are given in a closed analytic form, allowing a rapid search in parameter space for viable propagation models with distributed reeacceleration included. The observed boron-to-carbon ratio can be reproduced by the reacceleration theory over a range of escape parameters, some of them quite different from the standard leaky-box model. It is also shown that even a very modest amount of reacceleration by strong shocks causes the boron-to-carbon ratio to level off at sufficiently high energies.
Average activity of excitatory and inhibitory neural populations
NASA Astrophysics Data System (ADS)
Roulet, Javier; Mindlin, Gabriel B.
2016-09-01
We develop an extension of the Ott-Antonsen method [E. Ott and T. M. Antonsen, Chaos 18(3), 037113 (2008)] that allows obtaining the mean activity (spiking rate) of a population of excitable units. By means of the Ott-Antonsen method, equations for the dynamics of the order parameters of coupled excitatory and inhibitory populations of excitable units are obtained, and their mean activities are computed. Two different excitable systems are studied: Adler units and theta neurons. The resulting bifurcation diagrams are compared with those obtained from studying the phenomenological Wilson-Cowan model in some regions of the parameter space. Compatible behaviors, as well as higher dimensional chaotic solutions, are observed. We study numerical simulations to further validate the equations.
Average activity of excitatory and inhibitory neural populations
Mindlin, Gabriel B.
2016-01-01
We develop an extension of the Ott-Antonsen method [E. Ott and T. M. Antonsen, Chaos 18(3), 037113 (2008)] that allows obtaining the mean activity (spiking rate) of a population of excitable units. By means of the Ott-Antonsen method, equations for the dynamics of the order parameters of coupled excitatory and inhibitory populations of excitable units are obtained, and their mean activities are computed. Two different excitable systems are studied: Adler units and theta neurons. The resulting bifurcation diagrams are compared with those obtained from studying the phenomenological Wilson-Cowan model in some regions of the parameter space. Compatible behaviors, as well as higher dimensional chaotic solutions, are observed. We study numerical simulations to further validate the equations. PMID:27781447
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces represent a possibility to withdraw from discrete input values which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III) which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automatize the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) in applying the same strategy to the more complex, dynamic model r.avaflow.
Zimmermann, Johannes; Wright, Aidan G C
2017-01-01
The interpersonal circumplex is a well-established structural model that organizes interpersonal functioning within the two-dimensional space marked by dominance and affiliation. The structural summary method (SSM) was developed to evaluate the interpersonal nature of other constructs and measures outside the interpersonal circumplex. To date, this method has been primarily descriptive, providing no way to draw inferences when comparing SSM parameters across constructs or groups. We describe a newly developed resampling-based method for deriving confidence intervals, which allows for SSM parameter comparisons. In a series of five studies, we evaluated the accuracy of the approach across a wide range of possible sample sizes and parameter values, and demonstrated its utility for posing theoretical questions on the interpersonal nature of relevant constructs (e.g., personality disorders) using real-world data. As a result, the SSM is strengthened for its intended purpose of construct evaluation and theory building. © The Author(s) 2015.
Saturn systems holddown acoustic efficiency and normalized acoustic power spectrum.
NASA Technical Reports Server (NTRS)
Gilbert, D. W.
1972-01-01
Saturn systems field acoustic data are used to derive mid- and far-field prediction parameters for rocket engine noise. The data were obtained during Saturn vehicle launches at the Kennedy Space Center. The data base is a sorted set of acoustic data measured during the period 1961 through 1971 for Saturn system launches SA-1 through AS-509. The model assumes hemispherical radiation from a simple source located at the intersection of the longitudinal axis of each booster and the engine exit plane. The model parameters are evaluated only during vehicle holddown. The acoustic normalized power spectrum and efficiency for each system are isolated as a composite from the data using linear numerical methods. The specific definitions of each allows separation. The resulting power spectra are nondimensionalized as a function of rocket engine parameters. The nondimensional Saturn system acoustic spectrum and efficiencies are compared as a function of Strouhal number with power spectra from other systems.
NASA Technical Reports Server (NTRS)
Steele, John W.; Rector, Tony; Gazda, Daniel; Lewis, John
2011-01-01
An EMU water processing kit (Airlock Coolant Loop Recovery -- A/L CLR) was developed as a corrective action to Extravehicular Mobility Unit (EMU) coolant flow disruptions experienced on the International Space Station (ISS) in May of 2004 and thereafter. A conservative duty cycle and set of use parameters for A/L CLR use and component life were initially developed and implemented based on prior analysis results and analytical modeling. Several initiatives were undertaken to optimize the duty cycle and use parameters of the hardware. Examination of post-flight samples and EMU Coolant Loop hardware provided invaluable information on the performance of the A/L CLR and has allowed for an optimization of the process. The intent of this paper is to detail the evolution of the A/L CLR hardware, efforts to optimize the duty cycle and use parameters, and the final recommendations for implementation in the post-Shuttle retirement era.
Dynamics of a neuron model in different two-dimensional parameter-spaces
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters, a typical scenario is preserved: for all choice of two parameters, the parameter-space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that exist regions close these chaotic region, separated by the comb teeth, organized themselves in period-adding bifurcation cascades.
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution of this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. Potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
GUMICS-4 Year Run: Ground Magnetic Field Predictions
NASA Astrophysics Data System (ADS)
Honkonen, I. J.; Viljanen, A.; Juusola, L.; Facsko, G.; Vanhamäki, H.
2013-12-01
Space weather can have severe effects even at ground level when Geomagnetically Induced Currents (GIC) disrupt power transmission networks, the worst case being a complete blackout affecting millions of people. The importance of space weather forecasting as well as the need for model improvement and validation has been recognized internationally. The recently concluded GUMICS-4 one year run, in which solar wind observations obtained from OMNIWeb for the period 2002-01-29 to 2003-02-02 were given as input to the model, will allow GUMICS to be validated against observations on an unprecedented scale. The performance of GUMICS can be quantified statistically, as a function of, for example, the solar wind driver, various geomagnetic indices, magnetic local time and other parameters. Here we concentrate on the ability of GUMICS to predict ground magnetic field observations for one year of simulated results. The ground magnetic field predictions are compared to observations of the mainland IMAGE magnetometer stations located at CGM latitudes 54-68 N. Furthermore the GIC derived from ground magnetic field predictions are compared to observations along the natural gas pipeline at Mäntsälä, South Finland. Various metrics are used to objectively evaluate the performance of GUMICS as a function of different parameters, thereby providing significant insight into the space weather forecasting ability of models based on first principles.
NASA Astrophysics Data System (ADS)
Ishin, Artem; Perevalova, Natalia; Voeykov, Sergey; Khakhinov, Vitaliy
2017-12-01
Global and regional networks of GNSS receivers have been successfully used for geophysical research for many years; the number of continuous GNSS stations in the world is steadily growing. The article presents the first results of the use of a new regional network of GNSS stations (SibNet) in active space experiments. The Institute of Solar-Terrestrial Physics of Siberian Branch of Russian Academy of Sciences (ISTP SB RAS) has established this network in the South Baikal region. We describe in detail SibNet, characteristics of receivers in use, parameters of antennas and methods of their installation. We also present the general structure of observation site and the plot of coverage of the receiver operating zone at 50-55° latitudes by radio paths. It is shown that the selected location of receivers allows us to detect ionospheric irregularities of various scales. The purpose of the active space experiments was to reveal and record parameters of the ionospheric irregu larities caused by effects from jet streams of Progress cargo spacecraft. The mapping technique enabled us to identify weak, vertically localized ionospheric irregularities and associate them with the Progress spacecraft engine impact. Thus, it has been shown that SibNet deployed in the Southern Baikal region is an effective instrument for monitoring ionospheric conditions.
Extracting galactic structure parameters from multivariated density estimation
NASA Technical Reports Server (NTRS)
Chen, B.; Creze, M.; Robin, A.; Bienayme, O.
1992-01-01
Multivariate statistical analysis, including includes cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principle component analysis (dimensionlity reduction method), and nonparameter density estimation have been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and the sets of simulated points generated from a synthetic approach of galaxy modelling. These methodologies can be applied as the new tools to obtain information about hidden structure otherwise unrecognizable, and place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparameter density estimation to substitute for the true densities in both of the simulating sample and real sample in the five-dimensional space. In order to fit model predicted densities to reality, we derive a set of equations which include n lines (where n is the total number of observed points) and m (where m: the numbers of predefined groups) unknown parameters. A least-square estimation will allow us to determine the density law of different groups and components in the Galaxy. The output from our software, which can be used in many research fields, will also give out the systematic error between the model and the observation by a Bayes rule.
The Planetary and Space Simulation Facilities at DLR Cologne
NASA Astrophysics Data System (ADS)
Rabbow, Elke; Parpart, André; Reitz, Günther
2016-06-01
Astrobiology strives to increase our knowledge on the origin, evolution and distribution of life, on Earth and beyond. In the past centuries, life has been found on Earth in environments with extreme conditions that were expected to be uninhabitable. Scientific investigations of the underlying metabolic mechanisms and strategies that lead to the high adaptability of these extremophile organisms increase our understanding of evolution and distribution of life on Earth. Life as we know it depends on the availability of liquid water. Exposure of organisms to defined and complex extreme environmental conditions, in particular those that limit the water availability, allows the investigation of the survival mechanisms as well as an estimation of the possibility of the distribution to and survivability on other celestial bodies of selected organisms. Space missions in low Earth orbit (LEO) provide access for experiments to complex environmental conditions not available on Earth, but studies on the molecular and cellular mechanisms of adaption to these hostile conditions and on the limits of life cannot be performed exclusively in space experiments. Experimental space is limited and allows only the investigation of selected endpoints. An additional intensive ground based program is required, with easy to access facilities capable to simulate space and planetary environments, in particular with focus on temperature, pressure, atmospheric composition and short wavelength solar ultraviolet radiation (UV). DLR Cologne operates a number of Planetary and Space Simulation facilities (PSI) where microorganisms from extreme terrestrial environments or known for their high adaptability are exposed for mechanistic studies. Space or planetary parameters are simulated individually or in combination in temperature controlled vacuum facilities equipped with a variety of defined and calibrated irradiation sources. The PSI support basic research and were recurrently used for pre-flight test programs for several astrobiological space missions. Parallel experiments on ground provided essential complementary data supporting the scientific interpretation of the data received from the space missions.
Six-quark decays of the Higgs boson in supersymmetry with R-parity violation.
Carpenter, Linda M; Kaplan, David E; Rhee, Eun-Jung
2007-11-23
Both electroweak precision measurements and simple supersymmetric extensions of the standard model prefer a mass of the Higgs boson less than the experimental lower limit (on a standard-model-like Higgs boson) of 114 GeV. We show that supersymmetric models with R parity violation and baryon-number violation have a significant range of parameter space in which the Higgs boson dominantly decays to six jets. These decays are much more weakly constrained by current CERN LEP analyses and would allow for a Higgs boson mass near that of the Z. In general, lighter scalar quark and other superpartner masses are allowed. The Higgs boson would potentially be discovered at hadron colliders via the appearance of new displaced vertices.
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observedmore » galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.« less
Ab Initio Crystal Field for Lanthanides.
Ungur, Liviu; Chibotaru, Liviu F
2017-03-13
An ab initio methodology for the first-principle derivation of crystal-field (CF) parameters for lanthanides is described. The methodology is applied to the analysis of CF parameters in [Tb(Pc) 2 ] - (Pc=phthalocyanine) and Dy 4 K 2 ([Dy 4 K 2 O(OtBu) 12 ]) complexes, and compared with often used approximate and model descriptions. It is found that the application of geometry symmetrization, and the use of electrostatic point-charge and phenomenological CF models, lead to unacceptably large deviations from predictions based on ab initio calculations for experimental geometry. It is shown how the predictions of standard CASSCF (Complete Active Space Self-Consistent Field) calculations (with 4f orbitals in the active space) can be systematically improved by including effects of dynamical electronic correlation (CASPT2 step) and by admixing electronic configurations of the 5d shell. This is exemplified for the well-studied Er-trensal complex (H 3 trensal=2,2',2"-tris(salicylideneimido)trimethylamine). The electrostatic contributions to CF parameters in this complex, calculated with true charge distributions in the ligands, yield less than half of the total CF splitting, thus pointing to the dominant role of covalent effects. This analysis allows the conclusion that ab initio crystal field is an essential tool for the decent description of lanthanides. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Van Steenkiste, Gwendolyn; Jeurissen, Ben; Veraart, Jelle; den Dekker, Arnold J; Parizel, Paul M; Poot, Dirk H J; Sijbers, Jan
2016-01-01
Diffusion MRI is hampered by long acquisition times, low spatial resolution, and a low signal-to-noise ratio. Recently, methods have been proposed to improve the trade-off between spatial resolution, signal-to-noise ratio, and acquisition time of diffusion-weighted images via super-resolution reconstruction (SRR) techniques. However, during the reconstruction, these SRR methods neglect the q-space relation between the different diffusion-weighted images. An SRR method that includes a diffusion model and directly reconstructs high resolution diffusion parameters from a set of low resolution diffusion-weighted images was proposed. Our method allows an arbitrary combination of diffusion gradient directions and slice orientations for the low resolution diffusion-weighted images, optimally samples the q- and k-space, and performs motion correction with b-matrix rotation. Experiments with synthetic data and in vivo human brain data show an increase of spatial resolution of the diffusion parameters, while preserving a high signal-to-noise ratio and low scan time. Moreover, the proposed SRR method outperforms the previous methods in terms of the root-mean-square error. The proposed SRR method substantially increases the spatial resolution of MRI that can be obtained in a clinically feasible scan time. © 2015 Wiley Periodicals, Inc.
Constraining the near-core rotation of the γ Doradus star 43 Cygni using BRITE-Constellation data
NASA Astrophysics Data System (ADS)
Zwintz, K.; Van Reeth, T.; Tkachenko, A.; Gössl, S.; Pigulski, A.; Kuschnig, R.; Handler, G.; Moffat, A. F. J.; Popowicz, A.; Wade, G.; Weiss, W. W.
2017-12-01
Context. Photometric time series of the γ Doradus star 43 Cyg obtained with the BRITE-Constellation nano-satellites allow us to study its pulsational properties in detail and to constrain its interior structure. Aims: We aim to find a g-mode period-spacing pattern that allows us to determine the near-core rotation rate of 43 Cyg and redetermine the star's fundamental atmospheric parameters and chemical composition. Methods: We conducted a frequency analysis using the 156-day long data set obtained with the BRITE-Toronto satellite and employed a suite of MESA/GYRE models to derive the mode identification, asymptotic period-spacing, and near-core rotation rate. We also used high-resolution spectroscopic data with high signal-to-noise ratio obtained at the 1.2 m Mercator telescope with the HERMES spectrograph to redetermine the fundamental atmospheric parameters and chemical composition of 43 Cyg using the software Spectroscopy Made Easy (SME). Results: We detected 43 intrinsic pulsation frequencies and identified 18 of them to be part of a period-spacing pattern consisting of prograde dipole modes with an asymptotic period-spacing ΔΠl = 1 of 2970-570+700 s. The near-core rotation rate was determined to be frot = 0.56-0.14+0.12 d-1. The atmosphere of 43 Cyg shows solar chemical composition at an effective temperature, Teff, of 7150 ± 150 K, a log g of 4.2 ± 0.6 dex, and a projected rotational velocity, υsini, of 44 ± 4 km s-1. Conclusions: The morphology of the observed period-spacing patterns shows indications of a significant chemical gradient in the stellar interior. Based on data collected by the BRITE Constellation satellite mission, designed, built, launched, operated and supported by the Austrian Research Promotion Agency (FFG), the University of Vienna, the Technical University of Graz, the Canadian Space Agency (CSA), the University of Toronto Institute for Aerospace Studies (UTIAS), the Foundation for Polish Science & Technology (FNiTP MNiSW), and National Science Centre (NCN).The light curves (in tabular form) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A103
Evidence for inflation in an axion landscape
NASA Astrophysics Data System (ADS)
Nath, Pran; Piskunov, Maksim
2018-03-01
We discuss inflation models within supersymmetry and supergravity frameworks with a landscape of chiral superfields and one U(1) shift symmetry which is broken by non-perturbative symmetry breaking terms in the superpotential. We label the pseudo scalar component of the chiral fields axions and their real parts saxions. Thus in the models only one combination of axions will be a pseudo-Nambu-Goldstone-boson which will act as the inflaton. The proposed models constitute consistent inflation for the following reasons: the inflation potential arises dynamically with stabilized saxions, the axion decay constant can lie in the sub-Planckian region, and consistency with the Planck data is achieved. The axion landscape consisting of m axion pairs is assumed with the axions in each pair having opposite charges. A fast roll-slow roll splitting mechanism for the axion potential is proposed which is realized with a special choice of the axion basis. In this basis the 2 m coupled equations split into 2 m - 1 equations which enter in the fast roll and there is one unique linear combination of the 2 m fields which controls the slow roll and thus the power spectrum of curvature and tensor perturbations. It is shown that a significant part of the parameter space exists where inflation is successful, i.e., N pivot = [50, 60], the spectral index n s of curvature perturbations, and the ratio r of the power spectrum of tensor perturbations and curvature perturbations, lie in the experimentally allowed regions given by the Planck experiment. Further, it is shown that the model allows for a significant region of the parameter space where the effective axion decay constant can lie in the sub-Planckian domain. An analysis of the tensor spectral index n t is also given and the future experimental data which constraints n t will further narrow down the parameter space of the proposed inflationary models. Topics of further interest include implications of the model for gravitational waves and non-Gaussianities in the curvature perturbations. Also of interest is embedding of the model in strings which are expected to possess a large axionic landscape.
Virtual Construction of Space Habitats: Connecting Building Information Models (BIM) and SysML
NASA Technical Reports Server (NTRS)
Polit-Casillas, Raul; Howe, A. Scott
2013-01-01
Current trends in design, construction and management of complex projects make use of Building Information Models (BIM) connecting different types of data to geometrical models. This information model allow different types of analysis beyond pure graphical representations. Space habitats, regardless their size, are also complex systems that require the synchronization of many types of information and disciplines beyond mass, volume, power or other basic volumetric parameters. For this, the state-of-the-art model based systems engineering languages and processes - for instance SysML - represent a solid way to tackle this problem from a programmatic point of view. Nevertheless integrating this with a powerful geometrical architectural design tool with BIM capabilities could represent a change in the workflow and paradigm of space habitats design applicable to other aerospace complex systems. This paper shows some general findings and overall conclusions based on the ongoing research to create a design protocol and method that practically connects a systems engineering approach with a BIM architectural and engineering design as a complete Model Based Engineering approach. Therefore, one hypothetical example is created and followed during the design process. In order to make it possible this research also tackles the application of IFC categories and parameters in the aerospace field starting with the application upon the space habitats design as way to understand the information flow between disciplines and tools. By building virtual space habitats we can potentially improve in the near future the way more complex designs are developed from very little detail from concept to manufacturing.
A stochastic fractional dynamics model of space-time variability of rain
NASA Astrophysics Data System (ADS)
Kundu, Prasun K.; Travis, James E.
2013-09-01
varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and on the Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.
Wavefront Control Toolbox for James Webb Space Telescope Testbed
NASA Technical Reports Server (NTRS)
Shiri, Ron; Aronstein, David L.; Smith, Jeffery Scott; Dean, Bruce H.; Sabatke, Erin
2007-01-01
We have developed a Matlab toolbox for wavefront control of optical systems. We have applied this toolbox to the optical models of James Webb Space Telescope (JWST) in general and to the JWST Testbed Telescope (TBT) in particular, implementing both unconstrained and constrained wavefront optimization to correct for possible misalignments present on the segmented primary mirror or the monolithic secondary mirror. The optical models implemented in Zemax optical design program and information is exchanged between Matlab and Zemax via the Dynamic Data Exchange (DDE) interface. The model configuration is managed using the XML protocol. The optimization algorithm uses influence functions for each adjustable degree of freedom of the optical mode. The iterative and non-iterative algorithms have been developed to converge to a local minimum of the root-mean-square (rms) of wavefront error using singular value decomposition technique of the control matrix of influence functions. The toolkit is highly modular and allows the user to choose control strategies for the degrees of freedom to be adjusted on a given iteration and wavefront convergence criterion. As the influence functions are nonlinear over the control parameter space, the toolkit also allows for trade-offs between frequency of updating the local influence functions and execution speed. The functionality of the toolbox and the validity of the underlying algorithms have been verified through extensive simulations.
Modeling AWSoM CMEs with EEGGL: A New Approach for Space Weather Forecasting
NASA Astrophysics Data System (ADS)
Jin, M.; Manchester, W.; van der Holst, B.; Sokolov, I.; Toth, G.; Vourlidas, A.; de Koning, C. A.; Gombosi, T. I.
2015-12-01
The major source of destructive space weather is coronal mass ejections (CMEs). However, our understanding of CMEs and their propagation in the heliosphere is limited by the insufficient observations. Therefore, the development of first-principals numerical models plays a vital role in both theoretical investigation and providing space weather forecasts. Here, we present results of the simulation of CME propagation from the Sun to 1AU by combining the analytical Gibson & Low (GL) flux rope model with the state-of-art solar wind model AWSoM. We also provide an approach for transferring this research model to a space weather forecasting tool by demonstrating how the free parameters of the GL flux rope can be prescribed based on remote observations via the new Eruptive Event Generator by Gibson-Low (EEGGL) toolkit. This capability allows us to predict the long-term evolution of the CME in interplanetary space. We perform proof-of-concept case studies to show the capability of the model to capture physical processes that determine CME evolution while also reproducing many observed features both in the corona and at 1 AU. We discuss the potential and limitations of this model as a future space weather forecasting tool.
Simulation Exploration through Immersive Parallel Planes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinate's mapping the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, eachmore » individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest: a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used to both explore existing data as well as to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.« less
Estimating Consequences of MMOD Penetrations on ISS
NASA Technical Reports Server (NTRS)
Evans, H.; Hyde, James; Christiansen, E.; Lear, D.
2017-01-01
The threat from micrometeoroid and orbital debris (MMOD) impacts on space vehicles is often quantified in terms of the probability of no penetration (PNP). However, for large spacecraft, especially those with multiple compartments, a penetration may have a number of possible outcomes. The extent of the damage (diameter of hole, crack length or penetration depth), the location of the damage relative to critical equipment or crew, crew response, and even the time of day of the penetration are among the many factors that can affect the outcome. For the International Space Station (ISS), a Monte-Carlo style software code called Manned Spacecraft Crew Survivability (MSCSurv) is used to predict the probability of several outcomes of an MMOD penetration-broadly classified as loss of crew (LOC), crew evacuation (Evac), loss of escape vehicle (LEV), and nominal end of mission (NEOM). By generating large numbers of MMOD impacts (typically in the billions) and tracking the consequences, MSCSurv allows for the inclusion of a large number of parameters and models as well as enabling the consideration of uncertainties in the models and parameters. MSCSurv builds upon the results from NASA's Bumper software (which provides the probability of penetration and critical input data to MSCSurv) to allow analysts to estimate the probability of LOC, Evac, LEV, and NEOM. This paper briefly describes the overall methodology used by NASA to quantify LOC, Evac, LEV, and NEOM with particular emphasis on describing in broad terms how MSCSurv works and its capabilities and most significant models.
An astrosphere around the blue supergiant κ Cas: possible explanation of its filamentary structure
NASA Astrophysics Data System (ADS)
Katushkina, O. A.; Alexashov, D. B.; Gvaramadze, V. V.; Izmodenov, V. V.
2018-01-01
High-resolution mid-infrared observations carried out by the Spitzer Space Telescope allowed one to resolve the fine structure of many astrospheres. In particular, they showed that the astrosphere around the B0.7 Ia star κ Cas (HD 2905) has a clear-cut arc structure with numerous cirrus-like filaments beyond it. Previously, we suggested a physical mechanism for the formation of such filamentary structures. Namely, we showed theoretically that they might represent the non-monotonic spatial distribution of the interstellar dust in astrospheres (viewed as filaments) caused by interaction of the dust grains with the interstellar magnetic field disturbed in the astrosphere due to colliding of the stellar and interstellar winds. In this paper, we invoke this mechanism to explain the structure of the astrosphere around κ Cas. We performed 3D magnetohydrodynamic modelling of the astrosphere for realistic parameters of the stellar wind and space velocity. The dust dynamics and the density distribution in the astrosphere were calculated in the framework of a kinetic model. It is found that the model results with the classical MRN (Mathis, Rumpl & Nordsieck 1977) size distribution of dust in the interstellar medium do not match the observations, and that the observed filamentary structure of the astrosphere can be reproduced only if the dust is composed mainly of big (μm-sized) grains. Comparison of the model results with observations allowed us to estimate parameters (number density and magnetic field strength) of the surrounding interstellar medium.
Simulation Exploration through Immersive Parallel Planes: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinate's mapping the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, eachmore » individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest: a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used to both explore existing data as well as to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.« less
NASA Astrophysics Data System (ADS)
Allanach, Ben; Kvellestad, Anders; Raklev, Are
2015-06-01
The CMS experiment recently reported an excess consistent with an invariant mass edge in opposite-sign same flavor leptons, when produced in conjunction with at least two jets and missing transverse momentum. We provide an interpretation of the edge in terms of (anti)squark pair production followed by the "golden cascade" decay for one of the squarks: q ˜ →χ˜2 0q →l ˜ l q →χ˜1 0q l l in the minimal supersymmetric standard model. A simplified model involving binos, winos, an on-shell slepton, and the first two generations of squarks fits the event rate and the invariant mass edge. We check consistency with a recent ATLAS search in a similar region, finding that much of the good-fit parameter space is still allowed at the 95% confidence level (C.L.). However, a combination of other LHC searches, notably two-lepton stop pair searches and jets plus p T, rule out all of the remaining parameter space at the 95% C.L.
Light weakly coupled axial forces: models, constraints, and projections
Kahn, Yonatan; Krnjaic, Gordan; Mishra-Sharma, Siddharth; ...
2017-05-01
Here, we investigate the landscape of constraints on MeV-GeV scale, hidden U(1) forces with nonzero axial-vector couplings to Standard Model fermions. While the purely vector-coupled dark photon, which may arise from kinetic mixing, is a well-motivated scenario, several MeV-scale anomalies motivate a theory with axial couplings which can be UV-completed consistent with Standard Model gauge invariance. Moreover, existing constraints on dark photons depend on products of various combinations of axial and vector couplings, making it difficult to isolate the e ects of axial couplings for particular flavors of SM fermions. We present a representative renormalizable, UV-complete model of a darkmore » photon with adjustable axial and vector couplings, discuss its general features, and show how some UV constraints may be relaxed in a model with nonrenormalizable Yukawa couplings at the expense of fine-tuning. We survey the existing parameter space and the projected reach of planned experiments, brie y commenting on the relevance of the allowed parameter space to low-energy anomalies in π 0 and 8Be* decay.« less
NASA Technical Reports Server (NTRS)
Tatnall, Chistopher R.
1998-01-01
The counter-rotating pair of wake vortices shed by flying aircraft can pose a threat to ensuing aircraft, particularly on landing approach. To allow adequate time for the vortices to disperse/decay, landing aircraft are required to maintain certain fixed separation distances. The Aircraft Vortex Spacing System (AVOSS), under development at NASA, is designed to prescribe safe aircraft landing approach separation distances appropriate to the ambient weather conditions. A key component of the AVOSS is a ground sensor, to ensure, safety by making wake observations to verify predicted behavior. This task requires knowledge of a flowfield strength metric which gauges the severity of disturbance an encountering aircraft could potentially experience. Several proposed strength metric concepts are defined and evaluated for various combinations of metric parameters and sensor line-of-sight elevation angles. Representative populations of generating and following aircraft types are selected, and their associated wake flowfields are modeled using various wake geometry definitions. Strength metric candidates are then rated and compared based on the correspondence of their computed values to associated aircraft response values, using basic statistical analyses.
Light weakly coupled axial forces: models, constraints, and projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahn, Yonatan; Krnjaic, Gordan; Mishra-Sharma, Siddharth
Here, we investigate the landscape of constraints on MeV-GeV scale, hidden U(1) forces with nonzero axial-vector couplings to Standard Model fermions. While the purely vector-coupled dark photon, which may arise from kinetic mixing, is a well-motivated scenario, several MeV-scale anomalies motivate a theory with axial couplings which can be UV-completed consistent with Standard Model gauge invariance. Moreover, existing constraints on dark photons depend on products of various combinations of axial and vector couplings, making it difficult to isolate the e ects of axial couplings for particular flavors of SM fermions. We present a representative renormalizable, UV-complete model of a darkmore » photon with adjustable axial and vector couplings, discuss its general features, and show how some UV constraints may be relaxed in a model with nonrenormalizable Yukawa couplings at the expense of fine-tuning. We survey the existing parameter space and the projected reach of planned experiments, brie y commenting on the relevance of the allowed parameter space to low-energy anomalies in π 0 and 8Be* decay.« less
Structural analysis of three space crane articulated-truss joint concepts
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Sutter, Thomas R.
1992-01-01
Three space crane articulated truss joint concepts are studied to evaluate their static structural performance over a range of geometric design parameters. Emphasis is placed on maintaining the four longeron reference truss performance across the joint while allowing large angle articulation. A maximum positive articulation angle and the actuator length ratio required to reach the angle are computed for each concept as the design parameters are varied. Configurations with a maximum articulation angle less than 120 degrees or actuators requiring a length ratio over two are not considered. Tip rotation and lateral deflection of a truss beam with an articulated truss joint at the midspan are used to select a point design for each concept. Deflections for one point design are up to 40 percent higher than for the other two designs. Dynamic performance of the three point design is computed as a function of joint articulation angle. The two lowest frequencies of each point design are relatively insensitive to large variations in joint articulation angle. One point design has a higher maximum tip velocity for the emergency stop than the other designs.
PLUTO'S SEASONS: NEW PREDICTIONS FOR NEW HORIZONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, L. A.
Since the last Pluto volatile transport models were published in 1996, we have (1) new stellar occultation data from 2002 and 2006-2012 that show roughly twice the pressure as the first definitive occultation from 1988, (2) new information about the surface properties of Pluto, (3) a spacecraft due to arrive at Pluto in 2015, and (4) a new volatile transport model that is rapid enough to allow a large parameter-space search. Such a parameter-space search coarsely constrained by occultation results reveals three broad solutions: a high-thermal inertia, large volatile inventory solution with permanent northern volatiles (PNVs; using the rotational northmore » pole convention); a lower thermal-inertia, smaller volatile inventory solution with exchanges of volatiles between hemispheres and a pressure plateau beyond 2015 (exchange with pressure plateau, EPP); and solutions with still smaller volatile inventories, with exchanges of volatiles between hemispheres and an early collapse of the atmosphere prior to 2015 (exchange with early collapse, EEC). PNV and EPP are favored by stellar occultation data, but EEC cannot yet be definitively ruled out without more atmospheric modeling or additional occultation observations and analysis.« less
Explaining dark matter and B decay anomalies with an L μ - L τ model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altmannshofer, Wolfgang; Gori, Stefania; Profumo, Stefano
We present a dark sector model based on gauging the L μ - L τ symmetry that addresses anomalies in b→ sμ +μ - decays and that features a particle dark matter candidate. The dark matter particle candidate is a vector-like Dirac fermion coupled to the Z' gauge boson of the L μ - L τ symmetry. We compute the dark matter thermal relic density, its pair-annihilation cross section, and the loop-suppressed dark matter-nucleon scattering cross section, and compare our predictions with current and future experimental results. We demonstrate that after taking into account bounds from Bs meson oscillations, darkmore » matter direct detection, and the CMB, the model is highly predictive: B physics anomalies and a viable particle dark matter candidate, with a mass of ~ (5 - 23) GeV, can be accommodated only in a tightly-constrained region of parameter space, with sharp predictions for future experimental tests. The viable region of parameter space expands if the dark matter is allowed to have L μ - L τ charges that are smaller than those of the SM leptons.« less
Explaining dark matter and B decay anomalies with an L μ - L τ model
Altmannshofer, Wolfgang; Gori, Stefania; Profumo, Stefano; ...
2016-12-20
We present a dark sector model based on gauging the L μ - L τ symmetry that addresses anomalies in b→ sμ +μ - decays and that features a particle dark matter candidate. The dark matter particle candidate is a vector-like Dirac fermion coupled to the Z' gauge boson of the L μ - L τ symmetry. We compute the dark matter thermal relic density, its pair-annihilation cross section, and the loop-suppressed dark matter-nucleon scattering cross section, and compare our predictions with current and future experimental results. We demonstrate that after taking into account bounds from Bs meson oscillations, darkmore » matter direct detection, and the CMB, the model is highly predictive: B physics anomalies and a viable particle dark matter candidate, with a mass of ~ (5 - 23) GeV, can be accommodated only in a tightly-constrained region of parameter space, with sharp predictions for future experimental tests. The viable region of parameter space expands if the dark matter is allowed to have L μ - L τ charges that are smaller than those of the SM leptons.« less
Non-stationary noise estimation using dictionary learning and Gaussian mixture models
NASA Astrophysics Data System (ADS)
Hughes, James M.; Rockmore, Daniel N.; Wang, Yang
2014-02-01
Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise dis- tributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accu- rately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
Optimal design of focused experiments and surveys
NASA Astrophysics Data System (ADS)
Curtis, Andrew
1999-10-01
Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be both quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as bulk modulus, κ =λ+ 2μ/3.
Mitigating direct detection bounds in non-minimal Higgs portal scalar dark matter models
NASA Astrophysics Data System (ADS)
Bhattacharya, Subhaditya; Ghosh, Purusottam; Maity, Tarak Nath; Ray, Tirtha Sankar
2017-10-01
The minimal Higgs portal dark matter model is increasingly in tension with recent results form direct detection experiments like LUX and XENON. In this paper we make a systematic study of simple extensions of the Z_2 stabilized singlet scalar Higgs portal scenario in terms of their prospects at direct detection experiments. We consider both enlarging the stabilizing symmetry to Z_3 and incorporating multipartite features in the dark sector. We demonstrate that in these non-minimal models the interplay of annihilation, co-annihilation and semi-annihilation processes considerably relax constraints from present and proposed direct detection experiments while simultaneously saturating observed dark matter relic density. We explore in particular the resonant semi-annihilation channel within the multipartite Z_3 framework which results in new unexplored regions of parameter space that would be difficult to constrain by direct detection experiments in the near future. The role of dark matter exchange processes within multi-component Z_3× Z_3^' } framework is illustrated. We make quantitative estimates to elucidate the role of various annihilation processes in the different allowed regions of parameter space within these models.
Ballistic Majorana nanowire devices
NASA Astrophysics Data System (ADS)
Gül, Ã.-nder; Zhang, Hao; Bommer, Jouri D. S.; de Moor, Michiel W. A.; Car, Diana; Plissard, Sébastien R.; Bakkers, Erik P. A. M.; Geresdi, Attila; Watanabe, Kenji; Taniguchi, Takashi; Kouwenhoven, Leo P.
2018-01-01
Majorana modes are zero-energy excitations of a topological superconductor that exhibit non-Abelian statistics1-3. Following proposals for their detection in a semiconductor nanowire coupled to an s-wave superconductor4,5, several tunnelling experiments reported characteristic Majorana signatures6-11. Reducing disorder has been a prime challenge for these experiments because disorder can mimic the zero-energy signatures of Majoranas12-16, and renders the topological properties inaccessible17-20. Here, we show characteristic Majorana signatures in InSb nanowire devices exhibiting clear ballistic transport properties. Application of a magnetic field and spatial control of carrier density using local gates generates a zero bias peak that is rigid over a large region in the parameter space of chemical potential, Zeeman energy and tunnel barrier potential. The reduction of disorder allows us to resolve separate regions in the parameter space with and without a zero bias peak, indicating topologically distinct phases. These observations are consistent with the Majorana theory in a ballistic system21, and exclude the known alternative explanations that invoke disorder12-16 or a nonuniform chemical potential22,23.
AGN neutrino flux estimates for a realistic hybrid model
NASA Astrophysics Data System (ADS)
Richter, S.; Spanier, F.
2018-07-01
Recent reports of possible correlations between high energy neutrinos observed by IceCube and Active Galactic Nuclei (AGN) activity sparked a burst of publications that attempt to predict the neutrino flux of these sources. However, often rather crude estimates are used to derive the neutrino rate from the observed photon spectra. In this work neutrino fluxes were computed in a wide parameter space. The starting point of the model was a representation of the full spectral energy density (SED) of 3C 279. The time-dependent hybrid model that was used for this study takes into account the full pγ reaction chain as well as proton synchrotron, electron-positron-pair cascades and the full SSC scheme. We compare our results to estimates frequently used in the literature. This allows to identify regions in the parameter space for which such estimates are still valid and those in which they can produce significant errors. Furthermore, if estimates for the Doppler factor, magnetic field, proton and electron densities of a source exist, the expected IceCube detection rate is readily available.
NASA Astrophysics Data System (ADS)
Denardini, Clezio Marcos; Padilha, Antonio; Takahashi, Hisao; Souza, Jonas; Mendes, Odim; Batista, Inez S.; SantAnna, Nilson; Gatto, Rubens; Costa, D. Joaquim
On August 2007 the National Institute for Space Research started a task force to develop and operate a space weather program, which is kwon by the acronyms Embrace that stands for the Portuguese statement “Estudo e Monitoramento BRAasileiro de Clima Espacial” Program (Brazilian Space Weather Study and Monitoring program). The main purpose of the Embrace Program is to monitor the space climate and weather from sun, interplanetary space, magnetosphere and ionosphere-atmosphere, and to provide useful information to space related communities, technological, industrial and academic areas. Since then we have being visiting several different space weather costumers and we have host two workshops of Brazilian space weather users at the Embrace facilities. From the inputs and requests collected from the users the Embrace Program decided to monitored several physical parameters of the sun-earth environment through a large ground base network of scientific sensors and under collaboration with space weather centers partners. Most of these physical parameters are daily published on the Brazilian space weather program web portal, related to the entire network sensors available. A comprehensive data bank and an interface layer are under development to allow an easy and direct access to the useful information. Nowadays, the users will count on products derived from a GNSS monitor network that covers most of the South American territory; a digisonde network that monitors the ionospheric profiles in two equatorial sites and in one low latitude site; several solar radio telescopes to monitor solar activity, and a magnetometer network, besides a global ionospheric physical model. Regarding outreach, we publish a daily bulletin in Portuguese with the status of the space weather environment on the Sun, in the Interplanetary Medium and close to the Earth. Since December 2011, all these activities are carried out at the Embrace Headquarter, a building located at the INPE's main campus. Recently, we have release brand new products, among them, some regional magnetic indices and the GNSS vertical error map over South America. Contacting Author: C. M. Denardini (clezio.denardin@inpe.br)
Containerless processing of undercooled melts
NASA Technical Reports Server (NTRS)
Shong, D. S.; Graves, J. A.; Ujiie, Y.; Perepezko, J. H.
1987-01-01
Containerless drop tube processing allows for significant levels of liquid undercooling through control of parameters such as sample size, surface coating and cooling rate. A laboratory scale (3 m) drop tube has been developed which allows the undercooling and solidification behavior of powder samples to be evaluated under low gravity free-fall conditions. The level of undercooling obtained in an InSb-Sb eutectic alloy has been evaluated by comparing the eutectic spacing in drop tube samples with a spacing/undercooling relationship established using thermal analysis techniques. Undercoolings of 0.17 and 0.23 T(e) were produced by processing under vacuum and He gas conditions respectively. Alternatively, the formation of an amorphous phase in a Ni-Nb eutectic alloy indicates that undercooling levels of approximately 500 C were obtained by drop tube processing. The influence of droplet size and gas environment on undercooling behavior in the Ni-Nb eutectic was evaluated through their effect on the amorphous/crystalline phase ratio. To supplement the structural analysis, heat flow modeling has been developed to describe the undercooling history during drop tube processing, and the model has been tested experimentally.
Bioinformatic prediction and in vivo validation of residue-residue interactions in human proteins
NASA Astrophysics Data System (ADS)
Jordan, Daniel; Davis, Erica; Katsanis, Nicholas; Sunyaev, Shamil
2014-03-01
Identifying residue-residue interactions in protein molecules is important for understanding both protein structure and function in the context of evolutionary dynamics and medical genetics. Such interactions can be difficult to predict using existing empirical or physical potentials, especially when residues are far from each other in sequence space. Using a multiple sequence alignment of 46 diverse vertebrate species we explore the space of allowed sequences for orthologous protein families. Amino acid changes that are known to damage protein function allow us to identify specific changes that are likely to have interacting partners. We fit the parameters of the continuous-time Markov process used in the alignment to conclude that these interactions are primarily pairwise, rather than higher order. Candidates for sites under pairwise epistasis are predicted, which can then be tested by experiment. We report the results of an initial round of in vivo experiments in a zebrafish model that verify the presence of multiple pairwise interactions predicted by our model. These experimentally validated interactions are novel, distant in sequence, and are not readily explained by known biochemical or biophysical features.
Assessment of Predictive Capabilities of L1 Orbiters using Realtime Solar Wind Data
NASA Astrophysics Data System (ADS)
Holmes, J.; Kasper, J. C.; Welling, D. T.
2017-12-01
Realtime measurements of solar wind conditions at L1 point allow us to predict geomagnetic activity at Earth up to an hour in advance. These predictions are quantified in the form of geomagnetic indices such as Kp and Ap, allowing for a concise, standardized prediction and measurement system. For years, the Space Weather Prediction Center used ACE realtime solar wind data to develop its one and four-hour Kp forecasts, but has in the past year switched to using DSCOVR data as its source. In this study, the performance of both orbiters in predicting Kp over the course of one month was assessed in an attempt to determine whether or not switching to DSCOVR data has resulted in improved forecasts. The period of study was chosen to encompass a time when the satellites were close to each other, and when moderate to high activity was observed. Kp predictions were made using the Geospace Model, part of the Space Weather Modeling Framework, to simulate conditions based on observed solar wind parameters. The performance of each satellite was assessed by comparing the model output to observed data.
Faint source detection in ISOCAM images
NASA Astrophysics Data System (ADS)
Starck, J. L.; Aussel, H.; Elbaz, D.; Fadda, D.; Cesarsky, C.
1999-08-01
We present a tool adapted to the detection of faint mid-infrared sources within ISOCAM mosaics. This tool is based on a wavelet analysis which allows us to discriminate sources from cosmic ray impacts at the very limit of the instrument, four orders of magnitudes below IRAS. It is called PRETI for Pattern REcognition Technique for ISOCAM data, because glitches with transient behaviors are isolated in the wavelet space, i.e. frequency space, where they present peculiar signatures in the form of patterns automatically identified and then reconstructed. We have tested PRETI with Monte-Carlo simulations of fake ISOCAM data. These simulations allowed us to define the fraction of remaining false sources due to cosmic rays, the sensitivity and completeness limits as well as the photometric accuracy as a function of the observation parameters. Although the main scientific applications of this technique have appeared or will appear in separated papers, we present here an application to the ISOCAM-Hubble Deep Field image. This work completes and confirms the results already published (\\cite[Aussel et al. 1999]{starck:aussel99}).
NASA Astrophysics Data System (ADS)
Sumin, V. I.; Smolentseva, T. E.; Belokurov, S. V.; Lankin, O. V.
2018-03-01
In the work the process of formation of trainee characteristics with their subsequent change is analyzed and analyzed. Characteristics of trainees were obtained as a result of testing for each section of information on the chosen discipline. The results obtained during testing were input to the dynamic system. The area of control actions consisting of elements of the dynamic system is formed. The limit of deterministic predictability of element trajectories in dynamical systems based on local or global attractors is revealed. The dimension of the phase space of the dynamic system is determined, which allows estimating the parameters of the initial system. On the basis of time series of observations, it is possible to determine the predictability interval of all parameters, which make it possible to determine the behavior of the system discretely in time. Then the measure of predictability will be the sum of Lyapunov’s positive indicators, which are a quantitative measure for all elements of the system. The components for the formation of an algorithm allowing to determine the correlation dimension of the attractor for known initial experimental values of the variables are revealed. The generated algorithm makes it possible to carry out an experimental study of the dynamics of changes in the trainee’s parameters with initial uncertainty.
[Contribution of X-ray computed tomography in the evaluation of kidney performance].
Lemoine, Sandrine; Rognant, Nicolas; Collet-Benzaquen, Diane; Juillard, Laurent
2012-07-01
X-ray computer assisted tomography scanner is an imaging method based on the use of X-ray attenuation in tissue. This attenuation is proportional to the density of the tissue (without or after contrast media injection) in each pixel image of the image. Spiral scanner, the electron beam computed tomography (EBCT) scanner and multidetector computed tomography scanner allow renal anatomical measurements, such as cortical and medullary volume, but also the measurement of renal functional parameters, such as regional renal perfusion, renal blood flow and glomerular filtration rate. These functional parameters are extracted from the modeling of the kinetics of the contrast media concentration in the vascular space and the renal tissue, using two main mathematical models (the gamma variate model and the Patlak model). Renal functional imaging allows measuring quantitative parameters on each kidney separately, in a non-invasive manner, providing significant opportunities in nephrology, both for experimental and clinical studies. However, this method uses contrast media that may alter renal function, thus limiting its use in patients with chronic renal failure. Moreover, the increase irradiation delivered to the patient with multi detector computed tomography (MDCT) should be considered. Copyright © 2011 Association Société de néphrologie. Published by Elsevier SAS. All rights reserved.
Can we close the Bohr-Einstein quantum debate?
NASA Astrophysics Data System (ADS)
Kupczynski, Marian
2017-10-01
Recent experiments allow one to conclude that Bell-type inequalities are indeed violated; thus, it is important to understand what this means and how we can explain the existence of strong correlations between outcomes of distant measurements. Do we have to announce that Einstein was wrong, Nature is non-local and non-local correlations are produced due to quantum magic and emerge, somehow, from outside space-time? Fortunately, such conclusions are unfounded because, if supplementary parameters describing measuring instruments are correctly incorporated in a theoretical model, then Bell-type inequalities may not be proved. We construct a simple probabilistic model allowing these correlations to be explained in a locally causal way. In our model, measurement outcomes are neither predetermined nor produced in an irreducibly random way. We explain why, contrary to the general belief, the introduction of setting-dependent parameters does not restrict experimenters' freedom of choice. Since the violation of Bell-type inequalities does not allow the conclusion that Nature is non-local and that quantum theory is complete, the Bohr-Einstein quantum debate may not be closed. The continuation of this debate is important not only for a better understanding of Nature but also for various practical applications of quantum phenomena. This article is part of the themed issue `Second quantum revolution: foundational questions'.
Can we close the Bohr-Einstein quantum debate?
Kupczynski, Marian
2017-11-13
Recent experiments allow one to conclude that Bell-type inequalities are indeed violated; thus, it is important to understand what this means and how we can explain the existence of strong correlations between outcomes of distant measurements. Do we have to announce that Einstein was wrong, Nature is non-local and non-local correlations are produced due to quantum magic and emerge, somehow, from outside space-time? Fortunately, such conclusions are unfounded because, if supplementary parameters describing measuring instruments are correctly incorporated in a theoretical model, then Bell-type inequalities may not be proved. We construct a simple probabilistic model allowing these correlations to be explained in a locally causal way. In our model, measurement outcomes are neither predetermined nor produced in an irreducibly random way. We explain why, contrary to the general belief, the introduction of setting-dependent parameters does not restrict experimenters' freedom of choice. Since the violation of Bell-type inequalities does not allow the conclusion that Nature is non-local and that quantum theory is complete, the Bohr-Einstein quantum debate may not be closed. The continuation of this debate is important not only for a better understanding of Nature but also for various practical applications of quantum phenomena.This article is part of the themed issue 'Second quantum revolution: foundational questions'. © 2017 The Author(s).
Parameter Estimation as a Problem in Statistical Thermodynamics.
Earle, Keith A; Schneider, David J
2011-03-14
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work[1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
Exchange interaction and the tunneling induced transparency in coupled quantum dots
NASA Astrophysics Data System (ADS)
Borges, Halyne; Alcalde, Augusto; Ulloa, Sergio
2014-03-01
Stacked semiconductor quantum dots coupled by tunneling are unique ``quantum molecule'' where it is possible to create a multilevel structure of excitonic states. This structure allows the investigation of quantum interference processes and their control via electric external fields. In this work, we investigate the optical response of a quantum molecule coherently driven by a polarized laser, considering the splitting in excitonic levels caused by isotropic and anisotropic exchange interactions. In our model we consider interdot transitions mediated by the the hole tunneling between states with the same total spin and, between bright and dark exciton states. Using realistic experimental parameters, we demonstrate that the excitonic states coupled by tunneling exhibit an enriched and controllable optical response. Our results show that through the appropriate control of the external electric field and light polarization, the tunneling coupling establishes an efficient destructive quantum interference path that creates a transparency window in the absorption spectra, whenever states of appropriate symmetry are mixed by the hole tunneling. We explore the relevant parameters space that would allows with the experiments. CAPES, INCT-IQ and MWN/CIAM-NSF.
Lessons and prospects from the pMSSM after LHC Run I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cahill-Rowley, M.; Hewett, J. L.; Ismail, A.
2015-03-01
We study SUSY signatures at the 7, 8 and 14 TeV LHC employing the 19-parameter, R-parity conserving p(henomenological)MSSM, in the scenario with a neutralino lightest supersymmetric particle (LSP). Our results were obtained via a fast Monte Carlo simulation of the ATLAS SUSY analysis suite. The flexibility of this framework allows us to study a wide variety of SUSY phenomena simultaneously and to probe for weak spots in existing SUSY search analyses. We determine the ranges of the sparticle masses that are either disfavored or allowed after the searches with the 7 and 8 TeV data sets are combined. We findmore » that natural SUSY models with light squarks and gluinos remain viable. We extrapolate to 14 TeV with both 300 fb(-1) and 3 ab(-1) of integrated luminosity and determine the expected sensitivity of the jets + MET and stop searches to the pMSSM parameter space. We find that the high-luminosity LHC will be powerful in probing SUSY with neutralino LSPs and can provide a more definitive statement on the existence of natural supersymmetry.« less
What hadron collider is required to discover or falsify natural supersymmetry?
NASA Astrophysics Data System (ADS)
Baer, Howard; Barger, Vernon; Gainer, James S.; Huang, Peisi; Savoy, Michael; Serce, Hasan; Tata, Xerxes
2017-11-01
Weak scale supersymmetry (SUSY) remains a compelling extension of the Standard Model because it stabilizes the quantum corrections to the Higgs and W , Z boson masses. In natural SUSY models these corrections are, by definition, never much larger than the corresponding masses. Natural SUSY models all have an upper limit on the gluino mass, too high to lead to observable signals even at the high luminosity LHC. However, in models with gaugino mass unification, the wino is sufficiently light that supersymmetry discovery is possible in other channels over the entire natural SUSY parameter space with no worse than 3% fine-tuning. Here, we examine the SUSY reach in more general models with and without gaugino mass unification (specifically, natural generalized mirage mediation), and show that the high energy LHC (HE-LHC), a pp collider with √{ s } = 33 TeV, will be able to detect the SUSY signal over the entire allowed mass range. Thus, HE-LHC would either discover or conclusively falsify natural SUSY with better than 3% fine-tuning using a conservative measure that allows for correlations among the model parameters.
NASA Astrophysics Data System (ADS)
De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.
2013-02-01
We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi- dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need of human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.
Scanning the parameter space of collapsing rotating thin shells
NASA Astrophysics Data System (ADS)
Rocha, Jorge V.; Santarelli, Raphael
2018-06-01
We present results of a comprehensive study of collapsing and bouncing thin shells with rotation, framing it in the context of the weak cosmic censorship conjecture. The analysis is based on a formalism developed specifically for higher odd dimensions that is able to describe the dynamics of collapsing rotating shells exactly. We analyse and classify a plethora of shell trajectories in asymptotically flat spacetimes. The parameters varied include the shell’s mass and angular momentum, its radial velocity at infinity, the (linear) equation-of-state parameter and the spacetime dimensionality. We find that plunges of rotating shells into black holes never produce naked singularities, as long as the matter shell obeys the weak energy condition, and so respects cosmic censorship. This applies to collapses of dust shells starting from rest or with a finite velocity at infinity. Not even shells with a negative isotropic pressure component (i.e. tension) lead to the formation of naked singularities, as long as the weak energy condition is satisfied. Endowing the shells with a positive isotropic pressure component allows for the existence of bouncing trajectories satisfying the dominant energy condition and fully contained outside rotating black holes. Otherwise any turning point occurs always inside the horizon. These results are based on strong numerical evidence from scans of numerous sections in the large parameter space available to these collapsing shells. The generalisation of the radial equation of motion to a polytropic equation-of-state for the matter shell is also included in an appendix.
19 CFR 19.30 - Domestic wheat not to be allowed in bonded space.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Domestic wheat not to be allowed in bonded space... THEREIN Space Bonded for the Storage of Wheat § 19.30 Domestic wheat not to be allowed in bonded space. The presence of domestic wheat in space bonded for the storage of imported wheat shall not be...
Mechanical Analog Approach to Parameter Estimation of Lateral Spacecraft Fuel Slosh
NASA Technical Reports Server (NTRS)
Chatman, Yadira; Gangadharan, Sathya; Schlee, Keith; Sudermann, James; Walker, Charles; Ristow, James; Hubert, Carl
2007-01-01
The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. Even with modern computing systems, CFD type simulations are not fast enough to allow for large scale Monte Carlo analyses of spacecraft and launch vehicle dynamic behavior with slosh included. Simplified mechanical analogs for the slosh are preferred during the initial stages of design to reduce computational time and effort to evaluate the Nutation Time Constant (NTC). Analytic determination of the slosh analog parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices such as elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the hand-derived equations of motion for the mechanical analog are evaluated and their results compared with the experimental results. Of particular interest is the effect of diaphragms and bladders on the slosh dynamics and how best to model these devices. An experimental set-up is designed and built to include a diaphragm in the simulated spacecraft fuel tank subjected to lateral slosh. This research paper focuses on the parameter estimation of a SimMechanics model of the simulated spacecraft propellant tank with and without diaphragms using lateral fuel slosh experiments. Automating the parameter identification process will save time and thus allow earlier identification of potential vehicle problems.
Investigations on the hierarchy of reference frames in geodesy and geodynamics
NASA Technical Reports Server (NTRS)
Grafarend, E. W.; Mueller, I. I.; Papo, H. B.; Richter, B.
1979-01-01
Problems related to reference directions were investigated. Space and time variant angular parameters are illustrated in hierarchic structures or towers. Using least squares techniques, model towers of triads are presented which allow the formation of linear observation equations. Translational and rotational degrees of freedom (origin and orientation) are discussed along with and the notion of length and scale degrees of freedom. According to the notion of scale parallelism, scale factors with respect to a unit length are given. Three-dimensional geodesy was constructed from the set of three base vectors (gravity, earth-rotation and the ecliptic normal vector). Space and time variations are given with respect to a polar and singular value decomposition or in terms of changes in translation, rotation, deformation (shear, dilatation or angular and scale distortions).
Constraining the loop quantum gravity parameter space from phenomenology
NASA Astrophysics Data System (ADS)
Brahma, Suddhasattwa; Ronco, Michele
2018-03-01
Development of quantum gravity theories rarely takes inputs from experimental physics. In this letter, we take a small step towards correcting this by establishing a paradigm for incorporating putative quantum corrections, arising from canonical quantum gravity (QG) theories, in deriving falsifiable modified dispersion relations (MDRs) for particles on a deformed Minkowski space-time. This allows us to differentiate and, hopefully, pick between several quantization choices via testable, state-of-the-art phenomenological predictions. Although a few explicit examples from loop quantum gravity (LQG) (such as the regularization scheme used or the representation of the gauge group) are shown here to establish the claim, our framework is more general and is capable of addressing other quantization ambiguities within LQG and also those arising from other similar QG approaches.
Fluid Distribution for In-space Cryogenic Propulsion
NASA Technical Reports Server (NTRS)
Lear, William
2005-01-01
The ultimate goal of this task is to enable the use of a single supply of cryogenic propellants for three distinct spacecraft propulsion missions: main propulsion, orbital maneuvering, and attitude control. A fluid distribution system is sought which allows large propellant flows during the first two missions while still allowing control of small propellant flows during attitude control. Existing research has identified the probable benefits of a combined thermal management/power/fluid distribution system based on the Solar Integrated Thermal Management and Power (SITMAP) cycle. Both a numerical model and an experimental model are constructed in order to predict the performance of such an integrated thermal management/propulsion system. This research task provides a numerical model and an experimental apparatus which will simulate an integrated thermal/power/fluid management system based on the SITMAP cycle, and assess its feasibility for various space missions. Various modifications are done to the cycle, such as the addition of a regeneration process that allows heat to be transferred into the working fluid prior to the solar collector, thereby reducing the collector size and weight. Fabri choking analysis was also accounted for. Finally the cycle is to be optimized for various space missions based on a mass based figure of merit, namely the System Mass Ratio (SMR). -. 1 he theoretical and experimental results from these models are be used to develop a design code (JETSIT code) which is able to provide design parameters for such a system, over a range of cooling loads, power generation, and attitude control thrust levels. The performance gains and mass savings will be compared to those of existing spacecraft systems.
Life Sciences Research in the Centrifuge Accommodation Module of the International Space Station
NASA Technical Reports Server (NTRS)
Dalton, Bonnie P.; Plaut, Karen; Meeker, Gabrielle B.; Sun, Sid (Technical Monitor)
2000-01-01
The Centrifuge Accommodation Module (CAM) will be the home of the fundamental biology research facilities on the International Space Station (ISS). These facilities are being built by the Biological Research Project (BRP), whose goal is to oversee development of a wide variety of habitats and host systems to support life sciences research on the ISS. The habitats and host systems are designed to provide life support for a variety of specimens including cells, bacteria, yeast, plants, fish, rodents, eggs (e.g., quail), and insects. Each habitat contains specimen chambers that allow for easy manipulation of specimens and alteration of sample numbers. All habitats are capable of sustaining life support for 90 days and have automated as well as full telescience capabilities for sending habitat parameters data to investigator homesite laboratories. The habitats provide all basic life support capabilities including temperature control, humidity monitoring and control, waste management, food, media and water delivery as well as adjustable lighting. All habitats will have either an internal centrifuge or are fitted to the 2.5-meter diameter centrifuge allowing for variable centrifugation up to 2 g. Specimen chambers are removable so that the specimens can be handled in the life sciences glovebox. Laboratory support equipment is provided for handling the specimens. This includes a compound and dissecting microscope with advanced video imaging, mass measuring devices, refrigerated centrifuge for processing biological samples, pH meter, fixation and complete cryogenic storage capabilities. The research capabilities provided by the fundamental biology facilities will allow for flexibility and efficiency for long term research on the International Space Station.
Parameter redundancy in discrete state-space and integrated models.
Cole, Diana J; McCrea, Rachel S
2016-09-01
Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Electroweak phase transition in the {mu}{nu}SSM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Daniel J. H.; School of Physics, Korea Institute for Advanced Study, 207-43, Cheongnyangni2-dong, Dongdaemun-gu, Seoul 130-722; Long, Andrew J.
2010-06-15
An extension of the minimal supersymmetric standard model called the {mu}{nu}SSM does not allow a conventional thermal leptogenesis scenario because of the low scale seesaw that it utilizes. Hence, we investigate the possibility of electroweak baryogenesis. Specifically, we identify a parameter region for which the electroweak phase transition is sufficiently strongly first order to realize electroweak baryogenesis. In addition to transitions that are similar to those in the next-to-minimal supersymmetric standard model, we find a novel class of phase transitions in which there is a rotation in the singlet vector space.
NASA Technical Reports Server (NTRS)
Wheeler, D. B.
1978-01-01
Engine performance data, combustion gas thermodynamic properties, and turbine gas parameters were determined for various high power cycle engine configurations derived from the space shuttle main engine that will allow sequential burning of LOX/hydrocarbon and LOX/hydrogen fuels. Both stage combustion and gas generator pump power cycles were considered. Engine concepts were formulated for LOX/RP-1, LOX/CH4, and LOX/C3H8 propellants. Flowrates and operating conditions were established for this initial set of engine systems, and the adaptability of the major components of shuttle main engine was investigated.
Monte Carlo generators for studies of the 3D structure of the nucleon
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self consistent analysis framework, accounting for evolution effects, and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte-Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations will be crucial for future studies of quark-gluon dynamics in general and 3D structure of the nucleon in particular.
Solar neutrinos and the MSW effect for three-neutrino mixing
NASA Technical Reports Server (NTRS)
Shi, X.; Schramm, David N.
1991-01-01
Researchers considered three-neutrino Mikheyev-Smirnov-Wolfenstein (MSW) mixing, assuming m sub 3 is much greater than m sub 2 is greater than m sub 1 as expected from theoretical consideration if neutrinos have mass. They calculated the corresponding mixing parameter space allowed by the Cl-37 and Kamiokande 2 experiments. They also calculated the expected depletion for the Ga-71 experiment. They explored a range of theoretical uncertainty due to possible astrophysical effects by varying the B-8 neutrino flux and redoing the MSW mixing calculation.
Cosmology of a covariant Galilean field.
De Felice, Antonio; Tsujikawa, Shinji
2010-09-10
We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.
The role of CP violating scatterings in baryogenesis—case study of the neutron portal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldes, Iason; Bell, Nicole F.; Millar, Alexander
Many baryogenesis scenarios invoke the charge parity (CP) violating out-of-equilibrium decay of a heavy particle in order to explain the baryon asymmetry. Such scenarios will in general also allow CP violating scatterings. We study the effect of these CP violating scatterings on the final asymmetry in a neutron portal scenario. We solve the Boltzmann equations governing the evolution of the baryon number numerically and show that the CP violating scatterings play a dominant role in a significant portion of the parameter space.
NASA Astrophysics Data System (ADS)
Kočí, Jan; Maděra, Jiří; Kočí, Václav; Hlaváčová, Zuzana; Černý, Robert
2017-11-01
A simple laboratory experiment for the determination of thermal response of a studied sample during thawing is described in the paper. The sample made of autoclaved aerated concrete was partially water saturated and frozen. Then, the temperature development during thawing was recorded, allowing to identify the time scale of the phase change process taking place inside the sample. The experimental data was then used in the inverse analysis, in order to find unknown parameters of the smoothed effective specific heat capacity model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fireman, E.C.; dos Santos, R.J.
1997-04-01
Within a recently developed extended decoration-transformation formalism we study the thermodynamic properties of a linear chain of alternate Ising {sigma}=1 and S=3/2 spins. We allow for different anisotropy fields on each subchain of different spins. For some range of the parameter space we show the existence of a crossover from a ferromagnetic to an antiferromagnetic-like behavior of the model, as explicitly captured in the susceptibility results. {copyright} {ital 1997 American Institute of Physics.}
Constraining the optical potential in the search for η-mesic 4He
NASA Astrophysics Data System (ADS)
Skurzok, M.; Moskal, P.; Kelkar, N. G.; Hirenzaki, S.; Nagahiro, H.; Ikeno, N.
2018-07-01
A consistent description of the dd →4Heη and dd → (4Heη)bound→ X cross sections was recently proposed with a broad range of real (V0) and imaginary (W0), η-4He optical potential parameters leading to a good agreement with the dd →4Heη data. Here we compare the predictions of the model below the η production threshold, with the WASA-at-COSY excitation functions for the dd →3HeNπ reactions to put stronger constraints on (V0 ,W0). The allowed parameter space (with |V0 | < ∼ 60 MeV and |W0 | < ∼ 7 MeV estimated at 90% CL) excludes most optical model predictions of η-4He nuclei except for some loosely bound narrow states.
Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer.
Vogel, Sven C; Biwer, Chris M; Rogers, David H; Ahrens, James P; Hackenberg, Robert E; Onken, Drew; Zhang, Jianzhong
2018-06-01
A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U-Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr 3 . A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download.
Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer
Biwer, Chris M.; Rogers, David H.; Ahrens, James P.; Hackenberg, Robert E.; Onken, Drew; Zhang, Jianzhong
2018-01-01
A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U–Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download. PMID:29896062
Parameter Estimation of Spacecraft Fuel Slosh Model
NASA Technical Reports Server (NTRS)
Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles
2004-01-01
Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble) which can cause devastating control issues. The rate at which nutation develops (defined by Nutation Time Constant (NTC can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refinement and understanding the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of Nutation Time Constant obtained through simulation.
NASA Astrophysics Data System (ADS)
Baranov, V. M.; Baevsky, R. M.; Drescher, J.; Tank, J.
parameters describing the results of the function of these systems like heart rate, arterial pressure, cardiac output, or breathing frequency, concentration of O2 and CO2 , etc. Missing significant changes of these parameters during weightlessness supports the hypothesis that adaptational and compensatory mechanisms are sufficient and guarantee cardiovascular homeostasis under changing environmental conditions. characteristic changes of the vegetative balance and of the activity of different regulatory elements at the brainstem and subcortical level. This changes guaranteed the adaptation to long term weightlessness. However, it remains unclear to what extent the different levels are involved. Moreover, the criteria describing the efficacy of cardiorespiratory interaction for the different functional states are not defined yet. The investigation of this problems is highly relevant in order to improve the medical control, especially if considering that the disruption of regulatory systems mostly precedes dangerous destruction of homeostasis. cardiovascular and respiratory function on Board the International Space Station (ISS) aiming to obtain new insights into the interaction between different regulatory elements. "Puls" is measures ECG, photoplethysmogram (PPG), and the pneumotachogram (PTG). The ECG is used to measure time series of R-R intervals and to analyse HRV. PPG is used to define the pulse wave velocity, phases of the cardiac cycle, and an estimate of the filling of finger vessels. The variability of these parameters is also calculated and compared to HRV. The analysis of the PTG allows to describe the interaction of the regulatory parameters of the cardiovascular and respiratory systems. Hence, an important feature of the experiment "Puls" is the investigation of regulatory mechanisms rather than of cardiovascular homeostasis. cardiography) and left ventricular contractility (seismocardiography) will be obtained. This expansion is of major importance because, it allows us to get deeper insight into regulatory mechanisms of the cardiorespiratory system and into the state of cardiovascular homeostasis. have the same size (90 x 60 x 20 mm), identical technology, and identical interfaces with the computer. the onboard experiment and to store the obtained data; 2) "Editor": to archive and dearchive the obtained data, to edit them and to insert necessary comments and markers; 3) "Earth": to edit and analyse the data under laboratory conditions.The subprogram "Earth" is an original software package for data analysis, peak detection, calculation of a variety of parameters, time series forming and editing, statistical and spectral time series analysis. Furthermore, a specialized data base is designated for storing of the biosignals, results of analysis, information about the investigated subjects and comments of simple autonomic function tests will allow to assess different elements of the regulatory mechanisms. Special interest will be given to respiratory tests in order to evaluate the interaction between the cardiovascular system and respiration. volunteers and in patients with different cardiovascular diseases. The results were used to establish normal values and criteria for the prognosis of pathologic changes. These materials will be used at valuation the data obtained during researches on ISS. respiratory systems onboard the ISS are the following: 1) definition of the most important parameters, which can be measured simple and reliable during weightlessness; 2) development of miniaturized devices which can be kept on the astronauts body and which could be used in future as an autonomic system of operational medical control; 3) development of original software packages which allow to detect prognostic changes of the regulatory pattern preceding diseases and based on time series analysis of a large number of cardiorespiratory parameters.
Planetary and Space Simulation Facilities (PSI) at DLR
NASA Astrophysics Data System (ADS)
Panitz, Corinna; Rabbow, E.; Rettberg, P.; Kloss, M.; Reitz, G.; Horneck, G.
2010-05-01
The Planetary and Space Simulation facilities at DLR offer the possibility to expose biological and physical samples individually or integrated into space hardware to defined and controlled space conditions like ultra high vacuum, low temperature and extraterrestrial UV radiation. An x-ray facility stands for the simulation of the ionizing component at the disposal. All of the simulation facilities are required for the preparation of space experiments: - for testing of the newly developed space hardware - for investigating the effect of different space parameters on biological systems as a preparation for the flight experiment - for performing the 'Experiment Verification Tests' (EVT) for the specification of the test parameters - and 'Experiment Sequence Tests' (EST) by simulating sample assemblies, exposure to selected space parameters, and sample disassembly. To test the compatibility of the different biological and chemical systems and their adaptation to the opportunities and constraints of space conditions a profound ground support program has been developed among many others for the ESA facilities of the ongoing missions EXPOSE-R and EXPOSE-E on board of the International Space Station ISS . Several experiment verification tests EVTs and an experiment sequence test EST have been conducted in the carefully equipped and monitored planetary and space simulation facilities PSI of the Institute of Aerospace Medicine at DLR in Cologne, Germany. These ground based pre-flight studies allowed the investigation of a much wider variety of samples and the selection of the most promising organisms for the flight experiment. EXPOSE-E had been attached to the outer balcony of the European Columbus module of the ISS in February 2008 and stayed for 1,5 years in space; EXPOSE-R has been attached to the Russian Svezda module of the ISS in spring 2009 and mission duration will be approx. 1,5 years. The missions will give new insights into the survivability of terrestrial organisms in space and will contribute to the understanding of the organic chemistry processes in space, the biological adaptation strategies to extreme conditions, e.g. on early Earth and Mars, and the distribution of life beyond its planet of origin The results gained during the simulation experiments demonstrated mission preparation as a basic requirement for successful and significant results of every space flight experiment. Hence, the Mission preparation program that was performed in the context of the space missions EXPOSE-E and EXPOSE-R proofed the outstanding importance and accentuated need for ground based experiments before and during a space mission. The facilities are also necessary for the performance of the ground control experiment during the mission, the so-called Mission Simulation Test (MST) under simulated space conditions, by parallel exposure of samples to simulated space parameters according to flight data received by telemetry. Finally the facilities also provide the possibility to simulate the surface and climate conditions of the planet Mars. In this way they offer the possibility to investigate under simulated Mars conditions the chances for development of life on Mars and to gain previous knowledge for the search for life on today's Mars and in this context especially the parameters for a manned mission to Mars. References [1] Rabbow E, Rettberg P, Panitz C, Drescher J, Horneck G, Reitz G (2005) SSIOUX - Space Simulation for Investigating Organics, Evolution and Exobiology, Adv. Space Res. 36 (2) 297-302, doi:10.1016/j.asr.2005.08.040Aman, A. and Bman, B. (1997) JGR, 90,1151-1154. [2] Fekete A, Modos K, Hegedüs M, Kovacs G, Ronto Gy, Peter A, Lammer H, Panitz C (2005) DNA Damage under simulated extraterrestrial conditions in bacteriophage T7 Adv. Space Res. 305-310Aman, A. et al. (1997) Meteoritics & Planet. Sci., 32,A74. [3] Cockell Ch, Schuerger AC, Billi D., Friedmann EI, Panitz C (2005) Effects of a Simulated Martian UV Flux on the Cyanobacterium, Chroococcidiopsis sp. 029, Astrobiology, 5/2 127-140Aman, A. (1996) LPS XXVII, 1344-1 [4] de la Torre Noetzel, R.; Sancho, L.G.; Pintado,A.; Rettberg, Petra; Rabbow, Elke; Panitz,Corinna; Deutschmann, U.; Reina, M.; Horneck, Gerda (2007): BIOPAN experiment LICHENS on the Foton M2 mission Pre-flight verification tests of the Rhizocarpon geographicum-granite ecosystem. COSPAR [Hrsg.]: Advances in Space Research, 40, Elsevier, S. 1665 - 1671, DOI 10.1016/j.asr.2007.02.022
Kim, K H; Kim, K S; Kim, J E; Kim, D W; Seol, K H; Lee, S H; Chae, B J; Kim, Y H
2017-03-01
This study was conducted to determine the optimal space allowance for maximizing the growth performance of pigs at each of the following five growth stages (based on BW ranges): stage 1, 11 to 25 kg BW; stage 2, 25 to 45 kg BW; stage 3, 45 to 65 kg BW; stage 4, 65 to 85 kg BW; and stage 5, 85 to 110 kg BW. A total of 1590 crossbred (Landrace×Yorkshire×Duroc) pigs were assigned to one of four treatments at each growth stage, with three replicates each. Pen areas at each growth stage were 6, 11, 16, 19.5 and 20 m2 for stages 1 to 5, respectively. Space allowances for the four treatments at each growth stage were modified by varying the number of pigs per pen (22, 25, 28 and 31 pigs in T1, T2, T3 and T4, respectively). Blood samples were collected on the final day of each growth stage. The average daily gain (ADG) decreased significantly with decreased space allowances at all growth stages, except at stage 2. Average daily feed intake (ADFI) was not significantly affected by space allowances at stages 1 to 4; however, at stage 5, there was a linear effect of space allowance on ADFI. Thus, the feed conversion ratio showed results similar to those for ADG. Serum cortisol concentrations, indicating the level of stress response, increased as space allowances decreased. The highest serum cortisol concentrations were observed in T3 at stages 2 to 5. Serum tumor necrosis factor-α levels were significantly higher in association with a small space allowance than with at large space allowance at stages 2, 4 and 5. Serum interleukin-1β levels also increased in a significant linear manner at every growth stage in pigs reared at a low space allowance, except at stage 4 (P=0.068). This study found that limited space allowance decreases the growth performance of pigs and induces stress and inflammatory responses. We confirmed that no significant effect of space allowance on growth performance and serum cortisol concentrations are observed between T1 and T2 across all growth stages. We suggest that the optimal space allowances for pigs according to their BW are as follows: 0.24, 0.44, 0.64, 0.78 and 0.80 m2/pig for BWs of 11 to 25, 25 to 45, 45 to 65, 65 to 85 and 85 to 115 kg, respectively.
Peeters, Elisabeth; De Beer, Thomas; Vervaet, Chris; Remon, Jean-Paul
2015-04-01
Tableting is a complex process due to the large number of process parameters that can be varied. Knowledge and understanding of the influence of these parameters on the final product quality is of great importance for the industry, allowing economic efficiency and parametric release. The aim of this study was to investigate the influence of paddle speeds and fill depth at different tableting speeds on the weight and weight variability of tablets. Two excipients possessing different flow behavior, microcrystalline cellulose (MCC) and dibasic calcium phosphate dihydrate (DCP), were selected as model powders. Tablets were manufactured via a high-speed rotary tablet press using design of experiments (DoE). During each experiment also the volume of powder in the forced feeder was measured. Analysis of the DoE revealed that paddle speeds are of minor importance for tablet weight but significantly affect volume of powder inside the feeder in case of powders with excellent flowability (DCP). The opposite effect of paddle speed was observed for fairly flowing powders (MCC). Tableting speed played a role in weight and weight variability, whereas changing fill depth exclusively influenced tablet weight. The DoE approach allowed predicting the optimum combination of process parameters leading to minimum tablet weight variability. Monte Carlo simulations allowed assessing the probability to exceed the acceptable response limits if factor settings were varied around their optimum. This multi-dimensional combination and interaction of input variables leading to response criteria with acceptable probability reflected the design space.
Torres, Yadir; Lascano, Sheila; Bris, Jorge; Pavón, Juan; Rodriguez, José A
2014-04-01
One of the most important concerns in long-term prostheses is bone resorption as a result of the stress shielding due to stiffness mismatch between bone and implant. The aim of this study was to obtain porous titanium with stiffness values similar to that exhibited by cortical bone. Porous samples of commercial pure titanium grade-4 were obtained by following both loose-sintering processing and space-holder technique with NaCl between 40 and 70% in volume fraction. Both mechanical properties and porosity morphology were assessed. Young's modulus was measured using uniaxial compression testing, as well as ultrasound methodology. Complete characterization and mechanical testing results allowed us to determine some important findings: (i) optimal parameters for both processing routes; (ii) better mechanical response was obtained by using space-holder technique; (iii) pore geometry of loose sintering samples becomes more regular with increasing sintering temperature; in the case of the space-holder technique that trend was observed for decreasing volume fraction; (iv) most reliable Young's modulus measurements were achieved by ultrasound technique. Copyright © 2013 Elsevier B.V. All rights reserved.
3-D model-based Bayesian classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soenneland, L.; Tenneboe, P.; Gehrmann, T.
1994-12-31
The challenging task of the interpreter is to integrate different pieces of information and combine them into an earth model. The sophistication level of this earth model might vary from the simplest geometrical description to the most complex set of reservoir parameters related to the geometrical description. Obviously the sophistication level also depend on the completeness of the available information. The authors describe the interpreter`s task as a mapping between the observation space and the model space. The information available to the interpreter exists in observation space and the task is to infer a model in model-space. It is well-knownmore » that this inversion problem is non-unique. Therefore any attempt to find a solution depend son constraints being added in some manner. The solution will obviously depend on which constraints are introduced and it would be desirable to allow the interpreter to modify the constraints in a problem-dependent manner. They will present a probabilistic framework that gives the interpreter the tools to integrate the different types of information and produce constrained solutions. The constraints can be adapted to the problem at hand.« less
Lightweight Radiator for in Space Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Craven, Paul; Tomboulian, Briana; SanSoucie, Michael
2014-01-01
Nuclear electric propulsion (NEP) is a promising option for high-speed in-space travel due to the high energy density of nuclear fission power sources and efficient electric thrusters. Advanced power conversion technologies may require high operating temperatures and would benefit from lightweight radiator materials. Radiator performance dictates power output for nuclear electric propulsion systems. Game-changing propulsion systems are often enabled by novel designs using advanced materials. Pitch-based carbon fiber materials have the potential to offer significant improvements in operating temperature, thermal conductivity, and mass. These properties combine to allow advances in operational efficiency and high temperature feasibility. An effort at the NASA Marshall Space Flight Center to show that woven high thermal conductivity carbon fiber mats can be used to replace standard metal and composite radiator fins to dissipate waste heat from NEP systems is ongoing. The goals of this effort are to demonstrate a proof of concept, to show that a significant improvement of specific power (power/mass) can be achieved, and to develop a thermal model with predictive capabilities making use of constrained input parameter space. A description of this effort is presented.
Velocity-based movement modeling for individual and population level inference
Hanks, Ephraim M.; Hooten, Mevin B.; Johnson, Devin S.; Sterling, Jeremy T.
2011-01-01
Understanding animal movement and resource selection provides important information about the ecology of the animal, but an animal's movement and behavior are not typically constant in time. We present a velocity-based approach for modeling animal movement in space and time that allows for temporal heterogeneity in an animal's response to the environment, allows for temporal irregularity in telemetry data, and accounts for the uncertainty in the location information. Population-level inference on movement patterns and resource selection can then be made through cluster analysis of the parameters related to movement and behavior. We illustrate this approach through a study of northern fur seal (Callorhinus ursinus) movement in the Bering Sea, Alaska, USA. Results show sex differentiation, with female northern fur seals exhibiting stronger response to environmental variables.
NASA Astrophysics Data System (ADS)
Fitzpatrick, Matthew R. C.; Kennett, Malcolm P.
2018-05-01
We develop a formalism that allows the study of correlations in space and time in both the superfluid and Mott insulating phases of the Bose-Hubbard Model. Specifically, we obtain a two particle irreducible effective action within the contour-time formalism that allows for both equilibrium and out of equilibrium phenomena. We derive equations of motion for both the superfluid order parameter and two-point correlation functions. To assess the accuracy of this formalism, we study the equilibrium solution of the equations of motion and compare our results to existing strong coupling methods as well as exact methods where possible. We discuss applications of this formalism to out of equilibrium situations.
Processing the Viking lander camera data
NASA Technical Reports Server (NTRS)
Levinthal, E. C.; Tucker, R.; Green, W.; Jones, K. L.
1977-01-01
Over 1000 camera events were returned from the two Viking landers during the Primary Mission. A system was devised for processing camera data as they were received, in real time, from the Deep Space Network. This system provided a flexible choice of parameters for three computer-enhanced versions of the data for display or hard-copy generation. Software systems allowed all but 0.3% of the imagery scan lines received on earth to be placed correctly in the camera data record. A second-order processing system was developed which allowed extensive interactive image processing including computer-assisted photogrammetry, a variety of geometric and photometric transformations, mosaicking, and color balancing using six different filtered images of a common scene. These results have been completely cataloged and documented to produce an Experiment Data Record.
Velocity-Based Movement Modeling for Individual and Population Level Inference
Hanks, Ephraim M.; Hooten, Mevin B.; Johnson, Devin S.; Sterling, Jeremy T.
2011-01-01
Understanding animal movement and resource selection provides important information about the ecology of the animal, but an animal's movement and behavior are not typically constant in time. We present a velocity-based approach for modeling animal movement in space and time that allows for temporal heterogeneity in an animal's response to the environment, allows for temporal irregularity in telemetry data, and accounts for the uncertainty in the location information. Population-level inference on movement patterns and resource selection can then be made through cluster analysis of the parameters related to movement and behavior. We illustrate this approach through a study of northern fur seal (Callorhinus ursinus) movement in the Bering Sea, Alaska, USA. Results show sex differentiation, with female northern fur seals exhibiting stronger response to environmental variables. PMID:21931584
Korez, Robert; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž
2014-10-01
Gradual degeneration of intervertebral discs of the lumbar spine is one of the most common causes of low back pain. Although conservative treatment for low back pain may provide relief to most individuals, surgical intervention may be required for individuals with significant continuing symptoms, which is usually performed by replacing the degenerated intervertebral disc with an artificial implant. For designing implants with good bone contact and continuous force distribution, the morphology of the intervertebral disc space and vertebral body endplates is of considerable importance. In this study, we propose a method for parametric modeling of the intervertebral disc space in three dimensions (3D) and show its application to computed tomography (CT) images of the lumbar spine. The initial 3D model of the intervertebral disc space is generated according to the superquadric approach and therefore represented by a truncated elliptical cone, which is initialized by parameters obtained from 3D models of adjacent vertebral bodies. In an optimization procedure, the 3D model of the intervertebral disc space is incrementally deformed by adding parameters that provide a more detailed morphometric description of the observed shape, and aligned to the observed intervertebral disc space in the 3D image. By applying the proposed method to CT images of 20 lumbar spines, the shape and pose of each of the 100 intervertebral disc spaces were represented by a 3D parametric model. The resulting mean (±standard deviation) accuracy of modeling was 1.06±0.98mm in terms of radial Euclidean distance against manually defined ground truth points, with the corresponding success rate of 93% (i.e. 93 out of 100 intervertebral disc spaces were modeled successfully). As the resulting 3D models provide a description of the shape of intervertebral disc spaces in a complete parametric form, morphometric analysis was straightforwardly enabled and allowed the computation of the corresponding heights, widths and volumes, as well as of other geometric features that in detail describe the shape of intervertebral disc spaces. Copyright © 2014 Elsevier Ltd. All rights reserved.
Extending the modeling of the anisotropic galaxy power spectrum to k = 0.4 h Mpc{sup −1}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hand, Nick; Seljak, Uroš; Beutler, Florian
We present a model for the redshift-space power spectrum of galaxies and demonstrate its accuracy in describing the monopole, quadrupole, and hexadecapole of the galaxy density field down to scales of k = 0.4 h Mpc{sup −1}. The model describes the clustering of galaxies in the context of a halo model and the clustering of the underlying halos in redshift space using a combination of Eulerian perturbation theory and N -body simulations. The modeling of redshift-space distortions is done using the so-called distribution function approach. The final model has 13 free parameters, and each parameter is physically motivated rather thanmore » a nuisance parameter, which allows the use of well-motivated priors. We account for the Finger-of-God effect from centrals and both isolated and non-isolated satellites rather than using a single velocity dispersion to describe the combined effect. We test and validate the accuracy of the model on several sets of high-fidelity N -body simulations, as well as realistic mock catalogs designed to simulate the BOSS DR12 CMASS data set. The suite of simulations covers a range of cosmologies and galaxy bias models, providing a rigorous test of the level of theoretical systematics present in the model. The level of bias in the recovered values of f σ{sub 8} is found to be small. When including scales to k = 0.4 h Mpc{sup −1}, we find 15-30% gains in the statistical precision of f σ{sub 8} relative to k = 0.2 h Mpc{sup −1} and a roughly 10–15% improvement for the perpendicular Alcock-Paczynski parameter α{sub ⊥}. Using the BOSS DR12 CMASS mocks as a benchmark for comparison, we estimate an uncertainty on f σ{sub 8} that is ∼10–20% larger than other similar Fourier-space RSD models in the literature that use k ≤ 0.2 h Mpc{sup −1}, suggesting that these models likely have a too-limited parametrization.« less
Relativistic Quantum Metrology: Exploiting relativity to improve quantum measurement technologies
Ahmadi, Mehdi; Bruschi, David Edward; Sabín, Carlos; Adesso, Gerardo; Fuentes, Ivette
2014-01-01
We present a framework for relativistic quantum metrology that is useful for both Earth-based and space-based technologies. Quantum metrology has been so far successfully applied to design precision instruments such as clocks and sensors which outperform classical devices by exploiting quantum properties. There are advanced plans to implement these and other quantum technologies in space, for instance Space-QUEST and Space Optical Clock projects intend to implement quantum communications and quantum clocks at regimes where relativity starts to kick in. However, typical setups do not take into account the effects of relativity on quantum properties. To include and exploit these effects, we introduce techniques for the application of metrology to quantum field theory. Quantum field theory properly incorporates quantum theory and relativity, in particular, at regimes where space-based experiments take place. This framework allows for high precision estimation of parameters that appear in quantum field theory including proper times and accelerations. Indeed, the techniques can be applied to develop a novel generation of relativistic quantum technologies for gravimeters, clocks and sensors. As an example, we present a high precision device which in principle improves the state-of-the-art in quantum accelerometers by exploiting relativistic effects. PMID:24851858
Relativistic quantum metrology: exploiting relativity to improve quantum measurement technologies.
Ahmadi, Mehdi; Bruschi, David Edward; Sabín, Carlos; Adesso, Gerardo; Fuentes, Ivette
2014-05-22
We present a framework for relativistic quantum metrology that is useful for both Earth-based and space-based technologies. Quantum metrology has been so far successfully applied to design precision instruments such as clocks and sensors which outperform classical devices by exploiting quantum properties. There are advanced plans to implement these and other quantum technologies in space, for instance Space-QUEST and Space Optical Clock projects intend to implement quantum communications and quantum clocks at regimes where relativity starts to kick in. However, typical setups do not take into account the effects of relativity on quantum properties. To include and exploit these effects, we introduce techniques for the application of metrology to quantum field theory. Quantum field theory properly incorporates quantum theory and relativity, in particular, at regimes where space-based experiments take place. This framework allows for high precision estimation of parameters that appear in quantum field theory including proper times and accelerations. Indeed, the techniques can be applied to develop a novel generation of relativistic quantum technologies for gravimeters, clocks and sensors. As an example, we present a high precision device which in principle improves the state-of-the-art in quantum accelerometers by exploiting relativistic effects.
Mortier, Séverine Thérèse F C; Van Bockstal, Pieter-Jan; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas
2016-06-01
Large molecules, such as biopharmaceuticals, are considered the key driver of growth for the pharmaceutical industry. Freeze-drying is the preferred way to stabilise these products when needed. However, it is an expensive, inefficient, time- and energy-consuming process. During freeze-drying, there are only two main process variables to be set, i.e. the shelf temperature and the chamber pressure, however preferably in a dynamic way. This manuscript focuses on the essential use of uncertainty analysis for the determination and experimental verification of the dynamic primary drying Design Space for pharmaceutical freeze-drying. Traditionally, the chamber pressure and shelf temperature are kept constant during primary drying, leading to less optimal process conditions. In this paper it is demonstrated how a mechanistic model of the primary drying step gives the opportunity to determine the optimal dynamic values for both process variables during processing, resulting in a dynamic Design Space with a well-known risk of failure. This allows running the primary drying process step as time efficient as possible, hereby guaranteeing that the temperature at the sublimation front does not exceed the collapse temperature. The Design Space is the multidimensional combination and interaction of input variables and process parameters leading to the expected product specifications with a controlled (i.e., high) probability. Therefore, inclusion of parameter uncertainty is an essential part in the definition of the Design Space, although it is often neglected. To quantitatively assess the inherent uncertainty on the parameters of the mechanistic model, an uncertainty analysis was performed to establish the borders of the dynamic Design Space, i.e. a time-varying shelf temperature and chamber pressure, associated with a specific risk of failure. A risk of failure acceptance level of 0.01%, i.e. a 'zero-failure' situation, results in an increased primary drying process time compared to the deterministic dynamic Design Space; however, the risk of failure is under control. Experimental verification revealed that only a risk of failure acceptance level of 0.01% yielded a guaranteed zero-defect quality end-product. The computed process settings with a risk of failure acceptance level of 0.01% resulted in a decrease of more than half of the primary drying time in comparison with a regular, conservative cycle with fixed settings. Copyright © 2016. Published by Elsevier B.V.
Understanding space charge and controlling beam loss in high intensity synchrotrons
NASA Astrophysics Data System (ADS)
Cousineau, Sarah M.
Future high intensity synchrotrons will require unprecedented control of beam loss in order to comply with radiation safety regulations and to allow for safe, hands-on maintenance of machine hardware. A major cause of beam loss in high intensity synchrotrons is the space charge force of the beam, which can lead to beam halo and emittance dilution. This dissertation presents a comprehensive study of space charge effects in high intensity synchrotron beams. Experimental measurements taken at the Proton Storage Ring (PSR) in Los Alamos National Laboratory and detailed simulations of the experiments are used to identify and characterize resonances that affect these beams. The collective motion of the beam is extensively studied and is shown to be more relevant than the single particle dynamics in describing the resonance response. The emittance evolution of the PSR beam and methods for reducing the space-charge-induced emittance growth are addressed. In a separate study, the emittance evolution of an intense space charge beam is experimentally measured at the Cooler Injector Synchrotron (CIS) at Indiana University. This dissertation also investigates the sophisticated two-stage collimation system of the future Spallation Neutron Source (SNS) high intensity accumulator ring. A realistic Monte-Carlo collimation simulation is developed and used to optimize the SNS ring collimation system parameters. The finalized parameters and predicted beam loss distribution around the ring are presented. The collimators will additionally be used in conjunction with a set of fast kickers to remove the beam from the gap region before the rise of the extraction magnets. The gap cleaning process is optimized and the cleaning efficiency versus momentum spread of the beam is examined.
Rojas-González, L; Chacín-Almarza, B; Corzo-Alvarez, G; Sanabria-Vera, C; Nuñez-González, J
2000-12-01
To measure the corporal dimensions of the workers and the relationships with the spaces and equipments used in the printing processes, as the initial phase for the design and implementation of a surveillance program of work-related musculoskeletal disorders, 38 workers of a press were studied, by making an anthropometric record for ergonomic studies (CAPEE). The interior spaces and machinery were measured according to a format designed for that purpose. When the anthropometric parameters for each sex, the width elbow-elbow, height of the plane of the seat--elbow, height floor--upper face of the thigh and maximum width of hips were compared, they did not present significant differences. The other anthropometric parameters differ statistically (p < 0.05), being greater in men, except the height of the heel (p < 0.01). When relating the anthropometric measures and those of the interior spaces, there were no relationships among the maximum vertical reach of knuckles with the minimum height of objects and controls, the plane height of the seat-eye with the height of the computer's monitor and the sacrum-knee distance with the height of the work surface. The other variables showed a significant statistical relationship (p < 0.05). The interior spaces of the press are adapted to the anthropometric measures of its workers, fulfilling ergonomics approaches. These anthropometric measures and the ergonomics aspects of objects and workplace provide elements that will allow the design and the implementation of surveillance programs for the control and the prevention of work-related musculoskeletal disorders, related to the personnel's inadequate selection and to the redesign of interior spaces, and the selection of the machinery and tools to use in the technological processes.
Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas
2015-06-01
The spokes method combined with parallel transmission is a promising technique to mitigate the B1(+) inhomogeneity at ultra-high field in 2D imaging. To date however, the spokes placement optimization combined with the magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these constraints explicitly and in the small tip angle regime. The problem is first considerably simplified by making the observation that only the vector between the 2 spokes is relevant in the magnitude least squares cost-function, thereby reducing the size of the parameter space and allowing a more exhaustive search. The algorithm starts from a set of initial k-space candidates and performs in parallel for all of them optimizations of the RF subpulse weights and the k-space locations simultaneously, under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability to converge towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2(∗)-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s. Copyright © 2015 Elsevier Inc. All rights reserved.
Microgravity Impact Experiments: The Prime Campaign on the NASA KC-135
NASA Astrophysics Data System (ADS)
Colwell, Joshua E.; Sture, Stein; Lemos, Andreas R.
2002-11-01
Low velocity collisions (v less than 100 m/s) occur in a number of astrophysical contexts, including planetary rings, protoplanetary disks, the Kuiper belt of comets, and in secondary cratering events on asteroids and planetary satellites. In most of these situations the surface gravity of the target is less than a few per cent of 1 g. Asteroids and planetary satellites are observed to have a regolith consisting of loose, unconsolidated material. Planetary ring particles likely are also coated with dust based on observations of dust within ring systems. The formation of planetesimals in protoplanetary disks begins with the accretion of dust particles. The response of the surface dust layer to collisions in the near absence of gravity is necessary for understanding the evolution of these systems. The Collisions Into Dust Experiment (COLLIDE) performs six impact experiments into simulated regolith in microgravity conditions on the space shuttle. The parameter space to be explored is quite large, including effects such as impactor mass and velocity, impact angle, target porosity, size distribution, and particle shape. We have developed an experiment, the Physics of Regolith Impacts in Microgravity Experiment (PRIME), that is analogous to COLLIDE that is optimized for flight on the NASA KC-135 reduced gravity aircraft. The KC-135 environment provides the advantage of more rapid turnover between experiments, allowing a broader range of parameters to be studied quickly, and more room for the experiment so that more impact experiments can be performed each flight. The acceleration environment of the KC-135 is not as stable and minimal as on the space shuttle, and this requires impact velocities to be higher than the minimum achievable with COLLIDE. The experiment consists of an evacuated PRIME Impact Chamber (PIC) with an aluminum base plate and acrylic sides and top. A target tray, launcher, and mirror mount to the base plate. The launcher may be positioned to allow for impacts at angles of 30, 45, 60, and 90 degrees with respect to the target surface. The target material is contained in a 10 cm by 10 cm by 2 cm tray with a rotating door that is opened via a mechanical feed-through on the base plate. A spring-loaded inner door provides uniform compression on the target material prior to operation of the experiment to keep the material from settling or locking up during vibrations prior to the experiment. Data is recorded with the NASA high speed video camera. Frame rates are selected according to the impact parameters. The direct camera view is orthogonal to the projectile line of motion, and the mirrors within the PIC provide a view normal to the target surface. The spring-loaded launchers allow for projectile speeds between 10 cm/s and 500 cm/s with a variety of impactor sizes and densities. On each flight 8 PICs will be used, each one with a different set of impact parameters. Additional information is included in the original extended abstract.
NASA Astrophysics Data System (ADS)
Kalanov, Temur Z.
2003-04-01
A new theory of space is suggested. It represents the new point of view which has arisen from the critical analysis of the foundations of physics (in particular the theory of relativity and quantum mechanics), mathematics, cosmology and philosophy. The main idea following from the analysis is that the concept of movement represents a key to understanding of the essence of space. The starting-point of the theory is represented by the following philosophical (dialectical materialistic) principles. (a) The principle of the materiality (of the objective reality) of the Nature: the Nature (the Universe) is a system (a set) of material objects (particles, bodies, fields); each object has properties, features, and the properties, the features are inseparable characteristics of material object and belong only to material object. (b) The principle of the existence of material object: an object exists as the objective reality, and movement is a form of existence of object. (c) The principle (definition) of movement of object: the movement is change (i.e. transition of some states into others) in general; the movement determines a direction, and direction characterizes the movement. (d) The principle of existence of time: the time exists as the parameter of the system of reference. These principles lead to the following statements expressing the essence of space. (1) There is no space in general, and there exist space only as a form of existence of the properties and features of the object. It means that the space is a set of the measures of the object (the measure is the philosophical category meaning unity of the qualitative and quantitative determinacy of the object). In other words, the space of the object is a set of the states of the object. (2) The states of the object are manifested only in a system of reference. The main informational property of the unitary system researched physical object + system of reference is that the system of reference determines (measures, calculates) the parameters of the subsystem researched physical object (for example, the coordinates of the object M); the parameters characterize the system of reference (for example, the system of coordinates S). (3) Each parameter of the object is its measure. Total number of the mutually independent parameters of the object is called dimension of the space of the object. (4) The set of numerical values (i.e. the range, the spectrum) of each parameter is the subspace of the object. (The coordinate space, the momentum space and the energy space are examples of the subspaces of the object). (5) The set of the parameters of the object is divided into two non intersecting (opposite) classes: the class of the internal parameters and the class of the non internal (i.e. external) parameters. The class of the external parameters is divided into two non intersecting (opposite) subclasses: the subclass of the absolute parameters (characterizing the form, the sizes of the object) and the subclass of the non absolute (relative) parameters (characterizing the position, the coordinates of the object). (6) Set of the external parameters forms the external space of object. It is called geometrical space of object. (7) Since a macroscopic object has three mutually independent sizes, the dimension of its external absolute space is equal to three. Consequently, the dimension of its external relative space is also equal to three. Thus, the total dimension of the external space of the macroscopic object is equal to six. (8) In general case, the external absolute space (i.e. the form, the sizes) and the external relative space (i.e. the position, the coordinates) of any object are mutually dependent because of influence of a medium. The geometrical space of such object is called non Euclidean space. If the external absolute space and the external relative space of some object are mutually independent, then the external relative space of such object is the homogeneous and isotropic geometrical space. It is called Euclidean space of the object. Consequences: (i) the question of true geometry of the Universe is incorrect; (ii) the theory of relativity has no physical meaning.
NASA Astrophysics Data System (ADS)
Atanasov, Victor
2017-07-01
We extend the superconductor's free energy to include an interaction of the order parameter with the curvature of space-time. This interaction leads to geometry dependent coherence length and Ginzburg-Landau parameter which suggests that the curvature of space-time can change the superconductor's type. The curvature of space-time doesn't affect the ideal diamagnetism of the superconductor but acts as chemical potential. In a particular circumstance, the geometric field becomes order-parameter dependent, therefore the superconductor's order parameter dynamics affects the curvature of space-time and electrical or internal quantum mechanical energy can be channelled into the curvature of space-time. Experimental consequences are discussed.
A fast and accurate method for perturbative resummation of transverse momentum-dependent observables
NASA Astrophysics Data System (ADS)
Kang, Daekyoung; Lee, Christopher; Vaidya, Varun
2018-04-01
We propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q T spectra of gauge bosons ( γ ∗, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.
A fast and accurate method for perturbative resummation of transverse momentum-dependent observables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Daekyoung; Lee, Christopher; Vaidya, Varun
Here, we propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impactmore » parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.« less
A fast and accurate method for perturbative resummation of transverse momentum-dependent observables
Kang, Daekyoung; Lee, Christopher; Vaidya, Varun
2018-04-27
Here, we propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impactmore » parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.« less
Electron acoustic nonlinear structures in planetary magnetospheres
NASA Astrophysics Data System (ADS)
Shah, K. H.; Qureshi, M. N. S.; Masood, W.; Shah, H. A.
2018-04-01
In this paper, we have studied linear and nonlinear propagation of electron acoustic waves (EAWs) comprising cold and hot populations in which the ions form the neutralizing background. The hot electrons have been assumed to follow the generalized ( r , q ) distribution which has the advantage that it mimics most of the distribution functions observed in space plasmas. Interestingly, it has been found that unlike Maxwellian and kappa distributions, the electron acoustic waves admit not only rarefactive structures but also allow the formation of compressive solitary structures for generalized ( r , q ) distribution. It has been found that the flatness parameter r , tail parameter q , and the nonlinear propagation velocity u affect the propagation characteristics of nonlinear EAWs. Using the plasmas parameters, typically found in Saturn's magnetosphere and the Earth's auroral region, where two populations of electrons and electron acoustic solitary waves (EASWs) have been observed, we have given an estimate of the scale lengths over which these nonlinear waves are expected to form and how the size of these structures would vary with the change in the shape of the distribution function and with the change of the plasma parameters.
NASA Astrophysics Data System (ADS)
Chaloupka, Jiří; Khaliullin, Giniyat
2015-07-01
We have explored the hidden symmetries of a generic four-parameter nearest-neighbor spin model, allowed in honeycomb-lattice compounds under trigonal compression. Our method utilizes a systematic algorithm to identify all dual transformations of the model that map the Hamiltonian on itself, changing the parameters and providing exact links between different points in its parameter space. We have found the complete set of points of hidden SU(2) symmetry at which a seemingly highly anisotropic model can be mapped back on the Heisenberg model and inherits therefore its properties such as the presence of gapless Goldstone modes. The procedure used to search for the hidden symmetries is quite general and may be extended to other bond-anisotropic spin models and other lattices, such as the triangular, kagome, hyperhoneycomb, or harmonic-honeycomb lattices. We apply our findings to the honeycomb-lattice iridates Na2IrO3 and Li2IrO3 , and illustrate how they help to identify plausible values of the model parameters that are compatible with the available experimental data.
Cosmological parameter estimation from CMB and X-ray cluster after Planck
NASA Astrophysics Data System (ADS)
Hu, Jian-Wei; Cai, Rong-Gen; Guo, Zong-Kuan; Hu, Bin
2014-05-01
We investigate constraints on cosmological parameters in three 8-parameter models with the summed neutrino mass as a free parameter, by a joint analysis of CCCP X-ray cluster data, the newly released Planck CMB data as well as some external data sets including baryon acoustic oscillation measurements from the 6dFGS, SDSS DR7 and BOSS DR9 surveys, and Hubble Space Telescope H0 measurement. We find that the combined data strongly favor a non-zero neutrino masses at more than 3σ confidence level in these non-vanilla models. Allowing the CMB lensing amplitude AL to vary, we find AL > 1 at 3σ confidence level. For dark energy with a constant equation of state w, we obtain w < -1 at 3σ confidence level. The estimate of the matter power spectrum amplitude σ8 is discrepant with the Planck value at 2σ confidence level, which reflects some tension between X-ray cluster data and Planck data in these non-vanilla models. The tension can be alleviated by adding a 9% systematic shift in the cluster mass function.
OAM-labeled free-space optical flow routing.
Gao, Shecheng; Lei, Ting; Li, Yangjin; Yuan, Yangsheng; Xie, Zhenwei; Li, Zhaohui; Yuan, Xiaocong
2016-09-19
Space-division multiplexing allows unprecedented scaling of bandwidth density for optical communication. Routing spatial channels among transmission ports is critical for future scalable optical network, however, there is still no characteristic parameter to label the overlapped optical carriers. Here we propose a free-space optical flow routing (OFR) scheme by using optical orbital angular moment (OAM) states to label optical flows and simultaneously steer each flow according to their OAM states. With an OAM multiplexer and a reconfigurable OAM demultiplexer, massive individual optical flows can be routed to the demanded optical ports. In the routing process, the OAM beams act as data carriers at the same time their topological charges act as each carrier's labels. Using this scheme, we experimentally demonstrate switching, multicasting and filtering network functions by simultaneously steer 10 input optical flows on demand to 10 output ports. The demonstration of data-carrying OFR with nonreturn-to-zero signals shows that this process enables synchronous processing of massive spatial channels and flexible optical network.
Design of a 2 kA, 30 fs Rf-Photoinjector for Waterbag Compression
NASA Astrophysics Data System (ADS)
van der Geer, S. B.; Luiten, O. J.; de Loos, M. J.
Because uniformly filled ellipsoidal ‘waterbag’ bunches have linear self-fields in all dimensions, they do not suffer from space-charge induced brightness degradation. This in turn allows very efficient longitudinal compression of high-brightness bunches at sub or mildly relativistic energies, a parameter regime inaccessible up to now due to detrimental effects of non-linear space-charge forces. To demonstrate the feasibility of this approach, we investigate ballistic bunching of 1 MeV, 100 pC waterbag electron bunches, created in a half-cell rf-photogun, by means of a two-cell booster-compressor. Detailed GPT simulations of this table-top set-up are presented, including realistic fields, 3D space-charge effects, path-length differences and image charges at the cathode. It is shown that with a single 10MW S-band klystron and fields of 100 MV/m, 2kA peak current is attainable with a pulse duration of only 30 fs at a transverse normalized emittance of 1.5 μm.
Neutral aggregation in finite-length genotype space
NASA Astrophysics Data System (ADS)
Houchmandzadeh, Bahram
2017-01-01
The advent of modern genome sequencing techniques allows for a more stringent test of the neutrality hypothesis of Darwinian evolution, where all individuals have the same fitness. Using the individual-based model of Wright and Fisher, we compute the amplitude of neutral aggregation in the genome space, i.e., the probability of finding two individuals at genetic (Hamming) distance k as a function of the genome size L , population size N , and mutation probability per base ν . In well-mixed populations, we show that for N ν <1 /L , neutral aggregation is the dominant force and most individuals are found at short genetic distances from each other. For N ν >1 , on the contrary, individuals are randomly dispersed in genome space. The results are extended to a geographically dispersed population, where the controlling parameter is shown to be a combination of mutation and migration probability. The theory we develop can be used to test the neutrality hypothesis in various ecological and evolutionary systems.
The H,G_1,G_2 photometric system with scarce observational data
NASA Astrophysics Data System (ADS)
Penttilä, A.; Granvik, M.; Muinonen, K.; Wilkman, O.
2014-07-01
The H,G_1,G_2 photometric system was officially adopted at the IAU General Assembly in Beijing, 2012. The system replaced the H,G system from 1985. The 'photometric system' is a parametrized model V(α; params) for the magnitude-phase relation of small Solar System bodies, and the main purpose is to predict the magnitude at backscattering, H := V(0°), i.e., the (absolute) magnitude of the object. The original H,G system was designed using the best available data in 1985, but since then new observations have been made showing certain features, especially near backscattering, to which the H,G function has troubles adjusting to. The H,G_1,G_2 system was developed especially to address these issues [1]. With a sufficient number of high-accuracy observations and with a wide phase-angle coverage, the H,G_1,G_2 system performs well. However, with scarce low-accuracy data the system has troubles producing a reliable fit, as would any other three-parameter nonlinear function. Therefore, simultaneously with the H,G_1,G_2 system, a two-parameter version of the model, the H,G_{12} system, was introduced [1]. The two-parameter version ties the parameters G_1,G_2 into a single parameter G_{12} by a linear relation, and still uses the H,G_1,G_2 system in the background. This version dramatically improves the possibility to receive a reliable phase-curve fit to scarce data. The amount of observed small bodies is increasing all the time, and so is the need to produce estimates for the absolute magnitude/diameter/albedo and other size/composition related parameters. The lack of small-phase-angle observations is especially topical for near-Earth objects (NEOs). With these, even the two- parameter version faces problems. The previous procedure with the H,G system in such circumstances has been that the G-parameter has been fixed to some constant value, thus only fitting a single-parameter function. In conclusion, there is a definitive need for a reliable procedure to produce photometric fits to very scarce and low-accuracy data. There are a few details that should be considered with the H,G_1,G_2 or H,G_{12} systems with scarce data. The first point is the distribution of errors in the fit. The original H,G system allowed linear regression in the flux space, thus making the estimation computationally easier. The same principle was repeated with the H,G_1,G_2 system. There is, however, a major hidden assumption in the transformation. With regression modeling, the residuals should be distributed symmetrically around zero. If they are normally distributed, even better. We have noticed that, at least with some NEO observations, the residuals in the flux space are far from symmetric, and seem to be much more symmetric in the magnitude space. The result is that the nonlinear fit in magnitude space is far more reliable than the linear fit in the flux space. Since the computers and nonlinear regression algorithms are efficient enough, we conclude that, in many cases, with low-accuracy data the nonlinear fit should be favored. In fact, there are statistical procedures that should be employed with the photometric fit. At the moment, the choice between the three-parameter and two-parameter versions is simply based on subjective decision-making. By checking parameter error and model comparison statistics, the choice could be done objectively. Similarly, the choice between the linear fit in flux space and the nonlinear fit in magnitude space should be based on a statistical test of unbiased residuals. Furthermore, the so-called Box-Cox transform could be employed to find an optimal transformation somewhere between the magnitude and flux spaces. The H,G_1,G_2 system is based on cubic splines, and is therefore a bit more complicated to implement than a system with simpler basis functions. The same applies to a complete program that would automatically choose the best transforms to data, test if two- or three-parameter version of the model should be fitted, and produce the fitted parameters with their error estimates. Our group has already made implementations of the H,G_1,G_2 system publicly available [2]. We plan to implement the abovementioned improvements to the system and make also these tools public.
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand corresponding to seismic and EM forward modeling, it is usually a big problem to have too many unknown parameters in the modeling domain. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is warranted to reduce the dimension of parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to get the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework by integrating minimum-relative-entropy concept, quasi Monto Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significances to geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties, then consider the memory function as a new prior and generate samples from it for further updating when more geophysical data is available. We applied this approach for deep oil reservoir characterization and for shallow subsurface flow monitoring. The model reduction approach reliably helps reduce the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the “memory function” applied in the Bayesian inversion.
Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter setting turning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. However, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm procedure to search for the optimal solution in both the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in a SVM to perform both parameter setting turning for the SVM and feature selection to solve real-world classification problems. This method is called chaotic fruit fly optimization algorithm (CIFOA)-SVM and has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.
NASA Astrophysics Data System (ADS)
Márquez, I.; Lima Neto, G. B.; Capelato, H.; Durret, F.; Lanzoni, B.; Gerbal, D.
2001-12-01
In the present paper, we show that elliptical galaxies (Es) obey a scaling relation between potential energy and mass. Since they are relaxed systems in a post violent-relaxation stage, they are quasi-equilibrium gravitational systems and therefore they also have a quasi-constant specific entropy. Assuming that light traces mass, these two laws imply that in the space defined by the three Sérsic law parameters (intensity Sigma0 , scale a and shape nu ), elliptical galaxies are distributed on two intersecting 2-manifolds: the Entropic Surface and the Energy-Mass Surface. Using a sample of 132 galaxies belonging to three nearby clusters, we have verified that ellipticals indeed follow these laws. This also implies that they are distributed along the intersection line (the Energy-Entropy line), thus they constitute a one-parameter family. These two physical laws (separately or combined), allow to find the theoretical origin of several observed photometrical relations, such as the correlation between absolute magnitude and effective surface brightness, and the fact that ellipticals are located on a surface in the [log Reff, -2.5 log Sigma0, log nu ] space. The fact that elliptical galaxies are a one-parameter family has important implications for cosmology and galaxy formation and evolution models. Moreover, the Energy-Entropy line could be used as a distance indicator.
Swoboda, Carl A.
1984-01-01
The disclosed ultrasonic hydrometer determines the specific gravity (density) of the electrolyte of a wet battery, such as a lead-acid battery. The hydrometer utilizes a transducer that when excited emits an ultrasonic impulse that traverses through the electrolyte back and forth between spaced sonic surfaces. The transducer detects the returning impulse, and means measures the time "t" between the initial and returning impulses. Considering the distance "d" between the spaced sonic surfaces and the measured time "t", the sonic velocity "V" is calculated with the equation "V=2d/t". The hydrometer also utilizes a thermocouple to measure the electrolyte temperature. A hydrometer database correlates three variable parameters including sonic velocity in and temperature and specific gravity of the electrolyte, for temperature values between 0.degree. and 40.degree. C. and for specific gravity values between 1.05 and 1.30. Upon knowing two parameters (the calculated sonic velocity and the measured temperature), the third parameter (specific gravity) can be uniquely found in the database. The hydrometer utilizes a microprocessor for data storage and manipulation. The disclosed modified battery has a hollow spacer nub on the battery side wall, the sonic surfaces being on the inside of the nub and the electrolyte filling between the surfaces to the exclusion of intervening structure. An accessible pad exposed on the nub wall opposite one sonic surface allows the reliable placement thereagainst of the transducer.
Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter setting turning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. However, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm procedure to search for the optimal solution in both the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm’s performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in a SVM to perform both parameter setting turning for the SVM and feature selection to solve real-world classification problems. This method is called chaotic fruit fly optimization algorithm (CIFOA)-SVM and has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem. PMID:28369096
Laboratory simulation of space plasma phenomena*
NASA Astrophysics Data System (ADS)
Amatucci, B.; Tejero, E. M.; Ganguli, G.; Blackwell, D.; Enloe, C. L.; Gillman, E.; Walker, D.; Gatling, G.
2017-12-01
Laboratory devices, such as the Naval Research Laboratory's Space Physics Simulation Chamber, are large-scale experiments dedicated to the creation of large-volume plasmas with parameters realistically scaled to those found in various regions of the near-Earth space plasma environment. Such devices make valuable contributions to the understanding of space plasmas by investigating phenomena under carefully controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. By working in collaboration with in situ experimentalists to create realistic conditions scaled to those found during the observations of interest, the microphysics responsible for the observed events can be investigated in detail not possible in space. To date, numerous investigations of phenomena such as plasma waves, wave-particle interactions, and particle energization have been successfully performed in the laboratory. In addition to investigations such as plasma wave and instability studies, the laboratory devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this presentation, we will describe several examples of the laboratory investigation of space plasma waves and instabilities and diagnostic development. *This work supported by the NRL Base Program.
Exploring cosmic origins with CORE: Cosmological parameters
NASA Astrophysics Data System (ADS)
Di Valentino, E.; Brinckmann, T.; Gerbino, M.; Poulin, V.; Bouchet, F. R.; Lesgourgues, J.; Melchiorri, A.; Chluba, J.; Clesse, S.; Delabrouille, J.; Dvorkin, C.; Forastieri, F.; Galli, S.; Hooper, D. C.; Lattanzi, M.; Martins, C. J. A. P.; Salvati, L.; Cabass, G.; Caputo, A.; Giusarma, E.; Hivon, E.; Natoli, P.; Pagano, L.; Paradiso, S.; Rubiño-Martin, J. A.; Achúcarro, A.; Ade, P.; Allison, R.; Arroja, F.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartolo, N.; Bartlett, J. G.; Basak, S.; Baumann, D.; de Bernardis, P.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Borrill, J.; Boulanger, F.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C. S.; Castellano, G.; Challinor, A.; Charles, I.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; De Petris, M.; De Zotti, G.; Diego, J. M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Ferraro, S.; Finelli, F.; de Gasperis, G.; Génova-Santos, R. T.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Handley, W.; Hazra, D. K.; Hernández-Monteagudo, C.; Hervias-Caimapo, C.; Hills, M.; Kiiveri, K.; Kisner, T.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lewis, A.; Liguori, M.; Lindholm, V.; Lopez-Caniego, M.; Luzzi, G.; Maffei, B.; Martin, S.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; McCarthy, D.; Melin, J.-B.; Mohr, J. J.; Molinari, D.; Monfardini, A.; Negrello, M.; Notari, A.; Paiella, A.; Paoletti, D.; Patanchon, G.; Piacentini, F.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Quartin, M.; Remazeilles, M.; Roman, M.; Ringeval, C.; Tartari, A.; Tomasi, M.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Väliviita, J.; van de Weygaert, R.; Van Tent, B.; Vennin, V.; Vermeulen, G.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.
2018-04-01
We forecast the main cosmological parameter constraints achievable with the CORE space mission which is dedicated to mapping the polarisation of the Cosmic Microwave Background (CMB). CORE was recently submitted in response to ESA's fifth call for medium-sized mission proposals (M5). Here we report the results from our pre-submission study of the impact of various instrumental options, in particular the telescope size and sensitivity level, and review the great, transformative potential of the mission as proposed. Specifically, we assess the impact on a broad range of fundamental parameters of our Universe as a function of the expected CMB characteristics, with other papers in the series focusing on controlling astrophysical and instrumental residual systematics. In this paper, we assume that only a few central CORE frequency channels are usable for our purpose, all others being devoted to the cleaning of astrophysical contaminants. On the theoretical side, we assume ΛCDM as our general framework and quantify the improvement provided by CORE over the current constraints from the Planck 2015 release. We also study the joint sensitivity of CORE and of future Baryon Acoustic Oscillation and Large Scale Structure experiments like DESI and Euclid. Specific constraints on the physics of inflation are presented in another paper of the series. In addition to the six parameters of the base ΛCDM, which describe the matter content of a spatially flat universe with adiabatic and scalar primordial fluctuations from inflation, we derive the precision achievable on parameters like those describing curvature, neutrino physics, extra light relics, primordial helium abundance, dark matter annihilation, recombination physics, variation of fundamental constants, dark energy, modified gravity, reionization and cosmic birefringence. In addition to assessing the improvement on the precision of individual parameters, we also forecast the post-CORE overall reduction of the allowed parameter space with figures of merit for various models increasing by as much as ~ 107 as compared to Planck 2015, and 105 with respect to Planck 2015 + future BAO measurements.
Numerical modeling of Stickney crater and its aftermath
NASA Astrophysics Data System (ADS)
Schwartz, Stephen R.; Michel, Patrick; Bruck Syal, Megan; Owen, J. Michael; Miller, Paul L.; Richardson, Derek C.; Zhang, Yun
2016-10-01
Phobos is characterized by a large crater called Stickney. Its collisional formation and its aftermath have important implications on the final structure, morphology, and surface properties of Phobos that still need further clarification. This is particularly important in the current environment, with space mission concepts to Phobos under active study by several space agencies. SPH hydrocode simulations of the impact that formed Stickney crater [1] have been performed. Using the Soft-Sphere Discrete Element Method (SSDEM) collisional routine of the N-body code pkdgrav [2], we take the outcome of SPH simulations as inputs and model the ensuing phase of the crater formation process and its ejecta evolution under the gravitational influence of Phobos and Mars. In our simulations, about 9 million particles comprise Phobos' shape [3], and the evolution of particles that are expected to form or leave the crater is followed using multiple plausible orbits for Phobos around Mars. We track the immediate fate of low-speed ejecta (~3-8 m/s), allowing us to test an hypothesis [4] that they may scour certain groove marks that have been observed on Phobos' surface and to quantify the amounts and locations of re-impacting ejecta. We also compute the orbital fate of ejecta whose speed is below the system escape speed (about 3 km/s). This allows us to estimate the thickness and distribution of the final ejecta blanket and to check whether crater chains may form. Finally, particles forming the crater walls are followed until achieving stability, allowing us to estimate the final crater depth and diameter. We will show examples of these simulations from a set of SPH initial conditions and over a range of parameters (e.g., material friction coefficients). Work ongoing to cover a larger range of plausible impact conditions, allowing us to explore different scenarios to explain Phobos' observed properties and to infer more, giving useful constraints to space mission studies. [1] Bruck Syal, M. et al. (this meeting); [2] Schwartz, S.R. et al. 2012, Granul. Matter 14, 363; [3] Willner, K. et al. 2010, E. Earth Planet. Sci. Lett. 294, 541; [4] Wilson, L. & Head, J.W. 2015, Planet. Space Sci. 105, 26.
NASA Astrophysics Data System (ADS)
Lin, Alexander; Johnson, Lindsay C.; Shokouhi, Sepideh; Peterson, Todd E.; Kupinski, Matthew A.
2015-03-01
In synthetic-collimator SPECT imaging, two detectors are placed at different distances behind a multi-pinhole aperture. This configuration allows for image detection at different magnifications and photon energies, resulting in higher overall sensitivity while maintaining high resolution. Image multiplexing the undesired overlapping between images due to photon origin uncertainty may occur in both detector planes and is often present in the second detector plane due to greater magnification. However, artifact-free image reconstruction is possible by combining data from both the front detector (little to no multiplexing) and the back detector (noticeable multiplexing). When the two detectors are used in tandem, spatial resolution is increased, allowing for a higher sensitivity-to-detector-area ratio. Due to variability in detector distances and pinhole spacings found in synthetic-collimator SPECT systems, a large parameter space must be examined to determine optimal imaging configurations. We chose to assess image quality based on the task of estimating activity in various regions of a mouse brain. Phantom objects were simulated using mouse brain data from the Magnetic Resonance Microimaging Neurological Atlas (MRM NeAt) and projected at different angles through models of a synthetic-collimator SPECT system, which was developed by collaborators at Vanderbilt University. Uptake in the different brain regions was modeled as being normally distributed about predetermined means and variances. We computed the performance of the Wiener estimator for the task of estimating activity in different regions of the mouse brain. Our results demonstrate the utility of the method for optimizing synthetic-collimator system design.
NASA Technical Reports Server (NTRS)
Avanov, Levon A.; Gliese, Ulrik; Mariano, Albert; Tucker, Corey; Barrie, Alexander; Chornay, Dennis J.; Pollock, Craig James; Kujawski, Joseph T.; Collinson, Glyn A.; Nguyen, Quang T.;
2011-01-01
The Magnetospheric Multiscale mission (MMS) is designed to study fundamental phenomena in space plasma physics such as a magnetic reconnection. The mission consists of four spacecraft, equipped with identical scientific payloads, allowing for the first measurements of fast dynamics in the critical electron diffusion region where magnetic reconnection occurs and charged particles are demagnetized. The MMS orbit is optimized to ensure the spacecraft spend extended periods of time in locations where reconnection is known to occur: at the dayside magnetopause and in the magnetotail. In order to resolve fine structures of the three dimensional electron distributions in the diffusion region (reconnection site), the Fast Plasma Investigation's (FPI) Dual Electron Spectrometer (DES) is designed to measure three dimensional electron velocity distributions with an extremely high time resolution of 30 ms. In order to achieve this unprecedented sampling rate, four dual spectrometers, each sampling 180 x 45 degree sections of the sky, are installed on each spacecraft. We present results of the comprehensive tests performed on the DES Engineering & Test Unit (ETU). This includes main parameters of the spectrometer such as energy resolution, angular acceptance, and geometric factor along with their variations over the 16 pixels spanning the 180-degree tophat Electro Static Analyzer (ESA) field of view and over the energy of the test beam. A newly developed method for precisely defining the operational space of the instrument is presented as well. This allows optimization of the trade-off between pixel to pixel crosstalk and uniformity of the main spectrometer parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillman, Y.; Prialnik, D.; Kovetz, A.
Can a white dwarf (WD), accreting hydrogen-rich matter from a non-degenerate companion star, ever exceed the Chandrasekhar mass and explode as a SN Ia? We explore the range of accretion rates that allow a WD to secularly grow in mass, and derive limits on the accretion rate and on the initial mass that will allow it to reach 1.4M{sub ⊙}—the Chandrasekhar mass. We follow the evolution through a long series of hydrogen flashes, during which a thick helium shell accumulates. This determines the effective helium mass accretion rate for long-term, self-consistent evolutionary runs with helium flashes. We find that netmore » mass accumulation always occurs despite helium flashes. Although the amount of mass lost during the first few helium shell flashes is a significant fraction of that accumulated prior to the flash, that fraction decreases with repeated helium shell flashes. Eventually no mass is ejected at all during subsequent flashes. This unexpected result occurs because of continual heating of the WD interior by the helium shell flashes near its surface. The effect of heating is to lower the electron degeneracy throughout the WD, especially in the outer layers. This key result yields helium burning that is quasi-steady state, instead of explosive. We thus find a remarkably large parameter space within which long-term, self-consistent simulations show that a WD can grow in mass and reach the Chandrasekhar limit, despite its helium flashes.« less
An XRPD and EPR spectroscopy study of microcrystalline calcite bioprecipitated by Bacillus subtilis
NASA Astrophysics Data System (ADS)
Perito, B.; Romanelli, M.; Buccianti, A.; Passaponti, M.; Montegrossi, G.; Di Benedetto, F.
2018-05-01
We report in this study the first XRPD and EPR spectroscopy characterisation of a biogenic calcite, obtained from the activity of the bacterium Bacillus subtilis. Microcrystalline calcite powders obtained from bacterial culture in a suitable precipitation liquid medium were analysed without further manipulation. Both techniques reveal unusual parameters, closely related to the biological source of the mineral, i.e., to the bioprecipitation process and in particular to the organic matrix observed inside calcite. In detail, XRPD analysis revealed that bacterial calcite has slightly higher c/a lattice parameters ratio than abiotic calcite. This correlation was already noticed in microcrystalline calcite samples grown by bio-mineralisation processes, but it had never been previously verified for bacterial biocalcites. EPR spectroscopy evidenced an anomalously large value of W 6, a parameter that can be linked to occupation by different chemical species in the next nearest neighbouring sites. This parameter allows to clearly distinguish bacterial and abiotic calcite. This latter achievement was obtained after having reduced the parameters space into an unbiased Euclidean one, through an isometric log-ratio transformation. We conclude that this approach enables the coupled use of XRPD and EPR for identifying the traces of bacterial activity in fossil carbonate deposits.
Pozzobon, Victor; Perre, Patrick
2018-01-21
This work provides a model and the associated set of parameters allowing for microalgae population growth computation under intermittent lightning. Han's model is coupled with a simple microalgae growth model to yield a relationship between illumination and population growth. The model parameters were obtained by fitting a dataset available in literature using Particle Swarm Optimization method. In their work, authors grew microalgae in excess of nutrients under flashing conditions. Light/dark cycles used for these experimentations are quite close to those found in photobioreactor, i.e. ranging from several seconds to one minute. In this work, in addition to producing the set of parameters, Particle Swarm Optimization robustness was assessed. To do so, two different swarm initialization techniques were used, i.e. uniform and random distribution throughout the search-space. Both yielded the same results. In addition, swarm distribution analysis reveals that the swarm converges to a unique minimum. Thus, the produced set of parameters can be trustfully used to link light intensity to population growth rate. Furthermore, the set is capable to describe photodamages effects on population growth. Hence, accounting for light overexposure effect on algal growth. Copyright © 2017 Elsevier Ltd. All rights reserved.
A comprehensive study of Mercury and MESSENGER orbit determination
NASA Astrophysics Data System (ADS)
Genova, Antonio; Mazarico, Erwan; Goossens, Sander; Lemoine, Frank G.; Neumann, Gregory A.; Nicholas, Joseph B.; Rowlands, David D.; Smith, David E.; Zuber, Maria; Solomon, Sean C.
2016-10-01
The MErcury, Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft orbited the planet Mercury for more than 4 years. The probe started its science mission in orbit around Mercury on 18 March 2011. The Mercury Laser Altimeter (MLA) and radio science system were the instruments dedicated to geodetic observations of the topography, gravity field, orientation, and tides of Mercury. X-band radio-tracking range-rate data collected by the NASA Deep Space Network (DSN) allowed the determination of Mercury's gravity field to spherical harmonic degree and order 100, the planet's obliquity, and the Love number k2.The extensive range data acquired in orbit around Mercury during the science mission (from April 2011 to April 2015), and during the three flybys of the planet in 2008 and 2009, provide a powerful dataset for the investigation of Mercury's ephemeris. The proximity of Mercury's orbit to the Sun leads to a significant perihelion precession attributable to the gravitational flattening of the Sun (J2) and the Parameterized Post-Newtonian (PPN) coefficients γ and β, which describe the space curvature produced by a unit rest mass and the nonlinearity in superposition of gravity, respectively. Therefore, the estimation of Mercury's ephemeris can provide crucial information on the interior structure of the Sun and Einstein's general theory of relativity. However, the high correlation among J2, γ, and β complicates the combined recovery of these parameters, so additional assumptions are required, such as the Nordtvedt relationship η = 4β - γ - 3.We have modified our orbit determination software, GEODYN II, to enable the simultaneous integration of the spacecraft and central body trajectories. The combined estimation of the MESSENGER and Mercury orbits allowed us to determine a more accurate gravity field, orientation, and tides of Mercury, and the values of GM and J2 for the Sun, where G is the gravitational constant and M is the solar mass. Several test cases illuminate results on the estimation of PPN parameters.
An integrated system for dynamic control of auditory perspective in a multichannel sound field
NASA Astrophysics Data System (ADS)
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
CAPS Simulation Environment Development
NASA Technical Reports Server (NTRS)
Murphy, Douglas G.; Hoffman, James A.
2005-01-01
The final design for an effective Comet/Asteroid Protection System (CAPS) will likely come after a number of competing designs have been simulated and evaluated. Because of the large number of design parameters involved in a system capable of detecting an object, accurately determining its orbit, and diverting the impact threat, a comprehensive simulation environment will be an extremely valuable tool for the CAPS designers. A successful simulation/design tool will aid the user in identifying the critical parameters in the system and eventually allow for automatic optimization of the design once the relationships of the key parameters are understood. A CAPS configuration will consist of space-based detectors whose purpose is to scan the celestial sphere in search of objects likely to make a close approach to Earth and to determine with the greatest possible accuracy the orbits of those objects. Other components of a CAPS configuration may include systems for modifying the orbits of approaching objects, either for the purpose of preventing a collision or for positioning the object into an orbit where it can be studied or used as a mineral resource. The Synergistic Engineering Environment (SEE) is a space-systems design, evaluation, and visualization software tool being leveraged to simulate these aspects of the CAPS study. The long-term goal of the SEE is to provide capabilities to allow the user to build and compare various CAPS designs by running end-to-end simulations that encompass the scanning phase, the orbit determination phase, and the orbit modification phase of a given scenario. Herein, a brief description of the expected simulation phases is provided, the current status and available features of the SEE software system is reported, and examples are shown of how the system is used to build and evaluate a CAPS detection design. Conclusions and the roadmap for future development of the SEE are also presented.
Solar Energetic Particles Events and Human Exploration: Measurements in a Space Habitat
NASA Astrophysics Data System (ADS)
Narici, L.; Berrilli, F.; Casolino, M.; Del Moro, D.; Forte, R.; Giovannelli, L.; Martucci, M.; Mergè, M.; Picozza, P.; Rizzo, A.; Scardigli, S.; Sparvoli, R.; Zeitlin, C.
2016-12-01
Solar activity is the source of Space Weather disturbances. Flares, CME and coronal holes modulate physical conditions of circumterrestrial and interplanetary space and ultimately the fluxes of high-energy ionized particles, i.e., solar energetic particle (SEP) and galactic cosmic ray (GCR) background. This ionizing radiation affects spacecrafts and biological systems, therefore it is an important issue for human exploration of space. During a deep space travel (for example the trip to Mars) radiation risk thresholds may well be exceeded by the crew, so mitigation countermeasures must be employed. Solar particle events (SPE) constitute high risks due to their impulsive high rate dose. Forecasting SPE appears to be needed and also specifically tailored to the human exploration needs. Understanding the parameters of the SPE that produce events leading to higher health risks for the astronauts in deep space is therefore a first priority issue. Measurements of SPE effects with active devices in LEO inside the ISS can produce important information for the specific SEP measured, relative to the specific detector location in the ISS (in a human habitat with a shield typical of manned space-crafts). Active detectors can select data from specific geo-magnetic regions along the orbits, allowing geo-magnetic selections that best mimic deep space radiation. We present results from data acquired in 2010 - 2012 by the detector system ALTEA inside the ISS (18 SPEs detected). We compare this data with data from the detector Pamela on a LEO satellite, with the RAD data during the Curiosity Journey to Mars, with GOES data and with several Solar physical parameters. While several features of the radiation modulation are easily understood by the effect of the geomagnetic field, as an example we report a proportionality of the flux in the ISS with the energetic proton flux measured by GOES, some features appear more difficult to interpret. The final goal of this work is to find the characteristics of solar events leading to highest radiation risks in a human habitat during deep space exploration to best focus the needed forecasting.
Space Laboratory on a Table Top: A Next Generative ECLSS design and diagnostic tool
NASA Technical Reports Server (NTRS)
Ramachandran, N.
2005-01-01
This paper describes the development plan for a comprehensive research and diagnostic tool for aspects of advanced life support systems in space-based laboratories. Specifically it aims to build a high fidelity tabletop model that can be used for the purpose of risk mitigation, failure mode analysis, contamination tracking, and testing reliability. We envision a comprehensive approach involving experimental work coupled with numerical simulation to develop this diagnostic tool. It envisions a 10% scale transparent model of a space platform such as the International Space Station that operates with water or a specific matched index of refraction liquid as the working fluid. This allows the scaling of a 10 ft x 10 ft x 10 ft room with air flow to 1 ft x 1 ft x 1 ft tabletop model with water/liquid flow. Dynamic similitude for this length scale dictates model velocities to be 67% of full-scale and thereby the time scale of the model to represent 15% of the full- scale system; meaning identical processes in the model are completed in 15% of the full- scale-time. The use of an index matching fluid (fluid that matches the refractive index of cast acrylic, the model material) allows making the entire model (with complex internal geometry) transparent and hence conducive to non-intrusive optical diagnostics. So using such a system one can test environment control parameters such as core flows (axial flows), cross flows (from registers and diffusers), potential problem areas such as flow short circuits, inadequate oxygen content, build up of other gases beyond desirable levels, test mixing processes within the system at local nodes or compartments and assess the overall system performance. The system allows quantitative measurements of contaminants introduced in the system and allows testing and optimizing the tracking process and removal of contaminants. The envisaged system will be modular and hence flexible for quick configuration change and subsequent testing. The data and inferences from the tests will allow for improvements in the development and design of next generation life support systems and configurations. Preliminary experimental and modeling work in this area will be presented. This involves testing of a single inlet-exit model with detailed 3-D flow visualization and quantitative diagnostics and computational modeling of the system.
NASA Technical Reports Server (NTRS)
Gerberich, Matthew W.; Oleson, Steven R.
2013-01-01
The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database, to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate the overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters through basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model in regards to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data of completed NASA missions.
NASA Astrophysics Data System (ADS)
Yoon, Mijin; Jee, Myungkook James; Tyson, Tony
2018-01-01
The Deep Lens Survey (DLS), a precursor to the Large Synoptic Survey Telescope (LSST), is a 20 sq. deg survey carried out with NOAO’s Blanco and Mayall telescopes. The strength of the survey lies in its depth reaching down to ~27th mag in BVRz bands. This enables a broad redshift baseline study and allows us to investigate cosmological evolution of the large-scale structure. In this poster, we present the first cosmological analysis from the DLS using galaxy-shear correlations and galaxy clustering signals. Our DLS shear calibration accuracy has been validated through the most recent public weak-lensing data challenge. Photometric redshift systematic errors are tested by performing lens-source flip tests. Instead of real-space correlations, we reconstruct band-limited power spectra for cosmological parameter constraints. Our analysis puts a tight constraint on the matter density and the power spectrum normalization parameters. Our results are highly consistent with our previous cosmic shear analysis and also with the Planck CMB results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mori, Taro; Kohri, Kazunori; White, Jonathan, E-mail: moritaro@post.kek.jp, E-mail: kohri@post.kek.jp, E-mail: jwhite@post.kek.jp
We consider inflation in the system containing a Ricci scalar squared term and a canonical scalar field with quadratic mass term. In the Einstein frame this model takes the form of a two-field inflation model with a curved field space, and under the slow-roll approximation contains four free parameters corresponding to the masses of the two fields and their initial positions. We investigate how the inflationary dynamics and predictions for the primordial curvature perturbation depend on these four parameters. Our analysis is based on the δ N formalism, which allows us to determine predictions for the non-Gaussianity of the curvaturemore » perturbation as well as for quantities relating to its power spectrum. Depending on the choice of parameters, we find predictions that range from those of R {sup 2} inflation to those of quadratic chaotic inflation, with the non-Gaussianity of the curvature perturbation always remaining small. Using our results we are able to put constraints on the masses of the two fields.« less
The extended Kubelka-Munk theory and its application to colloidal systems
NASA Astrophysics Data System (ADS)
Alcaraz de la Osa, R.; Fernández, A.; Gutiérrez, Y.; Ortiz, D.; González, F.; Moreno, F.; Saiz, J. M.
2017-08-01
The use of nanoparticles is spreading in many fields and a frequent way of preparing them is in the form of colloids, whose characterization becomes increasingly important. The spectral reflectance and transmittance curves of such colloids exhibit a strong dependence with the main parameters of the system. By means of a two-flux model we have performed a colorimetric study of gold colloids varying several parameters of the system, including the radius of the particles, the particle number density, the thickness of the system and the refractive index of the surrounding medium. In all cases, trajectories in the L*a*b* color space have been obtained, as well as the evolution of the luminosity, chroma and hue, either for reflectance or transmittance. The observed colors agree well with typical colors found in the literature for colloidal gold, and could allow for a fast assessment of the parameters involved, e.g., the radius of the nanoparticle during the fabrication process.
Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.
El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher
2018-01-01
Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.
Radio emission in Mercury magnetosphere
NASA Astrophysics Data System (ADS)
Varela, J.; Reville, V.; Brun, A. S.; Pantellini, F.; Zarka, P.
2016-10-01
Context. Active stars possess magnetized wind that has a direct impact on planets that can lead to radio emission. Mercury is a good test case to study the effect of the solar wind and interplanetary magnetic field (IMF) on radio emission driven in the planet magnetosphere. Such studies could be used as proxies to characterize the magnetic field topology and intensity of exoplanets. Aims: The aim of this study is to quantify the radio emission in the Hermean magnetosphere. Methods: We use the magnetohydrodynamic code PLUTO in spherical coordinates with an axisymmetric multipolar expansion for the Hermean magnetic field, to analyze the effect of the IMF orientation and intensity, as well as the hydrodynamic parameters of the solar wind (velocity, density and temperature), on the net power dissipated on the Hermean day and night side. We apply the formalism derived by Zarka et al. (2001, Astrophys. Space Sci., 277, 293), Zarka (2007, Planet. Space Sci., 55, 598) to infer the radio emission level from the net dissipated power. We perform a set of simulations with different hydrodynamic parameters of the solar wind, IMF orientations and intensities, that allow us to calculate the dissipated power distribution and infer the existence of radio emission hot spots on the planet day side, and to calculate the integrated radio emission of the Hermean magnetosphere. Results: The obtained radio emission distribution of dissipated power is determined by the IMF orientation (associated with the reconnection regions in the magnetosphere), although the radio emission strength is dependent on the IMF intensity and solar wind hydro parameters. The calculated total radio emission level is in agreement with the one estimated in Zarka et al. (2001, Astrophys. Space Sci., 277, 293) , between 5 × 105 and 2 × 106 W.
Modeling the Structure and Dynamics of Dwarf Spheroidal Galaxies with Dark Matter and Tides
NASA Astrophysics Data System (ADS)
Muñoz, Ricardo R.; Majewski, Steven R.; Johnston, Kathryn V.
2008-05-01
We report the results of N-body simulations of disrupting satellites aimed at exploring whether the observed features of dSphs can be accounted for with simple, mass-follows-light (MFL) models including tidal disruption. As a test case, we focus on the Carina dwarf spheroidal (dSph), which presently is the dSph system with the most extensive data at large radius. We find that previous N-body, MFL simulations of dSphs did not sufficiently explore the parameter space of satellite mass, density, and orbital shape to find adequate matches to Galactic dSph systems, whereas with a systematic survey of parameter space we are able to find tidally disrupting, MFL satellite models that rather faithfully reproduce Carina's velocity profile, velocity dispersion profile, and projected density distribution over its entire sampled radius. The successful MFL model satellites have very eccentric orbits, currently favored by CDM models, and central velocity dispersions that still yield an accurate representation of the bound mass and observed central M/L ~ 40 of Carina, despite inflation of the velocity dispersion outside the dSph core by unbound debris. Our survey of parameter space also allows us to address a number of commonly held misperceptions of tidal disruption and its observable effects on dSph structure and dynamics. The simulations suggest that even modest tidal disruption can have a profound effect on the observed dynamics of dSph stars at large radii. Satellites that are well described by tidally disrupting MFL models could still be fully compatible with ΛCDM if, for example, they represent a later stage in the evolution of luminous subhalos.
Understanding Epileptiform After-Discharges as Rhythmic Oscillatory Transients.
Baier, Gerold; Taylor, Peter N; Wang, Yujiang
2017-01-01
Electro-cortical activity in patients with epilepsy may show abnormal rhythmic transients in response to stimulation. Even when using the same stimulation parameters in the same patient, wide variability in the duration of transient response has been reported. These transients have long been considered important for the mapping of the excitability levels in the epileptic brain but their dynamic mechanism is still not well understood. To investigate the occurrence of abnormal transients dynamically, we use a thalamo-cortical neural population model of epileptic spike-wave activity and study the interaction between slow and fast subsystems. In a reduced version of the thalamo-cortical model, slow wave oscillations arise from a fold of cycles (FoC) bifurcation. This marks the onset of a region of bistability between a high amplitude oscillatory rhythm and the background state. In vicinity of the bistability in parameter space, the model has excitable dynamics, showing prolonged rhythmic transients in response to suprathreshold pulse stimulation. We analyse the state space geometry of the bistable and excitable states, and find that the rhythmic transient arises when the impending FoC bifurcation deforms the state space and creates an area of locally reduced attraction to the fixed point. This area essentially allows trajectories to dwell there before escaping to the stable steady state, thus creating rhythmic transients. In the full thalamo-cortical model, we find a similar FoC bifurcation structure. Based on the analysis, we propose an explanation of why stimulation induced epileptiform activity may vary between trials, and predict how the variability could be related to ongoing oscillatory background activity. We compare our dynamic mechanism with other mechanisms (such as a slow parameter change) to generate excitable transients, and we discuss the proposed excitability mechanism in the context of stimulation responses in the epileptic cortex.
NASA Astrophysics Data System (ADS)
Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Scoccimarro, Román; Crocce, Martín; Dalla Vecchia, Claudio; Montesano, Francesco; Gil-Marín, Héctor; Ross, Ashley J.; Beutler, Florian; Rodríguez-Torres, Sergio; Chuang, Chia-Hsun; Prada, Francisco; Kitaura, Francisco-Shu; Cuesta, Antonio J.; Eisenstein, Daniel J.; Percival, Will J.; Vargas-Magaña, Mariana; Tinker, Jeremy L.; Tojeiro, Rita; Brownstein, Joel R.; Maraston, Claudia; Nichol, Robert C.; Olmstead, Matthew D.; Samushia, Lado; Seo, Hee-Jong; Streblyanska, Alina; Zhao, Gong-bo
2017-05-01
We extract cosmological information from the anisotropic power-spectrum measurements from the recently completed Baryon Oscillation Spectroscopic Survey (BOSS), extending the concept of clustering wedges to Fourier space. Making use of new fast-Fourier-transform-based estimators, we measure the power-spectrum clustering wedges of the BOSS sample by filtering out the information of Legendre multipoles ℓ > 4. Our modelling of these measurements is based on novel approaches to describe non-linear evolution, bias and redshift-space distortions, which we test using synthetic catalogues based on large-volume N-body simulations. We are able to include smaller scales than in previous analyses, resulting in tighter cosmological constraints. Using three overlapping redshift bins, we measure the angular-diameter distance, the Hubble parameter and the cosmic growth rate, and explore the cosmological implications of our full-shape clustering measurements in combination with cosmic microwave background and Type Ia supernova data. Assuming a Λ cold dark matter (ΛCDM) cosmology, we constrain the matter density to Ω M= 0.311_{-0.010}^{+0.009} and the Hubble parameter to H_0 = 67.6_{-0.6}^{+0.7} km s^{-1 Mpc^{-1}}, at a confidence level of 68 per cent. We also allow for non-standard dark energy models and modifications of the growth rate, finding good agreement with the ΛCDM paradigm. For example, we constrain the equation-of-state parameter to w = -1.019_{-0.039}^{+0.048}. This paper is part of a set that analyses the final galaxy-clustering data set from BOSS. The measurements and likelihoods presented here are combined with others in Alam et al. to produce the final cosmological constraints from BOSS.
Redshift-space distortions around voids
NASA Astrophysics Data System (ADS)
Cai, Yan-Chuan; Taylor, Andy; Peacock, John A.; Padilla, Nelson
2016-11-01
We have derived estimators for the linear growth rate of density fluctuations using the cross-correlation function (CCF) of voids and haloes in redshift space. In linear theory, this CCF contains only monopole and quadrupole terms. At scales greater than the void radius, linear theory is a good match to voids traced out by haloes; small-scale random velocities are unimportant at these radii, only tending to cause small and often negligible elongation of the CCF near its origin. By extracting the monopole and quadrupole from the CCF, we measure the linear growth rate without prior knowledge of the void profile or velocity dispersion. We recover the linear growth parameter β to 9 per cent precision from an effective volume of 3( h-1Gpc)3 using voids with radius >25 h-1Mpc. Smaller voids are predominantly sub-voids, which may be more sensitive to the random velocity dispersion; they introduce noise and do not help to improve measurements. Adding velocity dispersion as a free parameter allows us to use information at radii as small as half of the void radius. The precision on β is reduced to 5 per cent. Voids show diverse shapes in redshift space, and can appear either elongated or flattened along the line of sight. This can be explained by the competing amplitudes of the local density contrast, plus the radial velocity profile and its gradient. The distortion pattern is therefore determined solely by the void profile and is different for void-in-cloud and void-in-void. This diversity of redshift-space void morphology complicates measurements of the Alcock-Paczynski effect using voids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scargill, James H. C.
Theories with more than one vacuum allow quantum transitions between them, which may proceed via bubble nucleation; theories with more than two vacua posses additional decay modes in which the wall of a bubble may further decay. The instantons which mediate such a process have O(3) symmetry (in four dimensions, rather than the usual O(4) symmetry of homogeneous vacuum decay), and have been called ‘barnacles’; previously they have been studied in flat space, in the thin wall limit, and this paper extends the analysis to include gravity. It is found that there are regions of parameter space in which, givenmore » an initial bubble, barnacles are the favoured subsequent decay process, and that the inclusion of gravity can enlarge this region. The relation to other heterogeneous vacuum decay scenarios, as well as some of the phenomenological implications of barnacles are briefly discussed.« less
Slowing of Bessel light beam group velocity
NASA Astrophysics Data System (ADS)
Alfano, Robert R.; Nolan, Daniel A.
2016-02-01
Bessel light beams experience diffraction-limited propagation. A different basic spatial property of a Bessel beam is reported and investigated. It is shown a Bessel beam is a natural waveguide causing its group velocity can be subluminal (slower than the speed of light) when the optical frequency ω approaches a critical frequency ωc. A free space dispersion relation for a Bessel beam, the dependence of its wave number on its angular frequency, is developed from which the Bessel beam's subluminal group velocity is derived. It is shown under reasonable laboratory conditions that a Bessel light beam has associated parameters that allow slowing near a critical frequency. The application of Bessel beams with 1 μm spot size to slow down 100 ps to 200 ps over 1 cm length for a natural optical buffer in free space is presented.
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2015-04-01
A multi-scale parameter-estimation method, as presented by Samaniego et al. (2010), is implemented and extended for the conceptual hydrological model COSERO. COSERO is a HBV-type model that is specialized for alpine-environments, but has been applied over a wide range of basins all over the world (see: Kling et al., 2014 for an overview). Within the methodology available small-scale information (DEM, soil texture, land cover, etc.) is used to estimate the coarse-scale model parameters by applying a set of transfer-functions (TFs) and subsequent averaging methods, whereby only TF hyper-parameters are optimized against available observations (e.g. runoff data). The parameter regionalisation approach was extended in order to allow for a more meta-heuristical handling of the transfer-functions. The two main novelties are: 1. An explicit introduction of constrains into parameter estimation scheme: The constraint scheme replaces invalid parts of the transfer-function-solution space with valid solutions. It is inspired by applications in evolutionary algorithms and related to the combination of learning and evolution. This allows the consideration of physical and numerical constraints as well as the incorporation of a priori modeller-experience into the parameter estimation. 2. Spline-based transfer-functions: Spline-based functions enable arbitrary forms of transfer-functions: This is of importance since in many cases the general relationship between sub-grid information and parameters are known, but not the form of the transfer-function itself. The contribution presents the results and experiences with the adopted method and the introduced extensions. Simulation are performed for the pre-alpine/alpine Traisen catchment in Lower Austria. References: Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., doi: 10.1029/2008WR007327 Kling, H., Stanzel, P., Fuchs, M., and Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi: 10.1080/02626667.2014.959956.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevenson, Simon; Ohme, Frank; Fairhurst, Stephen, E-mail: simon.stevenson@ligo.org
2015-09-01
The coalescence of compact binaries containing neutron stars or black holes is one of the most promising signals for advanced ground-based laser interferometer gravitational-wave (GW) detectors, with the first direct detections expected over the next few years. The rate of binary coalescences and the distribution of component masses is highly uncertain, and population synthesis models predict a wide range of plausible values. Poorly constrained parameters in population synthesis models correspond to poorly understood astrophysics at various stages in the evolution of massive binary stars, the progenitors of binary neutron star and binary black hole systems. These include effects such asmore » supernova kick velocities, parameters governing the energetics of common envelope evolution and the strength of stellar winds. Observing multiple binary black hole systems through GWs will allow us to infer details of the astrophysical mechanisms that lead to their formation. Here we simulate GW observations from a series of population synthesis models including the effects of known selection biases, measurement errors and cosmology. We compare the predictions arising from different models and show that we will be able to distinguish between them with observations (or the lack of them) from the early runs of the advanced LIGO and Virgo detectors. This will allow us to narrow down the large parameter space for binary evolution models.« less
Statistical Estimation of Heterogeneities: A New Frontier in Well Testing
NASA Astrophysics Data System (ADS)
Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.
2001-12-01
Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.
Power-law modeling based on least-squares minimization criteria.
Hernández-Bermejo, B; Fairén, V; Sorribas, A
1999-10-01
The power-law formalism has been successfully used as a modeling tool in many applications. The resulting models, either as Generalized Mass Action or as S-systems models, allow one to characterize the target system and to simulate its dynamical behavior in response to external perturbations and parameter changes. The power-law formalism was first derived as a Taylor series approximation in logarithmic space for kinetic rate-laws. The especial characteristics of this approximation produce an extremely useful systemic representation that allows a complete system characterization. Furthermore, their parameters have a precise interpretation as local sensitivities of each of the individual processes and as rate-constants. This facilitates a qualitative discussion and a quantitative estimation of their possible values in relation to the kinetic properties. Following this interpretation, parameter estimation is also possible by relating the systemic behavior to the underlying processes. Without leaving the general formalism, in this paper we suggest deriving the power-law representation in an alternative way that uses least-squares minimization. The resulting power-law mimics the target rate-law in a wider range of concentration values than the classical power-law. Although the implications of this alternative approach remain to be established, our results show that the predicted steady-state using the least-squares power-law is closest to the actual steady-state of the target system.
Multi-technique combination of space geodesy observations
NASA Astrophysics Data System (ADS)
Zoulida, Myriam; Pollet, Arnaud; Coulot, David; Biancale, Richard; Rebischung, Paul; Collilieux, Xavier
2014-05-01
Over the last few years, combination at the observation level (COL) of the different space geodesy techniques has been thoroughly studied. Various studies have shown that this type of combination can take advantage of common parameters. Some of these parameters, such as Zenithal Tropospheric Delays (ZTD), are available on co-location sites, where more than one technique is present. Local ties (LT) are provided for these sites, and act as intra-technique links and allow resulting terrestrial reference frames (TRF) to be homogeneous. However the use of LT can be problematic on weekly calculations, where their geographical distribution can be poor, and there are often differences observed between available LTs and space geodesy results. Similar co-locations can be found on multi-technique satellites, where more than one technique receiver is featured. A great advantage of these space ties (STs) is the densification of co-locations as the orbiting satellite acts as a moving station. The challenge of using space ties relies in the accurate knowledge or estimation of their values, as officially provided values are sometimes not reaching the required level of precision for the solution, due to receivers' or acting forces mismodelings and other factors. Thus, the necessity of an estimation and/or weighting strategy for the STs is introduced. To this day, on subsets of available data, using STs has shown promising results regarding the TRF determination through the stations' positions estimation, on the orbit determination of the GPS constellation and on the GPS antenna Phase Center Offsets and Variations (PCO and PCV) . In this study, results from a multi-technique combination including the Jason-2 satellite and its effect on the GNSS orbit determination during the CONT2011 period are presented, as well as some preliminary results on station positions' determination. Comparing resulting orbits with official solutions provides an assessment of the effect on the orbit calculation by introducing orbiting stations' observations. Moreover, simulated solutions will be presented, showing the effect of adding multi-technique observations on the estimation of STs parameters errors, such as Laser Retroreflector Offsets (LROs) or GNSS antennae Phase Center Offsets (PCOs).
Oscillatory cellular patterns in three-dimensional directional solidification
NASA Astrophysics Data System (ADS)
Tourret, D.; Debierre, J.-M.; Song, Y.; Mota, F. L.; Bergeon, N.; Guérin, R.; Trivedi, R.; Billia, B.; Karma, A.
2015-10-01
We present a phase-field study of oscillatory breathing modes observed during the solidification of three-dimensional cellular arrays in microgravity. Directional solidification experiments conducted onboard the International Space Station have allowed us to observe spatially extended homogeneous arrays of cells and dendrites while minimizing the amount of gravity-induced convection in the liquid. In situ observations of transparent alloys have revealed the existence, over a narrow range of control parameters, of oscillations in cellular arrays with a period ranging from about 25 to 125 min. Cellular patterns are spatially disordered, and the oscillations of individual cells are spatiotemporally uncorrelated at long distance. However, in regions displaying short-range spatial ordering, groups of cells can synchronize into oscillatory breathing modes. Quantitative phase-field simulations show that the oscillatory behavior of cells in this regime is linked to a stability limit of the spacing in hexagonal cellular array structures. For relatively high cellular front undercooling (i.e., low growth velocity or high thermal gradient), a gap appears in the otherwise continuous range of stable array spacings. Close to this gap, a sustained oscillatory regime appears with a period that compares quantitatively well with experiment. For control parameters where this gap exists, oscillations typically occur for spacings at the edge of the gap. However, after a change of growth conditions, oscillations can also occur for nearby values of control parameters where this gap just closes and a continuous range of spacings exists. In addition, sustained oscillations at to the opening of this stable gap exhibit a slow periodic modulation of the phase-shift among cells with a slower period of several hours. While long-range coherence of breathing modes can be achieved in simulations for a perfect spatial arrangement of cells as initial condition, global disorder is observed in both three-dimensional experiments and simulations from realistic noisy initial conditions. In the latter case, erratic tip-splitting events promoted by large-amplitude oscillations contribute to maintaining the long-range array disorder, unlike in thin-sample experiments where long-range coherence of oscillations is experimentally observable.
Oscillatory cellular patterns in three-dimensional directional solidification
Tourret, D.; Debierre, J. -M.; Song, Y.; ...
2015-09-11
We present a phase-field study of oscillatory breathing modes observed during the solidification of three-dimensional cellular arrays in micro-gravity. Directional solidification experiments conducted onboard the International Space Station have allowed for the first time to observe spatially extended homogeneous arrays of cells and dendrites while minimizing the amount of gravity-induced convection in the liquid. In situ observations of transparent alloys have revealed the existence, over a narrow range of control parameters, of oscillations in cellular arrays with a period ranging from about 25 to 125 minutes. Cellular patterns are spatially disordered, and the oscillations of individual cells are spatiotemporally uncorrelatedmore » at long distance. However, in regions displaying short-range spatial ordering, groups of cells can synchronize into oscillatory breathing modes. Quantitative phase-field simulations show that the oscillatory behavior of cells in this regime is linked to a stability limit of the spacing in hexagonal cellular array structures. For relatively high cellular front undercooling (\\ie low growth velocity or high thermal gradient), a gap appears in the otherwise continuous range of stable array spacings. Close to this gap, a sustained oscillatory regime appears with a period that compares quantitatively well with experiment. For control parameters where this gap exist, oscillations typically occur for spacings at the edge of the gap. However, after a change of growth conditions, oscillations can also occur for nearby values of control parameters where this gap just closes and a continuous range of spacings exists. In addition, sustained oscillations at to the opening of this stable gap exhibit a slow periodic modulation of the phase-shift among cells with a slower period of several hours. While long-range coherence of breathing modes can be achieved in simulations for a perfect spatial arrangement of cells as initial condition, global disorder is observed in both three-dimensional experiments and simulations from realistic noisy initial conditions. The, erratic tip splitting events promoted by large amplitude oscillations contribute to maintaining the long-range array disorder, unlike in thin sample experiments where long-range coherence of oscillations is experimentally observable.« less