Planning Robot-Control Parameters With Qualitative Reasoning
NASA Technical Reports Server (NTRS)
Peters, Stephen F.
1993-01-01
Qualitative-reasoning planning algorithm helps to determine quantitative parameters controlling motion of robot. Algorithm regarded as performing search in multidimensional space of control parameters from starting point to goal region in which desired result of robotic manipulation achieved. Makes use of directed graph representing qualitative physical equations describing task, and interacts, at each sampling period, with history of quantitative control parameters and sensory data, to narrow search for reliable values of quantitative control parameters.
NASA Astrophysics Data System (ADS)
Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R.
2016-09-01
In this paper, we present an end-to-end approach that employs machine learning techniques and Ontology-based Bayesian Networks (BN) to characterize the behavior of resident space objects. State-of-the-art machine learning architectures (e.g., Extreme Learning Machines, Convolutional Deep Networks) are trained on physical models to learn the Resident Space Object (RSO) features in the vectorized energy and momentum states and parameters. The mapping from measurements to vectorized energy and momentum states and parameters enables behavior characterization via clustering in the feature space and subsequent RSO classification. Additionally, Space Object Behavioral Ontologies (SOBO) are employed to define and capture the domain knowledge base (KB), and BNs are constructed from the SOBO in a semi-automatic fashion to execute probabilistic reasoning over conclusions drawn from trained classifiers and/or directly from processed data. Such an approach enables integrating machine learning classifiers and probabilistic reasoning to support higher-level decision making for space domain awareness applications. The innovation here is to use these methods (which have enjoyed great success in other domains) in synergy, enabling a "from data to discovery" paradigm by facilitating the linkage and fusion of large and disparate sources of information via a Big Data Science and Analytics framework.
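The Extreme Learning Machine named above is simple enough to sketch: the hidden-layer weights are drawn at random and only the output weights are trained, by ordinary least squares. The data below are a synthetic stand-in, not RSO measurements, and all sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: map 2-D "measurements" to a 1-D "state" target.
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Extreme Learning Machine: fixed random hidden layer, trained output weights.
n_hidden = 50
W = rng.normal(size=(2, n_hidden))            # random input weights (never trained)
b = rng.normal(size=n_hidden)                 # random biases (never trained)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares

mse = float(np.mean((H @ beta - y) ** 2))
print(round(mse, 4))
```

Because only a linear system is solved, training is essentially instantaneous, which is the appeal of ELMs for large catalogs.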
Application of physical parameter identification to finite-element models
NASA Technical Reports Server (NTRS)
Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.
1987-01-01
The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electro-magnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electro-motive force (EMF) effects. Uncertainties in both estimated physical parameters and modal behavior variables are given.
Effects of behavioral patterns and network topology structures on Parrondo’s paradox
Ye, Ye; Cheong, Kang Hao; Cen, Yu-wan; Xie, Neng-gang
2016-01-01
A multi-agent Parrondo’s model based on complex networks is used in the current study. For Parrondo’s game A, the individual interaction can be categorized into five types of behavioral patterns: the Matthew effect, harmony, cooperation, poor-competition-rich-cooperation and a random mode. The parameter space of Parrondo’s paradox pertaining to each behavioral pattern, and the gradual change of the parameter space from a two-dimensional lattice to a random network and from a random network to a scale-free network was analyzed. The simulation results suggest that the size of the region of the parameter space that elicits Parrondo’s paradox is positively correlated with the heterogeneity of the degree distribution of the network. For two distinct sets of probability parameters, the microcosmic reasons underlying the occurrence of the paradox under the scale-free network are elaborated. Common interaction mechanisms of the asymmetric structure of game B, behavioral patterns and network topology are also revealed. PMID:27845430
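The paradox itself can be reproduced in a single-agent sketch (the paper's networked, multi-agent version is more elaborate): with the commonly used textbook game parameters below, game A and the capital-dependent game B are each losing, yet randomly alternating them wins. The parameter values are the standard illustrative choices, not those of the study.

```python
import random

def play(strategy, n_steps=200_000, eps=0.005, seed=1):
    """Average capital change per step for a given game-choice strategy."""
    rng = random.Random(seed)
    capital = 0
    for _ in range(n_steps):
        if strategy(rng) == 'A':
            p = 0.5 - eps                 # game A: slightly losing coin
        else:                             # game B: capital-dependent branch
            p = (0.10 - eps) if capital % 3 == 0 else (0.75 - eps)
        capital += 1 if rng.random() < p else -1
    return capital / n_steps

only_a = play(lambda rng: 'A')
only_b = play(lambda rng: 'B')
mixed  = play(lambda rng: rng.choice('AB'))  # random alternation of A and B
print(only_a < 0, only_b < 0, mixed > 0)
```

Both pure strategies drift downward, while the random mixture drifts upward, which is precisely Parrondo's paradox.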
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
NASA Astrophysics Data System (ADS)
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine uses a preliminary sensitivity analysis to reduce the dimensionality of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
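The core of the NSGA-II step above, keeping only non-dominated parameter sets, can be sketched with a brute-force Pareto filter. The two objectives and their values below are hypothetical calibration errors, not SWMM outputs.

```python
def pareto_front(points):
    """Return the non-dominated subset for two minimization objectives."""
    front = []
    for p in points:
        # p is dominated if some other point is no worse in both objectives.
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

# Hypothetical (flow error, depth error) pairs for candidate parameter sets.
candidates = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.7, 0.7), (0.6, 0.4)]
print(sorted(pareto_front(candidates)))
```

The dominated candidate (0.7, 0.7) is dropped; the survivors trade one objective against the other, which is what the calibration routine hands back to the designer.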
Can the Equivalent Sphere Model Approximate Organ Doses in Space?
NASA Technical Reports Server (NTRS)
Lin, Zi-Wei
2007-01-01
For space radiation protection it is often useful to calculate dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to simulate the BFO dose. However, many previous studies have concluded that a 5 cm sphere gives very different dose values from the exact BFO values. One study [1] concludes that a 9 cm sphere is a reasonable approximation for BFO doses in solar particle event environments. In this study we use a deterministic radiation transport [2] to investigate the reason behind these observations and to extend earlier studies. We take different space radiation environments, including seven galactic cosmic ray environments and six large solar particle events, and calculate the dose and dose equivalent in the skin, eyes and BFO using their thickness distribution functions from the CAM (Computerized Anatomical Man) model [3]. The organ doses have been evaluated with a water or aluminum shielding of an areal density from 0 to 20 g/sq cm. We then compare with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we address why the equivalent sphere model is not a good approximation in some cases. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for BFO but is unacceptable for the eye or the skin. For galactic cosmic ray environments, the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eye or the skin, but is unacceptable for the dose of the eye or the skin.
The ranges of the radius parameters are also being investigated, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study [1]. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eye, and 3.5 to 5.6 cm for the skin; while the radius parameters are between 10 and 13 cm for the BFO dose.
NASA Technical Reports Server (NTRS)
Ebeling, Charles; Beasley, Kenneth D.
1992-01-01
The first year of research to provide NASA support in predicting operational and support parameters and costs of proposed space systems is reported. Some of the specific research objectives were (1) to develop a methodology for deriving reliability and maintainability parameters and, based upon their estimates, determine the operational capability and support costs, and (2) to identify data sources and establish an initial data base to implement the methodology. Implementation of the methodology is accomplished through the development of a comprehensive computer model. While the model appears to work reasonably well when applied to aircraft systems, it was not accurate when used for space systems. The model is dynamic and should be updated as new data become available. It is particularly important to integrate the current aircraft data base with data obtained from the Space Shuttle and other space systems since subsystems unique to a space vehicle require data not available from aircraft. This research only addressed the major subsystems on the vehicle.
Can we use the equivalent sphere model to approximate organ doses in space radiation environments?
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei
For space radiation protection one often calculates the dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to approximate the BFO dose. However, previous studies have concluded that a 5 cm sphere gives a very different dose from the exact BFO dose. One study concludes that a 9 cm sphere is a reasonable approximation for the BFO dose in solar particle event (SPE) environments. In this study we investigate the reason behind these observations and extend earlier studies by studying whether the BFO, eyes or the skin can be approximated by the equivalent sphere model in different space radiation environments such as solar particle events and galactic cosmic ray (GCR) environments. We take the thickness distribution functions of the organs from the CAM (Computerized Anatomical Man) model, then use a deterministic radiation transport code to calculate organ doses in different space radiation environments. The organ doses have been evaluated with a water or aluminum shielding from 0 to 20 g/cm2. We then compare these exact doses with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we propose to use a modified equivalent sphere model with two radius parameters to represent the skin or eyes. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for the BFO but is unacceptable for the eyes or the skin. For galactic cosmic ray environments, the equivalent sphere model with one organ-specific radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eyes or the skin, but is unacceptable for the dose of the eyes or the skin.
The BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eyes, and 3.5 to 5.6 cm for the skin; while the radius parameters are between 10 and 13 cm for the BFO dose. In the proposed modified equivalent sphere model, the range of each of the two radius parameters for the skin (or eyes) is much tighter than in the equivalent sphere model with one radius parameter. Our results thus show that the equivalent sphere model works better in galactic cosmic ray environments than in solar particle events. The model works well or marginally well for the BFO but usually does not work for the eyes or the skin. A modified model with two radius parameters works much better in approximating the dose and dose equivalent in the eyes or the skin.
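The logic of the equivalent-sphere comparison can be illustrated with a deliberately simplified toy: assume dose falls off as a single exponential in shielding depth, average it over a made-up organ thickness distribution, and solve for the single radius that reproduces that average. The attenuation coefficient and thickness range below are arbitrary illustrations, not CAM or transport-code values.

```python
import math
import random

mu = 0.08  # assumed attenuation coefficient per g/cm^2 -- illustrative only
rng = random.Random(0)

# Hypothetical organ self-shielding thickness distribution (g/cm^2),
# standing in for the CAM thickness distribution function.
thicknesses = [rng.uniform(3.0, 18.0) for _ in range(1000)]

# "Exact" organ dose: average the exponentially attenuated dose over
# the thickness distribution (unit incident dose).
organ_dose = sum(math.exp(-mu * d) for d in thicknesses) / len(thicknesses)

# Equivalent-sphere radius: the single depth giving the same dose.
r_eq = -math.log(organ_dose) / mu
mean_depth = sum(thicknesses) / len(thicknesses)
print(round(r_eq, 2), round(mean_depth, 2))
```

Because attenuation is convex in depth, the equivalent radius comes out smaller than the mean thickness, and it shifts whenever the attenuation behavior (i.e., the radiation environment) changes, which is one intuition for why a single fixed radius cannot serve all environments.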
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces offer a way to move beyond discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new technique arises. The present study aims at developing such a technique and applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. To automate the workflow, we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC).
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) in applying the same strategy to the more complex, dynamic model r.avaflow.
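The AUROC performance indicator used above to rank parameter subranges reduces to the rank-sum (Mann-Whitney) formula; a minimal sketch on invented per-pixel values follows. The III scores and impact labels are made up for illustration, not r.randomwalk output.

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # Fraction of (impacted, non-impacted) pixel pairs ranked correctly.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical impact indicator index (III) per pixel and observed impact
# (1 = pixel affected by the documented event).
iii    = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
impact = [1,   1,   0,   1,   0,   0]
print(auroc(iii, impact))
```

An AUROC of 1.0 would mean every impacted pixel outranks every non-impacted one; 0.5 is chance level, so subrange combinations are compared by how far above 0.5 they land.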
Revealing the jet substructure in a compressed spectrum of new physics
NASA Astrophysics Data System (ADS)
Han, Chengcheng; Park, Myeonghun
2016-07-01
The physics beyond the Standard Model with parameters of the compressed spectrum is well motivated both on the theory side and for phenomenological reasons, especially related to dark matter phenomenology. In this letter, we propose a method to tag soft final-state particles from the decay of a new particle in this parameter space. By taking a supersymmetric gluino search as an example, we demonstrate how the Large Hadron Collider experimental collaborations can improve sensitivity in these nontrivial search regions.
Patchy screening of the cosmic microwave background by inhomogeneous reionization
NASA Astrophysics Data System (ADS)
Gluscevic, Vera; Kamionkowski, Marc; Hanson, Duncan
2013-02-01
We derive a constraint on patchy screening of the cosmic microwave background from inhomogeneous reionization using off-diagonal TB and TT correlations in WMAP-7 temperature/polarization data. We interpret this as a constraint on the rms optical-depth fluctuation Δτ as a function of a coherence multipole LC. We relate these parameters to a comoving coherence scale, or bubble size, RC, in a phenomenological model where reionization is instantaneous but occurs on a crinkly surface, and also to the bubble size in a model of “Swiss cheese” reionization where bubbles of fixed size are spread over some range of redshifts. The current WMAP data are still too weak, by several orders of magnitude, to constrain reasonable models, but forthcoming Planck and future EPIC data should begin to approach interesting regimes of parameter space. We also present constraints on the parameter space imposed by the recent results from the EDGES experiment.
NASA Astrophysics Data System (ADS)
Valach, F.; Revallo, M.; Hejda, P.; Bochníček, J.
2010-12-01
Our modern society, with its advanced technology, is becoming increasingly vulnerable to disturbances of the Earth's system originating in explosive processes on the Sun. Coronal mass ejections (CMEs), blasted into interplanetary space as gigantic clouds of ionized gas, can hit Earth within a few hours or days and cause, among other effects, geomagnetic storms - perhaps the best known manifestation of solar wind interaction with Earth's magnetosphere. Solar energetic particles (SEP), accelerated to near-relativistic energies during large solar storms, can arrive at the Earth's orbit in as little as a few minutes and pose a serious risk to astronauts traveling through interplanetary space. These and many other threats are the reason why experts pay increasing attention to space weather and its predictability. Research on space weather typically requires examining a large number of parameters which are interrelated in a complex non-linear way. One way to cope with such a task is to use an artificial neural network, a tool originally developed for artificial intelligence, for space weather modeling. In our contribution, we focus on practical aspects of applying neural networks to modeling and forecasting selected space weather parameters.
On the Tetragonal Forms of KMo4O6
NASA Astrophysics Data System (ADS)
McCarroll, W. H.; Ramanujachary, K. V.; Greenblatt, M.; Marsh, Richard E.
1995-06-01
A reexamination of the X-ray diffraction data for the tetragonal form of KMo4O6 prepared by fused salt electrolysis leads to the conclusion that the crystal structure is better described using space group P4/mbm and not P4¯ as previously reported. However, refinement in the new space group does not result in any significant changes in the atomic arrangement. Possible reasons for the significant difference between the c lattice parameter of this form of KMo4O6 and that prepared at high pressures are also discussed.
Numerical simulation of lava flows: Applications to the terrestrial planets
NASA Technical Reports Server (NTRS)
Zimbelman, James R.; Campbell, Bruce A.; Kousoum, Juliana; Lampkin, Derrick J.
1993-01-01
Lava flows are the visible expression of the extrusion of volcanic materials on a variety of planetary surfaces. A computer program described by Ishihara et al. appears to be well suited for application to different environments, and we have undertaken tests to evaluate their approach. Our results are somewhat mixed; the program does reproduce reasonable lava flow behavior in many situations, but we have encountered some conditions common to planetary environments for which the current program is inadequate. Here we present our initial efforts to identify the 'parameter space' for reasonable numerical simulations of lava flows.
Space Shuttle and Space Station Radio Frequency (RF) Exposure Analysis
NASA Technical Reports Server (NTRS)
Hwu, Shian U.; Loh, Yin-Chung; Sham, Catherine C.; Kroll, Quin D.
2005-01-01
This paper outlines the modeling techniques and important parameters to define a rigorous but practical procedure that can verify the compliance of RF exposure with the NASA standards for astronauts and electronic equipment. The electromagnetic modeling techniques are applied to analyze RF exposure in Space Shuttle and Space Station environments with reasonable computing time and resources. The modeling techniques are capable of taking into account the field interactions with Space Shuttle and Space Station structures. The obtained results illustrate the multipath effects due to the presence of the space vehicle structures. It is necessary to include the field interactions with the space vehicle in the analysis for an accurate assessment of the RF exposure. Based on the obtained results, RF keep-out zones are identified for appropriate operational scenarios, flight rules and necessary RF transmitter constraints to ensure a safe operating environment and mission success.
Analysis and trade-off studies of large lightweight mirror structures. [large space telescope
NASA Technical Reports Server (NTRS)
Soosaar, K.; Grin, R.; Ayer, F.
1975-01-01
A candidate mirror, hexagonally lightweighted, is analyzed under various loadings using as complete a procedure as possible. Successive simplifications are introduced and compared to an original analysis. A model which is a reasonable compromise between accuracy and cost is found and is used for making trade-off studies of the various structural parameters of the lightweighted mirror.
Reasoning from non-stationarity
NASA Astrophysics Data System (ADS)
Struzik, Zbigniew R.; van Wijngaarden, Willem J.; Castelo, Robert
2002-11-01
Complex real-world (biological) systems often exhibit intrinsically non-stationary behaviour of their temporal characteristics. We discuss local measures of scaling which can capture and reveal changes in a system's behaviour. Such measures offer increased insight into a system's behaviour and are superior to global, spectral characteristics like the multifractal spectrum. They are, however, often inadequate for fully understanding and modelling the phenomenon. We illustrate an attempt to capture complex model characteristics by analysing (multiple order) correlations in a high dimensional space of parameters of the (biological) system being studied. Both temporal information, among others local scaling information, and external descriptors/parameters, possibly influencing the system's state, are used to span the search space investigated for the presence of a (sub-)optimal model. As an example, we use fetal heartbeat monitored during labour.
Chasing a Comet with a Solar Sail
NASA Technical Reports Server (NTRS)
Stough, Robert W.; Heaton, Andrew F.; Whorton, Mark S.
2008-01-01
Solar sail propulsion systems enable a wide range of missions that require constant thrust or high delta-V over long mission times. One particularly challenging mission type is a comet rendezvous mission. This paper presents optimal low-thrust trajectory designs for a range of sailcraft performance metrics and mission transit times that enables a comet rendezvous mission. These optimal trajectory results provide a trade space which can be parameterized in terms of mission duration and sailcraft performance parameters such that a design space for a small satellite comet chaser mission is identified. These results show that a feasible space exists for a small satellite to perform a comet chaser mission in a reasonable mission time.
Key parameters design of an aerial target detection system on a space-based platform
NASA Astrophysics Data System (ADS)
Zhu, Hanlu; Li, Yejin; Hu, Tingliang; Rao, Peng
2018-02-01
To ensure the flight safety of aerial aircraft and avoid a recurrence of aircraft collisions, a method of multi-information fusion is proposed to design the key parameters for aircraft target detection on a space-based platform. The key parameters of detection waveband and spatial resolution were determined using the target-background absolute contrast, target-background relative contrast, and signal-to-clutter ratio. This study also used the signal-to-interference ratio to analyze system performance. Key parameters are obtained through the simulation of a specific aircraft. The simulation results show that the boundary ground sampling distances are 30 and 35 m in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands, respectively, for most aircraft detection, and the most reasonable detection wavebands are 3.4 to 4.2 μm and 4.35 to 4.5 μm in the MWIR band, and 9.2 to 9.8 μm in the LWIR band. We also found that the direction of detection has a great impact on the detection efficiency, especially in the MWIR bands.
Universal dynamical properties preclude standard clustering in a large class of biochemical data.
Gomez, Florian; Stoop, Ralph L; Stoop, Ruedi
2014-09-01
Clustering of chemical and biochemical data based on observed features is a central cognitive step in the analysis of chemical substances, in particular in combinatorial chemistry, or of complex biochemical reaction networks. Often, for reasons unknown to the researcher, this step produces disappointing results. Once the sources of the problem are known, improved clustering methods might revitalize the statistical approach of compound and reaction search and analysis. Here, we present a generic mechanism that may be at the origin of many clustering difficulties. The variety of dynamical behaviors that can be exhibited by complex biochemical reactions on variation of the system parameters are fundamental system fingerprints. In parameter space, shrimp-like or swallow-tail structures separate parameter sets that lead to stable periodic dynamical behavior from those leading to irregular behavior. We work out the genericity of this phenomenon and demonstrate novel examples for their occurrence in realistic models of biophysics. Although we elucidate the phenomenon by considering the emergence of periodicity in dependence on system parameters in a low-dimensional parameter space, the conclusions from our simple setting are shown to continue to be valid for features in a higher-dimensional feature space, as long as the feature-generating mechanism is not too extreme and the dimension of this space is not too high compared with the amount of available data. For online versions of super-paramagnetic clustering see http://stoop.ini.uzh.ch/research/clustering. Supplementary data are available at Bioinformatics online.
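The periodic windows whose boundaries form the shrimp-like structures in two-parameter maps can be illustrated in the simplest one-parameter setting, the logistic map: a crude recurrence test separates parameter values with stable periodic orbits from chaotic ones. The tolerance and iteration counts below are ad hoc choices for illustration.

```python
def is_periodic(r, n_transient=2000, n_check=200, tol=1e-6):
    """Crude test for periodic (vs chaotic) dynamics of the logistic map."""
    x = 0.5
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(n_check):
        x = r * x * (1.0 - x)
        orbit.append(x)
    # Periodic if the orbit revisits its first point within tolerance.
    return any(abs(v - orbit[0]) < tol for v in orbit[1:])

# Scan a slice of parameter space: periodic windows interleave with chaos.
print(is_periodic(3.2), is_periodic(3.83), is_periodic(3.9))
```

r = 3.2 sits in the period-2 regime and r = 3.83 in the period-3 window, while r = 3.9 is chaotic; sweeping r finely reproduces the interleaving of regular and irregular behavior that the abstract argues can confound feature-based clustering.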
Breakpoint Forcing Revisited: Phase Between Forcing and Response
NASA Astrophysics Data System (ADS)
Contardo, S.; Symonds, G.; Dufois, F.
2018-02-01
Using the breakpoint forcing model, for long wave generation in the surf zone, expressions for the phase difference between the breakpoint-forced long waves and the incident short wave groups are obtained. Contrary to assumptions made in previous studies, the breakpoint-forced long waves and incident wave groups are not in phase and outgoing breakpoint-forced long waves and incident wave groups are not π out of phase. The phase between the breakpoint-forced long wave and the incident wave group is shown to depend on beach geometry and wave group parameters. The breakpoint-forced incoming long wave lags behind the wave group, by a phase smaller than π/2. The phase lag decreases as the beach slope decreases and the group frequency increases, approaching approximately π/16 within reasonable limits of the parameter space. The phase between the breakpoint-forced outgoing long wave and the wave group is between π/2 and π and it increases as the beach slope decreases and the group frequency increases, approaching 15π/16 within reasonable limits of the parameter space. The phase between the standing long wave (composed of the incoming long wave and its reflection) and the incident wave group tends to zero when the wave group is long compared to the surf zone width. These results clarify the phase relationships in the breakpoint forcing model and provide a new base for the identification of breakpoint forcing signal from observations, laboratory experiments and numerical modeling.
NASA Astrophysics Data System (ADS)
Zhang, Chuan-Xin; Yuan, Yuan; Zhang, Hao-Wei; Shuai, Yong; Tan, He-Ping
2016-09-01
Considering features of stellar spectral radiation and sky surveys, we established a computational model for stellar effective temperatures, detected angular parameters and gray rates. Using known stellar flux data in some bands, we estimated stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization (SPSO). We first verified the reliability of SPSO, and then determined reasonable parameters that produced highly accurate estimates under certain gray deviation levels. Finally, we calculated 177 860 stellar effective temperatures and detected angular parameters using data from the Midcourse Space Experiment (MSX) catalog. These derived stellar effective temperatures were found to be accurate when compared to known values from the literature. This research makes full use of catalog data and presents an original technique for studying stellar characteristics. It proposes a novel method for calculating stellar effective temperatures and detecting angular parameters, and provides theoretical and practical data for finding information about radiation in any band.
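A minimal global-best particle swarm optimizer of the family SPSO belongs to can be sketched as follows. The two-parameter quadratic objective is a stand-in for the real flux-fitting residual (scaled "temperature" and "angular parameter"), and the inertia and acceleration coefficients are conventional defaults, not the paper's settings.

```python
import random

def pso(f, bounds, n_particles=30, n_iter=200, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])   # clamp to the search bounds
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy residual: recover a scaled "temperature" 5.8 and "angular parameter" 1.0.
target = (5.8, 1.0)
f = lambda p: (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
best, best_val = pso(f, bounds=[(0.0, 10.0), (0.0, 5.0)])
print([round(v, 2) for v in best])
```

The swarm recovers both parameters to high precision on this smooth objective; real band-flux residuals are noisier, which is where the stochastic variants earn their keep.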
NASA Astrophysics Data System (ADS)
Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae
2018-02-01
This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
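The bidirectional heuristic itself is not given as pseudocode here, but a far simpler relative, a greedy shelf heuristic for the same rectangular packing setting, shows the general shape of such placement algorithms. All rectangle and bin dimensions below are invented for illustration.

```python
def shelf_pack(rects, bin_w, bin_h):
    """Greedy shelf heuristic: fill the bin shelf by shelf.

    Rectangles are sorted by decreasing height and placed left to right
    on the current shelf; a new shelf opens when the current one is full.
    Returns the placements and the fraction of bin area used.
    """
    placements, x, y, shelf_h, used = [], 0, 0, 0, 0
    for w, h in sorted(rects, key=lambda r: -r[1]):
        if x + w > bin_w:              # current shelf full: open a new one
            y += shelf_h
            x, shelf_h = 0, 0
        if y + h > bin_h or w > bin_w:
            continue                   # rectangle does not fit; skip it
        placements.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
        used += w * h
    return placements, used / (bin_w * bin_h)

rects = [(4, 3), (3, 3), (5, 2), (2, 2), (6, 1), (3, 1)]
placed, utilization = shelf_pack(rects, bin_w=10, bin_h=6)
print(len(placed), round(utilization, 3))  # 6 rectangles placed, ~73% used
```

Shelf packing wastes space above short rectangles on tall shelves; filling from both sides of the residual space, as the bidirectional heuristic does, is one way to recover part of that loss.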
Highly light-weighted ZERODUR mirrors
NASA Astrophysics Data System (ADS)
Behar-Lafenetre, Stéphanie; Lasic, Thierry; Viale, Roger; Mathieu, Jean-Claude; Ruch, Eric; Tarreau, Michel; Etcheto, Pierre
2017-11-01
Due to increasingly stringent requirements for observation missions, the diameter of primary mirrors for space telescopes is increasing. The difficulty is then to achieve a design stiff enough to withstand launch loads and keep a reasonable mass while providing high opto-mechanical performance. Among the possible solutions, Thales Alenia Space France has investigated the optimization of ZERODUR mirrors. Indeed this material, although brittle, is very well mastered and its characteristics well known. Moreover, its thermo-elastic properties (near-zero CTE) are as yet unequalled, in particular at ambient temperature. Finally, this material can be polished down to very low roughness without any coating. Light-weighting can be achieved by two different means: optimizing manufacturing parameters, optimizing the design, or both. Manufacturing parameters such as wall and optical-face thickness have been improved and tested on representative breadboards defined on the basis of SAGEM-REOSC and Thales Alenia Space France expertise and realized by SAGEM-REOSC. In the frame of CNES Research and Technology activities, specific mass has been decreased down to 36 kg/m2. Moreover, the SNAP study dealt with a 2 m diameter primary mirror. The design was optimized by Thales Alenia Space France while using classical manufacturing parameters, thus ensuring feasibility and controlling costs. Mass was decreased down to 60 kg/m2 for a gravity effect of 52 nm. It is thus demonstrated that high opto-mechanical performance can be guaranteed with large, highly lightweighted ZERODUR mirrors.
Effect of friction stir welding parameters on defect formation
NASA Astrophysics Data System (ADS)
Tarasov, S. Yu.; Rubtsov, V. E.; Eliseev, A. A.; Kolubaev, E. A.; Filippov, A. V.; Ivanov, A. N.
2015-10-01
Friction stir welding is a promising method for manufacturing parts in the automotive, aviation, and space industries. One of the major problems is the formation of defects in and around the welding zone. Such defects are the main reason for failure of the joint. A possible way to obtain defect-free welded joints is the selection of correct welding parameters. Experimental results describing the effect of friction stir welding process parameters on defects in welded joints of aluminum alloy AMg5M are presented. The weld joint defects have been characterized using non-destructive radioscopic and ultrasonic phased-array methods. It is shown how the type and size of defects determine the welded joint strength.
He, Yi; Xiao, Yi; Liwo, Adam; Scheraga, Harold A
2009-10-01
We explored the energy-parameter space of our coarse-grained UNRES force field for large-scale ab initio simulations of protein folding, to obtain good initial approximations for hierarchical optimization of the force field with new virtual-bond-angle bending and side-chain-rotamer potentials which we recently introduced to replace the statistical potentials. 100 sets of energy-term weights were generated randomly, and good sets were selected by carrying out replica-exchange molecular dynamics simulations of two peptides with a minimal alpha-helical and a minimal beta-hairpin fold, respectively: the tryptophan cage (PDB code: 1L2Y) and tryptophan zipper (PDB code: 1LE1). Eight sets of parameters produced native-like structures of these two peptides. These eight sets were tested on two larger proteins: the engrailed homeodomain (PDB code: 1ENH) and FBP WW domain (PDB code: 1E0L); two sets were found to produce native-like conformations of these proteins. These two sets were tested further on a larger set of nine proteins with alpha or alpha + beta structure and found to locate native-like structures of most of them. These results demonstrate that, in addition to finding reasonable initial starting points for optimization, an extensive search of parameter space is a powerful method to produce a transferable force field. Copyright 2009 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
He, Juan; Xu, Shuai; Ye, Liu
2016-05-01
We investigate quantum correlation via measurement-induced nonlocality (MIN) for Dirac particles in Garfinkle-Horowitz-Strominger (GHS) dilaton space-time. It is shown that the physically accessible quantum correlation decreases monotonically as the dilaton parameter increases. Unlike the case of scalar fields, the physically accessible correlation is not zero when the Hawking temperature is infinite, owing to the Pauli exclusion principle and the differences between Fermi-Dirac and Bose-Einstein statistics. Meanwhile, the boundary of MIN related to Bell violation is derived, which indicates that MIN is more general than the quantum nonlocality captured by violation of a Bell inequality. As a by-product, a tenable quantitative relation for MIN redistribution is obtained whatever the dilaton parameter is. In addition, it is worth emphasizing that the underlying reason why the physically accessible correlation and mutual information decrease is that they are redistributed to the physically inaccessible regions.
Self-Adaptive Stepsize Search Applied to Optimal Structural Design
NASA Astrophysics Data System (ADS)
Nolle, L.; Bland, J. A.
Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure, and hence the weight of its constituent members, has to be as low as possible for economic reasons without violating any of the load constraints. Design spaces are usually vast and the computational costs for analyzing a single design are usually high. Therefore, not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes a drawback of modern search heuristics, i.e., the need to first find a set of optimum control parameter settings for the problem at hand. In this work, SASS outperforms simulated annealing, genetic algorithms, tabu search and ant colony optimization.
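As an illustration of the single-control-parameter idea, here is a minimal Python sketch of a self-adaptive stepsize hill climber on a toy objective. The adaptation factors and the spherical test function are assumptions for illustration; this is not the SASS implementation of Nolle and Bland, nor the 25-bar problem.

```python
import random

def sass_minimize(f, x0, iters=2000, seed=1):
    """Minimal self-adaptive stepsize search: the step size is the only
    control state and it adapts itself during the run (illustrative)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    step = 1.0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            step *= 1.2   # success: widen the search
        else:
            step *= 0.98  # failure: contract toward local refinement
    return x, fx

# Toy weight-minimization surrogate: a convex bowl standing in for
# the structural objective (hypothetical, for illustration only).
sphere = lambda v: sum(vi * vi for vi in v)
x, fx = sass_minimize(sphere, [5.0, -3.0, 2.0])
```

The self-adaptation removes the need to pre-tune a step size: the step grows while moves succeed and shrinks while they fail, so it tracks the local scale of the landscape.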
The extended BLMSSM with a 125 GeV Higgs boson and dark matter
NASA Astrophysics Data System (ADS)
Zhao, Shu-Min; Feng, Tai-Fu; Ning, Guo-Zhu; Chen, Jian-Bin; Zhang, Hai-Bin; Dong, Xing Xing
2018-04-01
To extend the BLMSSM, we not only add exotic Higgs superfields (Φ _{NL},φ_{NL}) to make the exotic leptons heavy, but also introduce the superfields ( Y,Y^' ) that couple with the leptons and exotic leptons at tree level. The resulting model, called the EBLMSSM, differs from the BLMSSM especially in the exotic sleptons (leptons) and exotic sneutrinos (neutrinos). We deduce the mass matrices and the needed couplings in this model. To constrain the parameter space, the Higgs boson mass m_{h^0} and the processes h^0→ γ γ , h^0→ VV, V=(Z,W) are studied in the EBLMSSM. With the assumed parameter space, we obtain reasonable numerical results consistent with the Higgs data from ATLAS and CMS. As a cold dark matter candidate, the relic density of the lightest mass eigenstate of the Y and Y' mixing is also studied.
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as petroleum reservoir exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, given the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually a serious problem. For shallow subsurface applications, the characterization can be very complicated given the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is therefore warranted to reduce the dimension of the parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose a stochastic framework that integrates the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly.
The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties, then consider the memory function as a new prior and generate samples from it for further updating when more geophysical data is available. We applied this approach for deep oil reservoir characterization and for shallow subsurface flow monitoring. The model reduction approach reliably helps reduce the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the “memory function” applied in the Bayesian inversion.
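The "memory function" idea, in which each posterior is stored and reused as the prior for the next batch of data, can be sketched with sequential Bayesian updating on a discrete parameter grid. The grid, the Gaussian likelihoods, and the soil-property interpretation below are illustrative assumptions, not the authors' actual inversion.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One Bayesian update on a discrete parameter grid: the returned
    posterior acts as the 'memory function', i.e., the prior for the
    next batch of geophysical data (a schematic of the idea)."""
    post = prior * likelihood
    return post / post.sum()

# Hypothetical 1-D grid over a soil property; two successive surveys.
grid = np.linspace(0.0, 1.0, 101)
prior = np.ones_like(grid) / grid.size            # flat initial prior
like1 = np.exp(-0.5 * ((grid - 0.6) / 0.2) ** 2)  # first survey
like2 = np.exp(-0.5 * ((grid - 0.5) / 0.1) ** 2)  # later, sharper survey
memory = bayes_update(prior, like1)   # store everything learned so far
memory = bayes_update(memory, like2)  # reuse it as the new prior
```

Each update folds the new data into the stored distribution, so inversions at successive times stay consistent with everything learned before, rather than being computed independently.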
NASA Technical Reports Server (NTRS)
King, James A.
1987-01-01
The goal is to explain Case-Based Reasoning as a vehicle to establish knowledge-based systems based on experimental reasoning for possible space applications. This goal will be accomplished through an examination of reasoning based on prior experience in a sample domain, and also through a presentation of proposed space applications which could utilize Case-Based Reasoning techniques.
Intrinsic measurement errors for the speed of light in vacuum
NASA Astrophysics Data System (ADS)
Braun, Daniel; Schneiter, Fabienne; Fischer, Uwe R.
2017-09-01
The speed of light in vacuum, one of the most important and precisely measured natural constants, is fixed by convention to c=299 792 458 m s-1 . Advanced theories predict possible deviations from this universal value, or even quantum fluctuations of c. Combining arguments from quantum parameter estimation theory and classical general relativity, we here establish rigorously the existence of lower bounds on the uncertainty to which the speed of light in vacuum can be determined in a given region of space-time, subject to several reasonable restrictions. They provide a novel perspective on the experimental falsifiability of predictions for the quantum fluctuations of space-time.
Binzoni, Tiziano; Torricelli, Alessandro; Giust, Remo; Sanguinetti, Bruno; Bernhard, Paul; Spinelli, Lorenzo
2014-01-01
A bone tissue phantom prototype that allows testing of optical flowmeters at large interoptode spacings, such as laser-Doppler flowmetry or diffuse correlation spectroscopy, has been developed by a 3D-stereolithography technique. It has been demonstrated that complex tissue vascular systems of any geometrical shape can be conceived. The absorption coefficient, reduced scattering coefficient and refractive index of the optical phantom have been measured to ensure that the optical parameters reasonably reproduce real human bone tissue in vivo. An experimental demonstration of a possible use of the optical phantom, utilizing a laser-Doppler flowmeter, is also presented.
Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel
2016-07-20
A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
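A first-order grid-point neighborhood structure of the kind described corresponds to a sparse precision matrix. The sketch below builds such a Gaussian Markov random field precision matrix for a small grid; the diagonal regularization `kappa` is an assumption added to keep the matrix positive definite, not a parameter from the paper.

```python
import numpy as np

def gmrf_precision(nx, ny, kappa=0.1):
    """Precision matrix Q of a first-order GMRF on an nx-by-ny grid:
    Q[i,j] = -1 for grid neighbors, and the diagonal accumulates the
    neighbor count plus kappa (kappa > 0 makes Q positive definite)."""
    n = nx * ny
    Q = np.zeros((n, n))
    for ix in range(nx):
        for iy in range(ny):
            i = ix * ny + iy
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                jx, jy = ix + dx, iy + dy
                if 0 <= jx < nx and 0 <= jy < ny:
                    Q[i, jx * ny + jy] = -1.0
                    Q[i, i] += 1.0
            Q[i, i] += kappa
    return Q

Q = gmrf_precision(4, 4)
```

Conditional independence beyond the first-order neighborhood shows up directly as zeros in Q, which is what makes evaluating the field-dependent statistic tractable on large grids.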
Linear Space-Variant Image Restoration of Photon-Limited Images
1978-03-01
levels of performance of the wavefront sensor. The parameter ^ represents the residual rms wavefront error (measurement noise plus fitting error) ... known to be optimum only when the signal and noise are uncorrelated stationary random processes and when the noise statistics are Gaussian. In the ... regime of photon-limited imaging, the noise is non-Gaussian and signal-dependent, and it is therefore reasonable to assume that some form of linear
Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam Ali
2006-01-01
A knowledge of the appropriate values of the parameters of a genetic algorithm (GA) such as the population size, the shrunk search space containing the solution, crossover and mutation probabilities is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such a preprocessing is not only fast but also enables us to get the global optimal solution and its reasonably narrow error bounds with a high degree of confidence.
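One way to picture an n-dimensional bisection preprocessor that shrinks the search space before the GA runs is the following sketch: it repeatedly halves the longest axis of the box and keeps the half whose best random sample is lower. The sampling rule, round count, and test function are assumptions for illustration, not the authors' scheme.

```python
import random

def shrink_search_space(f, lo, hi, rounds=20, samples=32, seed=0):
    """Hypothetical n-dimensional bisection preprocessor: split the box
    along its longest axis and keep the more promising half, shrinking
    the region later handed to a genetic algorithm."""
    rng = random.Random(seed)
    lo, hi = list(lo), list(hi)

    def best(lo_, hi_):
        # lowest objective value among random samples in the sub-box
        return min(f([rng.uniform(a, b) for a, b in zip(lo_, hi_)])
                   for _ in range(samples))

    for _ in range(rounds):
        d = max(range(len(lo)), key=lambda i: hi[i] - lo[i])  # longest axis
        mid = 0.5 * (lo[d] + hi[d])
        left_hi, right_lo = hi[:], lo[:]
        left_hi[d], right_lo[d] = mid, mid
        if best(lo, left_hi) <= best(right_lo, hi):
            hi = left_hi   # keep the lower half along axis d
        else:
            lo = right_lo  # keep the upper half
    return lo, hi

sphere = lambda v: sum(x * x for x in v)
lo, hi = shrink_search_space(sphere, [-10.0] * 3, [6.0] * 3)
```

Each round halves the volume of the search box, so the cost is polynomial in the number of rounds while the region given to the GA shrinks geometrically.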
Jiang, Wen; Cao, Ying; Yang, Lin; He, Zichang
2017-08-28
Specific emitter identification plays an important role in contemporary military affairs. However, most existing specific emitter identification methods have not taken into account the processing of uncertain information. This paper therefore proposes a time-space domain information fusion method based on Dempster-Shafer evidence theory, which is able to deal with uncertain information in the process of specific emitter identification. In this approach, each radar generates a body of evidence based on the information it obtains, and the main task is to fuse the multiple bodies of evidence to get a reasonable result. Within the framework of a recursive centralized fusion model, the proposed method incorporates a correlation coefficient, which measures the relevance between bodies of evidence, and a quantum mechanical approach, which is based on the parameters of the radar itself. The simulation results of an illustrative example demonstrate that the proposed method can effectively deal with uncertain information and reach a reasonable recognition result.
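The evidence-fusion core of such a method is Dempster's rule of combination, which the sketch below implements for mass functions over a small frame of discernment. The two "radar" mass assignments are hypothetical examples, not data from the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions; focal
    elements are frozensets over the frame of discernment, and each
    mass function sums to 1. Conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    k = 1.0 - conflict
    return {s: m / k for s, m in combined.items()}

A, B = frozenset("A"), frozenset("B")
AB = A | B                       # ignorance: either emitter A or B
m1 = {A: 0.6, AB: 0.4}           # radar 1: leans toward emitter A
m2 = {A: 0.5, B: 0.3, AB: 0.2}   # radar 2: more uncertain
fused = dempster_combine(m1, m2)
```

Fusing the two bodies of evidence concentrates belief on emitter A while retaining a small mass on the ignorance set, which is how the theory represents remaining uncertainty.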
Space-based laser-driven MHD generator: Feasibility study
NASA Technical Reports Server (NTRS)
Choi, S. H.
1986-01-01
The feasibility of a laser-driven MHD generator, as a candidate receiver for a space-based laser power transmission system, was investigated. On the basis of reasonable parameters obtained from the literature, a model of the laser-driven MHD generator was developed under the assumption of steady, turbulent, two-dimensional flow. This assumption rests on the continuous and steady generation of plasmas by exposure to the continuous-wave laser beam, which induces a steady back pressure that enables the medium to flow steadily. The model considered here took the turbulent nature of plasmas into account in the two-dimensional geometry of the generator. For these conditions, with the plasma parameters defining the thermal conductivity, viscosity, and electrical conductivity of the plasma flow, a generator efficiency of 53.3% was calculated. If turbulent effects and nonequilibrium ionization are taken into account, the efficiency is 43.2%. The study shows that the laser-driven MHD system has potential as a laser power receiver for space applications because of its high energy conversion efficiency, high energy density and relatively simple mechanism compared with other energy conversion cycles.
Determination of Thermal State of Charge in Solar Heat Receivers
NASA Technical Reports Server (NTRS)
Glakpe, E. K.; Cannon, J. N.; Hall, C. A., III; Grimmett, I. W.
1996-01-01
The research project at Howard University seeks to develop analytical and numerical capabilities to study heat transfer and fluid flow characteristics, and to predict the performance of solar heat receivers for space applications. Specifically, the study seeks to elucidate the effects of internal and external thermal radiation and of geometrical and applicable dimensionless parameters on the overall heat transfer in space solar heat receivers. Over the last year, a procedure for the characterization of the state-of-charge (SOC) in solar heat receivers for space applications has been developed. By identifying the various factors that affect the SOC, a dimensional analysis is performed, resulting in a number of dimensionless groups of parameters. Although not accomplished during the first phase of the research, data generated from a thermal simulation program can be used to determine values of the dimensionless parameters and the state-of-charge, and thereby obtain a correlation for the SOC. The simulation program selected for this purpose is HOTTube, a thermal numerical computer code based on a transient, time-explicit, axisymmetric model of the total solar heat receiver. Simulation results obtained with the computer program are presented for the minimum and maximum insolation orbits. In the absence of any validation of the code against experimental data, results from HOTTube appear qualitatively reasonable in representing the physical situations modeled.
Interpolation/extrapolation technique with application to hypervelocity impact of space debris
NASA Technical Reports Server (NTRS)
Rule, William K.
1992-01-01
A new technique for the interpolation/extrapolation of engineering data is described. The technique easily allows for the incorporation of additional independent variables, and the most suitable data in the data base is automatically used for each prediction. The technique provides diagnostics for assessing the reliability of the prediction. Two sets of predictions made for known 5-degree-of-freedom, 15-parameter functions using the new technique produced an average coefficient of determination of 0.949. Here, the technique is applied to the prediction of damage to the Space Station from hypervelocity impact of space debris. A new set of impact data is presented for this purpose. Reasonable predictions for bumper damage were obtained, but predictions of pressure wall and multilayer insulation damage were poor.
New method for rekindling the nonlinear solitary waves in Maxwellian complex space plasma
NASA Astrophysics Data System (ADS)
Das, G. C.; Sarma, Ridip
2018-04-01
Our interest is to study nonlinear wave phenomena in complex plasma constituents with Maxwellian electrons and ions. The main reason for this consideration is to exhibit the effects of dust charge fluctuations on acoustic modes evaluated by the use of a new method. A special method, the (G'/G)-expansion method, has been developed to yield the coherent features of nonlinear waves, augmented through the derivation of a Korteweg-de Vries equation, and successfully reveals the different natures of solitons recognized in space plasmas. Evolutions are shown with input of appropriate typical plasma parameters to support our theoretical observations in space plasmas. All conclusions are in good accordance with actual occurrences and could be of interest for further investigations in experiments and satellite observations in space. In this paper, we present not only a model that exhibits nonlinear solitary wave propagation but also a new mathematical method for its execution.
Mathematical Model of Three Species Food Chain Interaction with Mixed Functional Response
NASA Astrophysics Data System (ADS)
Ws, Mada Sanjaya; Mohd, Ismail Bin; Mamat, Mustafa; Salleh, Zabidin
In this paper, we study a mathematical model of ecology with a tritrophic food chain composed of a classical Lotka-Volterra functional response for prey and predator, and a Holling type-III functional response for predator and super predator. There are two equilibrium points of the system. In the parameter space, there are passages from instability to stability, which are called Hopf bifurcation points. For the first equilibrium point, it is possible to find bifurcation points analytically and to prove that the system has periodic solutions around these points. Furthermore, the dynamical behaviors of this model are investigated. For biologically reasonable parameter values, the model exhibits stable and unstable periodic solutions and limit cycles. The dynamical behavior is found to be very sensitive to the parameter values as well as to the parameters of practical life. Computer simulations are carried out to explain the analytical findings.
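A tritrophic chain of this form can be sketched as three coupled ODEs: Lotka-Volterra coupling between prey and predator, and a Holling type-III response between predator and super predator. All parameter values and the forward-Euler integration below are illustrative assumptions, not the paper's model coefficients.

```python
def food_chain(state, a=1.0, b=0.5, c=0.3, d=0.2, e=0.8, f=0.4, g=0.1):
    """Toy tritrophic food chain (illustrative parameter names, not the
    paper's): Lotka-Volterra coupling between prey x and predator y,
    Holling type-III coupling between predator y and super predator z."""
    x, y, z = state
    holling3 = e * y ** 2 / (1.0 + f * y ** 2)  # type-III response
    dx = a * x - b * x * y                      # prey: growth minus predation
    dy = -c * y + d * x * y - holling3 * z      # predator
    dz = -g * z + 0.5 * holling3 * z            # super predator
    return dx, dy, dz

# Forward-Euler integration with a small step, as a quick sketch.
s = (1.0, 0.5, 0.2)
dt = 0.001
for _ in range(5000):   # integrate to t = 5
    s = tuple(v + dt * dv for v, dv in zip(s, food_chain(s)))
```

Sweeping the parameters of such a system and watching the long-run behavior switch between equilibria, limit cycles, and irregular oscillations is the numerical counterpart of the Hopf-bifurcation analysis described in the abstract.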
Zhang, Nan; Zhou, Peiheng; Cheng, Dengmu; Weng, Xiaolong; Xie, Jianliang; Deng, Longjiang
2013-04-01
We present the simulation, fabrication, and characterization of a dual-band metamaterial absorber in the mid-infrared regime. Two pairs of circular-patterned metal-dielectric stacks are employed to excite the dual-band absorption peaks. Dielectric characteristics of the dielectric spacing layer determine energy dissipation in each resonant stack, i.e., dielectric or ohmic loss. By controlling material parameters, both two mechanisms are introduced into our structure. Up to 98% absorption is obtained at 9.03 and 13.32 μm in the simulation, which is in reasonable agreement with experimental results. The proposed structure holds promise for various applications, e.g., thermal radiation modulators and multicolor infrared focal plane arrays.
Characterization and modeling of radiation effects NASA/MSFC semiconductor devices
NASA Technical Reports Server (NTRS)
Kerns, D. V., Jr.; Cook, K. B., Jr.
1978-01-01
A literature review of the near-Earth trapped radiation of the Van Allen Belts, the radiation within the solar system resulting from the solar wind, and the cosmic radiation levels of deep space showed that a reasonable simulation of space radiation, particularly the Earth orbital environment, could be achieved in the laboratory by proton bombardment. A 3 MeV proton accelerator was used to irradiate CMOS integrated circuits fabricated by three different processes. The drain current and output voltage of three inverters were recorded as the input voltage was swept from zero to ten volts after each successive irradiation. Device parameters were extracted. Possible damage mechanisms are discussed and recommendations for improved radiation hardness are suggested.
The structure and dynamics of tornado-like vortices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nolan, D.S.; Farrell, B.F.
The structure and dynamics of axisymmetric tornado-like vortices are explored with a numerical model of axisymmetric incompressible flow based on recently developed numerical methods. The model is first shown to compare favorably with previous results and is then used to study the effects of varying the major parameters controlling the vortex: the strength of the convective forcing, the strength of the rotational forcing, and the magnitude of the model eddy viscosity. Dimensional analysis of the model problem indicates that the results must depend on only two dimensionless parameters. The natural choices for these two parameters are a convective Reynolds number (based on the velocity scale associated with the convective forcing) and a parameter analogous to the swirl ratio in laboratory models. However, by examining sets of simulations with different model parameters it is found that a dimensionless parameter known as the vortex Reynolds number, which is the ratio of the far-field circulation to the eddy viscosity, is more effective than the conventional swirl ratio for predicting the structure of the vortex. The parameter space defined by the choices for model parameters is further explored with large sets of numerical simulations. For much of this parameter space it is confirmed that the vortex structure and time-dependent behavior depend strongly on the vortex Reynolds number and only weakly on the convective Reynolds number. The authors also find that for higher convective Reynolds numbers, the maximum possible wind speed increases, and the rotational forcing necessary to achieve that wind speed decreases. Physical reasoning is used to explain this behavior, and implications for tornado dynamics are discussed.
Broken bridges: a counter-example of the ER=EPR conjecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pisin; Wu, Chih-Hung; Yeom, Dong-han, E-mail: pisinchen@phys.ntu.edu.tw, E-mail: b02202007@ntu.edu.tw, E-mail: innocent.yeom@gmail.com
In this paper, we provide a counter-example to the ER=EPR conjecture. In an anti-de Sitter space, we construct a pair of maximally entangled but separated black holes. Due to the vacuum decay of the anti-de Sitter background toward a deeper vacuum, these two parts can be trapped by bubbles. If these bubbles are reasonably large, then within the scrambling time an Einstein-Rosen bridge should appear between the two black holes. By tracing more details of the bubble dynamics, one can identify parameters such that one of the two bubbles either monotonically shrinks or expands. Because of the change of vacuum energy, one side of the black hole pair would evaporate completely. Due to the shrinking of the apparent horizon, a signal from one side of the Einstein-Rosen bridge can be viewed from the opposite side. We demonstrate analytically and numerically that within a reasonable semi-classical parameter regime such a process can happen. Bubbles are a non-perturbative effect, which is the crucial reason that allows the transmission of information between the two black holes through the Einstein-Rosen bridge, even though the probability is highly suppressed. Therefore, the ER=EPR conjecture cannot be generic in its present form and its validity may be restricted.
Landsat 7 - A challenge to America
NASA Astrophysics Data System (ADS)
Colvocoresses, Alden P.
Factors in favor of Landsat 7 are discussed; they include: reasonable cost, a base on which to examine global change, and the need for comprehensive and continuous satellite coverage of the earth at moderate (5-30 m) resolution, in view of various occurrences on the earth's surface, ranging from the Chernobyl disaster to deforestation to the Persian Gulf conflict. Attention is given to proposed parameters for Landsat 7 and suggested actions that should be taken by Congress, the Administration, and the public to implement this space program.
Space station dynamic modeling, disturbance accommodation, and adaptive control
NASA Technical Reports Server (NTRS)
Wang, S. J.; Ih, C. H.; Lin, Y. H.; Metter, E.
1985-01-01
Dynamic models for two space station configurations were derived. Space shuttle docking disturbances and their effects on the station and solar panels are quantified. It is shown that hard shuttle docking can cause solar panel buckling. Soft docking and berthing can substantially reduce structural loads at the expense of large shuttle and station attitude excursions. It is found that predocking shuttle momentum reduction is necessary to achieve safe and routine operations. A direct model reference adaptive control is synthesized and evaluated for station model parameter errors and plant dynamics truncations. Both the rigid body and the flexible modes are treated. It is shown that convergence of the adaptive algorithm can be achieved in 100 seconds with reasonable performance, even during shuttle hard docking operations in which station mass and inertia are instantaneously changed by more than 100%.
NASA Technical Reports Server (NTRS)
Suit, William T.; Schiess, James R.
1988-01-01
The Discovery vehicle was found to have longitudinal and lateral aerodynamic characteristics similar to those of the Columbia and Challenger vehicles. The values of the lateral and longitudinal parameters are compared with the preflight data book. The lateral parameters showed the same trends as the data book. With the exception of C sub l sub Beta for Mach numbers greater than 15, C sub n sub delta r for Mach numbers greater than 2 and for Mach numbers less than 1.5, where the variation boundaries were not well defined, ninety percent of the extracted values of the lateral parameters fell within the predicted variations. The longitudinal parameters showed more scatter, but scattered about the preflight predictions. With the exception of the Mach 1.5 to .5 region of the flight envelope, the preflight predictions seem a reasonable representation of the Shuttle aerodynamics. The models determined accounted for ninety percent of the actual flight time histories.
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi
2011-08-01
The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes communication cost and improves fault tolerance, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the set of most promising solutions in a large design space, with low communication costs that are in some cases optimal. Another evaluated parameter, the vulnerability index, is then considered as a means of estimating the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
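A linear trade-off between the two evaluated metrics can be sketched as a weighted sum of normalized values. The field names, the weight `w`, and the sample mappings below are assumptions for illustration, not the paper's actual function or data.

```python
def rank_mappings(mappings, w=0.6):
    """Illustrative linear trade-off between the two evaluated metrics:
    score = w * normalized communication cost
          + (1 - w) * normalized vulnerability index (lower is better)."""
    costs = [m["comm_cost"] for m in mappings]
    vulns = [m["vulnerability"] for m in mappings]

    def norm(v, vs):
        lo, hi = min(vs), max(vs)
        return 0.0 if hi == lo else (v - lo) / (hi - lo)

    scored = [(w * norm(m["comm_cost"], costs)
               + (1 - w) * norm(m["vulnerability"], vulns), m["id"])
              for m in mappings]
    return [mid for _, mid in sorted(scored)]

# Hypothetical candidate mappings produced by the heuristic.
maps = [{"id": "A", "comm_cost": 120, "vulnerability": 0.9},
        {"id": "B", "comm_cost": 100, "vulnerability": 0.4},
        {"id": "C", "comm_cost": 150, "vulnerability": 0.1}]
order = rank_mappings(maps)
```

Adjusting `w` (or, as the paper suggests, replacing the fixed weight with fuzzy if-then rules) changes how strongly communication cost is prioritized over fault tolerance.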
Bayesian estimation of dynamic matching function for U-V analysis in Japan
NASA Astrophysics Data System (ADS)
Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro
2012-05-01
In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancies are regarded as time-varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time-varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using the Kalman filter and fixed-interval smoothing. In such a representation, dynamic features of the cyclical unemployment rate and the structural-frictional unemployment rate can be accurately captured.
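The state-space treatment of time-varying elasticities can be illustrated with a time-varying-coefficient regression under a random-walk smoothness prior, estimated by a Kalman filter. The synthetic data, noise variances, and drifting coefficients below are assumptions for illustration, not the Japanese U-V data or the authors' exact model.

```python
import numpy as np

def kalman_tv_regression(y, X, q=1e-3, r=1e-2):
    """Time-varying-coefficient regression y_t = X_t . b_t + e_t with a
    random-walk smoothness prior b_t = b_{t-1} + w_t, filtered by a
    Kalman filter (a sketch of the state-space approach)."""
    n, k = X.shape
    b = np.zeros(k)                  # state: current coefficients
    P = np.eye(k)                    # state covariance
    Q, R = q * np.eye(k), r
    out = np.zeros((n, k))
    for t in range(n):
        P = P + Q                    # predict step (random walk)
        h = X[t]
        S = h @ P @ h + R            # innovation variance
        K = P @ h / S                # Kalman gain
        b = b + K * (y[t] - h @ b)   # measurement update
        P = P - np.outer(K, h) @ P
        out[t] = b
    return out

# Synthetic log-linear matching data: one elasticity drifts slowly.
rng = np.random.default_rng(0)
n = 300
X = np.column_stack([np.ones(n), rng.normal(0, 1, n), rng.normal(0, 1, n)])
true_b = np.column_stack([np.full(n, 0.5),
                          np.linspace(0.3, 0.7, n),
                          np.full(n, 0.4)])
y = (X * true_b).sum(axis=1) + rng.normal(0, 0.1, n)
est = kalman_tv_regression(y, X)
```

The smoothness prior (small `q`) is what makes the underdetermined problem tractable: the filter tracks the slow drift in the coefficient rather than fitting each observation exactly.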
NASA Astrophysics Data System (ADS)
Lefebvre, Eric; Helleur, Christopher; Kashyap, Nathan
2008-03-01
Maritime surveillance of coastal regions requires operational staff to integrate a large amount of information from a variety of military and civilian sources. The diverse nature of the information sources makes complete automation difficult, and the volume of vessels tracked and the number of sources make it difficult for the limited operations centre staff to fuse all the information manually within a reasonable timeframe. In this paper, a conceptual decision space is proposed to provide a framework for automating how operators integrate the sources needed to maintain Maritime Domain Awareness. The decision space contains all potential pairs of ship tracks that are candidates for fusion. The location of a candidate pair in this space depends on the values of the parameters used to make a decision. In the application presented, three independent parameters are used: the source detection efficiency, the geo-feasibility, and the track quality. One of three decisions is applied to each candidate track pair based on these three parameters: 1. accept the fusion, in which case the tracks are fused into one track; 2. reject the fusion, in which case the candidate track pair is removed from the list of potential fusions; or 3. defer the fusion, in which case no fusion occurs but the candidate track pair remains in the list of potential fusions until sufficient information is available. This paper demonstrates in an operational setting how the proposed conceptual space is used to optimize the thresholds for automatic fusion decisions while minimizing the list of unresolved cases left to the operator.
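The accept/reject/defer logic can be illustrated with a toy decision function. The geometric-mean combination and the threshold values below are invented for illustration; the paper defines its decision regions over the three parameters directly:

```python
def fusion_decision(detection_eff, geo_feasibility, track_quality,
                    accept_thr=0.75, reject_thr=0.35):
    """Place a candidate track pair in a 3-parameter decision space and
    return one of 'accept', 'reject', 'defer'.  The score combination and
    thresholds are illustrative assumptions, not the paper's actual rule."""
    # geometric mean: any single weak parameter pulls the score down
    score = (detection_eff * geo_feasibility * track_quality) ** (1.0 / 3.0)
    if score >= accept_thr:
        return "accept"
    if score <= reject_thr:
        return "reject"
    return "defer"   # keep the pair pending until more information arrives

print(fusion_decision(0.9, 0.95, 0.85))  # high on all three axes -> accept
print(fusion_decision(0.2, 0.3, 0.4))    # low scores -> reject
print(fusion_decision(0.6, 0.5, 0.7))    # ambiguous -> defer
```

Tuning `accept_thr` and `reject_thr` trades automation against the size of the deferred list left to the operator, which is exactly the optimization the paper describes.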
Perspective: Sloppiness and emergent theories in physics, biology, and beyond.
Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P
2015-07-07
Large-scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information-theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same mechanism: they likewise emerge from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for a fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
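Sloppiness is easy to demonstrate numerically: for a model with nearly redundant parameters, the eigenvalues of the Fisher information matrix span orders of magnitude. The example below uses a generic sum-of-exponentials model (a standard sloppy-model testbed, not taken from the paper) and the Gauss-Newton approximation J^T J of the FIM:

```python
import numpy as np

def fisher_info(theta, t, eps=1e-6):
    """Gauss-Newton approximation J^T J of the Fisher information for the
    least-squares model y(t; theta) = sum_i exp(-theta_i * t)."""
    def model(th):
        return np.exp(-np.outer(t, th)).sum(axis=1)
    J = np.empty((len(t), len(theta)))
    for i in range(len(theta)):
        d = np.zeros_like(theta); d[i] = eps
        J[:, i] = (model(theta + d) - model(theta - d)) / (2 * eps)  # central difference
    return J.T @ J

t = np.linspace(0.0, 5.0, 50)
theta = np.array([1.0, 1.1])          # two nearly degenerate decay rates
lam = np.linalg.eigvalsh(fisher_info(theta, t))
print(lam[-1] / lam[0])               # stiff/sloppy eigenvalue ratio spans decades
```

The stiff direction (the sum of the rates) is well constrained by data; the sloppy direction (their difference) is nearly invisible, which is the hierarchy of manifold widths the abstract refers to.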
Estimation of primordial spectrum with post-WMAP 3-year data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafieloo, Arman; Souradeep, Tarun
2008-07-15
In this paper we implement an improved (error-sensitive) Richardson-Lucy deconvolution algorithm on the measured angular power spectrum from the Wilkinson Microwave Anisotropy Probe (WMAP) 3-year data to determine the primordial power spectrum, assuming different points in the cosmological parameter space for a flat ΛCDM cosmological model. We also present preliminary results of cosmological parameter estimation assuming a free form of the primordial spectrum, for a reasonably large volume of the parameter space. The recovered spectrum for a considerably large number of points in the cosmological parameter space has a likelihood far better than a 'best fit' power-law spectrum, up to Δχ²_eff ≈ -30. We use the discrete wavelet transform (DWT) for smoothing the raw spectrum recovered from the binned data. The results obtained here reconfirm and sharpen the conclusions drawn from our previous analysis of the WMAP first-year data. A sharp cutoff around the horizon scale and a bump after the horizon scale seem to be a common feature of all of these reconstructed primordial spectra. We have shown that although the WMAP 3-year data prefer a lower value of the matter density for a power-law form of the primordial spectrum, for a free form of the spectrum we can obtain a very good likelihood for higher values of the matter density. We have also shown that even a flat cold dark matter model, allowing a free form of the primordial spectrum, can give a very high-likelihood fit to the data. Theoretical interpretation of the results is open to the cosmology community. However, this work provides strong evidence that the data retain discriminatory power in the cosmological parameter space even when there is full freedom in choosing the primordial spectrum.
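The core Richardson-Lucy iteration (without the error-sensitive weighting the paper adds on top) can be sketched in a few lines. The 1-D test signal and blurring kernel below are invented for illustration:

```python
import numpy as np

def richardson_lucy(data, kernel, n_iter=100):
    """Basic 1-D Richardson-Lucy deconvolution: multiplicative updates that
    preserve non-negativity and converge toward the maximum-likelihood
    deblurred signal for Poisson-like data."""
    est = np.full_like(data, data.mean())          # flat positive initial guess
    kflip = kernel[::-1]                           # adjoint of the convolution
    for _ in range(n_iter):
        conv = np.convolve(est, kernel, mode="same")
        ratio = data / np.maximum(conv, 1e-12)     # guard against division by zero
        est = est * np.convolve(ratio, kflip, mode="same")
    return est

# blur a simple two-peak "spectrum" and recover it
x = np.zeros(64); x[20] = 1.0; x[40] = 0.5
kernel = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(x, kernel, mode="same")
recovered = richardson_lucy(blurred, kernel)
print(float(recovered[20]), float(recovered[40]))  # peaks sharpen back toward the truth
```

In the paper's setting the "kernel" is the radiative transport operator mapping the primordial spectrum to the angular power spectrum, and the iteration is additionally weighted by the measurement errors.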
Visualization of International Solar-Terrestrial Physics Program (ISTP) data
NASA Technical Reports Server (NTRS)
Kessel, Ramona L.; Candey, Robert M.; Hsieh, Syau-Yun W.; Kayser, Susan
1995-01-01
The International Solar-Terrestrial Physics Program (ISTP) is a multispacecraft, multinational program whose objective is to promote further understanding of the Earth's complex plasma environment. Extensive data sharing and data analysis will be needed to ensure the success of the overall ISTP program. For this reason, there has been a special emphasis on data standards throughout ISTP. One of the key tools will be the common data format (CDF), developed, maintained, and evolved at the National Space Science Data Center (NSSDC), with the set of ISTP implementation guidelines specially designed for space physics data sets by the Space Physics Data Facility (associated with the NSSDC). The ISTP guidelines were developed to facilitate searching, plotting, merging, and subsetting of data sets. We focus here on the plotting application. A prototype software package was developed to plot key parameter (KP) data from the ISTP program at the Science Planning and Operations Facility (SPOF). The ISTP Key Parameter Visualization Tool is based on the Interactive Data Language (IDL) and is keyed to the ISTP guidelines, reading data stored in CDF. With the combination of CDF, the ISTP guidelines, and the visualization software, we can look forward to easier and more effective data sharing and use among ISTP scientists.
NASA Astrophysics Data System (ADS)
Zhuang, Bin; Wang, Yuming; Shen, Chenglong; Liu, Siqing; Wang, Jingjing; Pan, Zonghao; Li, Huimin; Liu, Rui
2017-08-01
As one of the most violent astrophysical phenomena, coronal mass ejections (CMEs) have strong potential space weather effects. However, not all Earth-directed CMEs encounter the Earth and produce geo-effects; one reason is the deflected propagation of CMEs in interplanetary space. Although several case studies have clearly shown such deflections, it has not yet been statistically assessed how significantly deflected propagation influences a CME's arrival at Earth. We develop an integrated CME-arrival forecasting (iCAF) system, assembling modules for CME detection, three-dimensional (3D) parameter derivation, and trajectory reconstruction, to predict whether or not a CME arrives at Earth, and we assess the influence of deflection on CME-arrival forecasting. The performance of iCAF is tested by comparing the two-dimensional (2D) parameters with those in the Coordinated Data Analysis Workshop (CDAW) Data Center catalog, comparing the 3D parameters with those of the gradual cylindrical shell model, and estimating the success rate of the CME Earth-arrival predictions. It is found that the 2D parameters provided by iCAF and the CDAW catalog are consistent with each other, and the 3D parameters derived by the ice cream cone model based on single-view observations are acceptable. The success rate of the CME-arrival predictions by iCAF with deflection considered is about 82%, which is 19% higher than that without deflection, indicating the importance of CME deflection for reliable forecasting. Furthermore, iCAF is a completely automatic system that takes deflection into account, making it a worthwhile project.
Using state-space models to predict the abundance of juvenile and adult sea lice on Atlantic salmon.
Elghafghuf, Adel; Vanderstichel, Raphael; St-Hilaire, Sophie; Stryhn, Henrik
2018-04-11
Sea lice are marine parasites affecting salmon farms, and are considered one of the most costly pests of the salmon aquaculture industry. Infestations of sea lice on farms significantly increase opportunities for the parasite to spread in the surrounding ecosystem, making control of this pest a challenging issue for salmon producers. The complexity of controlling sea lice on salmon farms requires frequent monitoring of the abundance of different sea lice stages over time. Industry-based data sets of counts of lice are amenable to multivariate time-series data analyses. In this study, two sets of multivariate autoregressive state-space models were applied to Chilean sea lice data from six Atlantic salmon production cycles on five isolated farms (at least 20 km seaway distance away from other known active farms), to evaluate the utility of these models for predicting sea lice abundance over time on farms. The models were constructed with different parameter configurations, and the analysis demonstrated large heterogeneity between production cycles for the autoregressive parameter, the effects of chemotherapeutant bath treatments, and the process-error variance. A model allowing for different parameters across production cycles had the best fit and the smallest overall prediction errors. However, pooling information across cycles for the drift and observation error parameters did not substantially affect model performance, thus reducing the number of necessary parameters in the model. Bath treatments had strong but variable effects for reducing sea lice burdens, and these effects were stronger for adult lice than juvenile lice. Our multivariate state-space models were able to handle different sea lice stages and provide predictions for sea lice abundance with reasonable accuracy up to five weeks out.
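A minimal sketch of the kind of autoregressive state-space prediction described above, with invented parameter values (the paper estimates these per production cycle from farm data):

```python
import numpy as np

def simulate_and_predict(n=60, phi=0.9, drift=0.05, treat_effect=-1.2,
                         sigma=0.15, seed=1):
    """Toy autoregressive state-space model for log sea-lice abundance with
    bath-treatment pulses.  All parameter values are invented placeholders,
    not fitted values from the study."""
    rng = np.random.default_rng(seed)
    treat = np.zeros(n); treat[[20, 40]] = 1.0     # two bath treatments
    x = np.zeros(n)                                # log-abundance state
    for t in range(1, n):
        x[t] = phi * x[t-1] + drift + treat_effect * treat[t] + rng.normal(0, sigma)
    # one-step-ahead predictions from the known dynamics
    pred = phi * x[:-1] + drift + treat_effect * treat[1:]
    return x, pred

x, pred = simulate_and_predict()
rmse = float(np.sqrt(np.mean((x[1:] - pred) ** 2)))
print(rmse)  # close to the process-error s.d. (0.15) by construction
```

The fitted models in the paper add an observation equation on top of this state process and estimate the treatment effect and variances per cycle, which is where the reported heterogeneity appears.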
NASA Astrophysics Data System (ADS)
Hellen, Edward H.; Volkov, Evgeny
2018-09-01
We study the dynamical regimes exhibited by a pair of identical 3-element ring oscillators (a reduced version of the synthetic 3-gene genetic Repressilator) coupled using the design of the 'quorum sensing' (QS) process natural to interbacterial communication. In this work QS is implemented as an additional network incorporating elements of the ring as both the source and the activation target of the fast-diffusing QS signal. This version of indirect nonlinear coupling, in cooperation with a reasonable extension of the parameters which control the properties of the isolated oscillators, exhibits the formation of a very rich array of attractors. Using a parameter space defined by the individual oscillator amplitude and the coupling strength, we found an extended area of parameter space where the identical oscillators demonstrate quasiperiodicity, which evolves to chaos via period doubling of either resonant limit cycles or complex antiphase symmetric limit cycles with five winding numbers. The symmetric chaos extends over large parameter areas up to its loss of stability, followed by a system transition to an unexpected mode: an asymmetric limit cycle with a winding number of 1:2. In turn, after a long evolution across the parameter space, this cycle demonstrates a period-doubling cascade which restores the symmetry of the dynamics by forming symmetric chaos, which nevertheless preserves the memory of the asymmetric limit cycles in the form of stochastically alternating "polarization" of the time series. All stable attractors coexist with some others, forming remarkable and complex multistability, including the coexistence of torus and limit cycles, chaos and regular attractors, and symmetric and asymmetric regimes. We trace the paths and bifurcations leading to all areas of chaos and present a detailed map of all transformations of the dynamics.
An Integrated Approach to Parameter Learning in Infinite-Dimensional Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, Zachary M.; Wendelberger, Joanne Roth
The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated phenomena, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, navigating parameter space becomes highly non-trivial, especially considering that accurate simulations can be expensive in terms of both time and money. Existing solutions include batch-parallel simulations, high-dimensional derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of these techniques by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way and to view the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and arrive more quickly at the desired parameter set.
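The random-walk descent plus fPCA-style dimensionality reduction can be sketched on a toy misfit function. The quadratic objective and coarse time grid below are invented stand-ins for the expensive neutron-diffusion misfit:

```python
import numpy as np

def random_walk_minimize(f, x0, n_steps=400, step=0.1, seed=0):
    """Sketch of the random-walk descent described above: propose a Gaussian
    perturbation of the discretized control function, keep it only if the
    objective improves, and record the path for later inspection."""
    rng = np.random.default_rng(seed)
    x, fx = x0.copy(), f(x0)
    history = [x.copy()]
    for _ in range(n_steps):
        cand = x + rng.normal(0.0, step, size=x.shape)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
        history.append(x.copy())
    return np.array(history), fx

# unknown "control function" on a coarse time grid; the target is a ramp
t = np.linspace(0.0, 1.0, 16)
target = t.copy()
f = lambda u: float(np.sum((u - target) ** 2))   # stand-in for the PDE misfit
history, best = random_walk_minimize(f, np.zeros_like(t))

# fPCA-style view: project the walker's path onto its leading principal component
centered = history - history.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = float(s[0] ** 2 / np.sum(s ** 2))
print(best, explained)  # misfit shrinks; one component captures most of the path
```

Plotting the path in the first two principal-component coordinates is what lets a human suggest where the walker should try next, as the abstract describes.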
Strength of Zerodur® for mirror applications
NASA Astrophysics Data System (ADS)
Béhar-Lafenêtre, S.; Cornillon, Laurence; Ait-Zaid, Sonia
2015-09-01
Zerodur® is a well-known glass-ceramic used for optical components because of its unequalled dimensional stability in thermal environments. In particular, it has been used for decades in Thales Alenia Space's optical payloads for space telescopes, especially for mirrors. The drawback of Zerodur®, however, is its quite low strength; the relatively small size of mirrors in the past made it unnecessary to investigate this aspect further, although elementary tests have always shown higher failure strengths. As the performance of space telescopes increases, the size of mirrors increases accordingly, and an optimization of the design is necessary, mainly for mass saving. The question of the effective strength of Zerodur® has therefore become a real issue. Thales Alenia Space investigated the application of the Weibull law and the associated size effects on Zerodur® in 2014, under CNES funding, through a thorough test campaign with a large number of samples (300) of various types. The purpose was to accurately determine the parameters of the Weibull law for Zerodur® machined under the same conditions as mirrors. This paper discusses the obtained results in the light of Weibull theory. The applicability of the 2-parameter and 3-parameter (with threshold strength) laws is compared. The expected size effect was not observed, so investigations are under way to determine the reasons for this result, from the quality of the test implementation to the data post-processing methodology. However, this test campaign has already provided enough data to safely increase the allowable value for mirror sizing.
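A standard way to estimate the two Weibull parameters from a strength campaign is median-rank regression. The sketch below runs it on synthetic data with a known modulus; the true values, units, and sample size are illustrative, not the campaign's results:

```python
import numpy as np

def weibull_fit(strengths):
    """2-parameter Weibull fit by median-rank regression on the linearized
    form  ln(-ln(1-F_i)) = m*ln(sigma_i) - m*ln(sigma_0)."""
    s = np.sort(np.asarray(strengths, float))
    n = len(s)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's median ranks
    y = np.log(-np.log(1.0 - F))
    m, c = np.polyfit(np.log(s), y, 1)            # slope = Weibull modulus m
    sigma0 = np.exp(-c / m)                       # scale (characteristic strength)
    return float(m), float(sigma0)

# synthetic strengths drawn from a known Weibull law (m=8, sigma0=100 MPa)
rng = np.random.default_rng(2)
sample = 100.0 * rng.weibull(8.0, size=300)       # 300 samples, as in the campaign
m_hat, s0_hat = weibull_fit(sample)
print(m_hat, s0_hat)  # estimates land near the true m=8 and sigma0=100
```

The size-effect prediction of Weibull theory follows from these parameters: scaling the stressed volume by V2/V1 rescales the characteristic strength by (V1/V2)^(1/m), which is the effect the campaign failed to observe.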
Cooperation and competition between two symmetry breakings in a coupled ratchet
NASA Astrophysics Data System (ADS)
Li, Chen-Pu; Chen, Hong-Bin; Fan, Hong; Xie, Ge-Ying; Zheng, Zhi-Gang
2018-03-01
We investigate the collective mechanism of coupled Brownian motors in a flashing ratchet in the presence of coupling symmetry breaking and spatial symmetry breaking. The dependence of the directed current on various parameters is studied extensively in terms of numerical simulations and theoretical analysis. Reversed motion can be achieved by modulating multiple parameters, including the spatial asymmetry coefficient, the coupling asymmetry coefficient, the coupling free length and the coupling strength. The dynamical mechanism of these transport properties can be reasonably explained by the effective potential theory and by the cooperation or competition between the two symmetry breakings. Moreover, adjusting the Gaussian white noise intensity, which can induce weak reversed motion under certain conditions, can optimize and manipulate the directed transport of the ratchet system.
Phase change in liquid face seals
NASA Technical Reports Server (NTRS)
Hughes, W. F.; Winowich, N. S.; Birchak, M. J.; Kennedy, W. C.
1978-01-01
A study is made of boiling (or phase change) in liquid face seals. An appropriate model is set up and approximate solutions are obtained. Some practical illustrative examples are given. The major conclusions are that (1) boiling may occur more often than has been suspected, particularly when the sealed liquid is near saturation conditions; (2) the temperature variation in the seal clearance region may not be very great, and the main reason for boiling is the flashing which occurs as the pressure decreases through the seal clearance; and (3) there are two separate values of the parameter film-thickness/angular-velocity-squared (and associated radii where phase change takes place) which provide the same separating force under a given set of operating conditions. For a given speed, seal face excursions about the larger spacing are stable, but excursions about the smaller spacing are unstable, leading to growth to the larger spacing or a catastrophic collapse.
A Characterization of Dynamic Reasoning: Reasoning with Time as Parameter
ERIC Educational Resources Information Center
Keene, Karen Allen
2007-01-01
Students incorporate and use the implicit and explicit parameter time to support their mathematical reasoning and deepen their understandings as they participate in a differential equations class during instruction on solutions to systems of differential equations. Therefore, dynamic reasoning is defined as developing and using conceptualizations…
A Stochastic Fractional Dynamics Model of Space-time Variability of Rain
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Travis, James E.
2013-01-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second-moment statistics of the radar data. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well without any further adjustment.
Electrostatics of lipid bilayer bending.
Chou, T; Jarić, M V; Siggia, E D
1997-01-01
The electrostatic contribution to spontaneous membrane curvature is calculated within Poisson-Boltzmann theory under a variety of assumptions and emphasizing parameters in the physiological range. Asymmetrical surface charges can be fixed with respect to bilayer midplane area or with respect to the lipid-water area, but induce curvatures of opposite signs. Unequal screening layers on the two sides of a vesicle (e.g., multivalent cationic proteins on one side and monovalent salt on the other) also induce bending. For reasonable parameters, tubules formed by electrostatically induced bending can have radii in the 50-100-nm range, often seen in many intracellular organelles. Thus membrane associated proteins may induce curvature and subsequent budding, without themselves being intrinsically curved. Furthermore, we derive the previously unexplored effects of respecting the strict conservation of charge within the interior of a vesicle. The electrostatic component of the bending modulus is small under most of our conditions and is left as an experimental parameter. The large parameter space of conditions is surveyed in an array of graphs. PMID:9129807
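The basic length scale in any Poisson-Boltzmann membrane calculation is the Debye screening length. A quick sketch for physiological monovalent salt follows; the formula is standard, and the temperature and relative permittivity are typical assumed values:

```python
import math

def debye_length_nm(c_molar, T=310.0, eps_r=80.0, z=1):
    """Debye screening length (in nm) for a z:z electrolyte at molar
    concentration c_molar: kappa^2 = 2*n*(z*e)^2 / (eps_r*eps0*kB*T)."""
    e = 1.602176634e-19          # elementary charge, C
    kB = 1.380649e-23            # Boltzmann constant, J/K
    eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
    NA = 6.02214076e23           # Avogadro's number, 1/mol
    n = c_molar * 1000.0 * NA    # number density per ion species, 1/m^3
    kappa2 = 2.0 * n * (z * e) ** 2 / (eps_r * eps0 * kB * T)
    return 1e9 / math.sqrt(kappa2)

print(debye_length_nm(0.15))  # ~0.8 nm for 150 mM monovalent salt
```

That sub-nanometer screening length on the salt side, versus much weaker screening by dilute multivalent proteins on the other side, is why the asymmetric-screening geometries in the paper generate net bending.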
Dynamic reasoning in a knowledge-based system
NASA Technical Reports Server (NTRS)
Rao, Anand S.; Foo, Norman Y.
1988-01-01
Any space-based system, whether a robot arm assembling parts in space or an onboard system monitoring the space station, has to react to changes which cannot be foreseen. As a result, apart from having domain-specific knowledge as in current expert systems, a space-based AI system should also have general principles of change. This paper presents a modal logic which can not only represent change but also reason with it. Three primitive operations, expansion, contraction and revision, are introduced, and axioms which specify how the knowledge base should change when the external world changes are also specified. Accordingly the notion of dynamic reasoning is introduced, which, unlike existing forms of reasoning, provides general principles of change. Dynamic reasoning is based on two main principles, namely minimize change and maximize coherence. A possible-world semantics which incorporates the above two principles is also discussed. The paper concludes by discussing how the dynamic reasoning system can be used to specify actions and hence form an integral part of an autonomous reasoning and planning system.
Urzhumtseva, Ludmila; Lunina, Natalia; Fokine, Andrei; Samama, Jean Pierre; Lunin, Vladimir Y; Urzhumtsev, Alexandre
2004-09-01
The connectivity-based phasing method has been demonstrated to be capable of finding molecular packing and envelopes even for difficult cases of structure determination, as well as of identifying, in favorable cases, secondary-structure elements of protein molecules in the crystal. This method uses a single set of structure factor magnitudes and general topological features of a crystallographic image of the macromolecule under study. This information is expressed through a number of parameters. Most of these parameters are easy to estimate, and the results of phasing are practically independent of them when they are chosen within reasonable limits. By contrast, the correct choice for such parameters as the expected number of connected regions in the unit cell is sometimes ambiguous. To study these dependencies, numerous tests were performed with simulated data, experimental data and mixed data sets, where several reflections missing from the experiment were completed with computed data. This paper demonstrates that the procedure is able to control this choice automatically and helps in difficult cases to identify the correct number of molecules in the asymmetric unit. In addition, the procedure behaves abnormally if the space group is defined incorrectly and may therefore distinguish between rotation and screw axes even when high-resolution data are not available.
Schweiner, Frank; Laturner, Jeanine; Main, Jörg; Wunner, Günter
2017-11-01
Until now, analytical formulas for the level spacing distribution function have been derived within random matrix theory only for specific crossovers between Poissonian statistics (P), the statistics of a Gaussian orthogonal ensemble (GOE), or the statistics of a Gaussian unitary ensemble (GUE). We investigate arbitrary crossovers in the triangle between all three statistics. To this end we propose a formula for the level spacing distribution function depending on two parameters. Comparing the behavior of our formula for the special cases of P→GUE, P→GOE, and GOE→GUE with the results from random matrix theory, we show that these crossovers are described reasonably well. Recent investigations by F. Schweiner et al. [Phys. Rev. E 95, 062205 (2017)] have shown that the Hamiltonian of magnetoexcitons in cubic semiconductors can exhibit all three statistics depending on the system parameters. Evaluating the numerical results for magnetoexcitons in dependence on the excitation energy and on a parameter connected with the cubic valence band structure, and comparing the results with the proposed formula, allows us to distinguish between regular and chaotic behavior as well as between existent and broken antiunitary symmetries. Increasing one of the two parameters, transitions between different crossovers, e.g., from the P→GOE to the P→GUE crossover, are observed and discussed.
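For comparison, a widely used one-parameter interpolation between the Poisson and GOE limits is the Brody distribution (the paper's own two-parameter formula also reaches the GUE corner; the form below is the textbook one, not theirs). A quick numerical check confirms unit normalization and unit mean spacing across the crossover:

```python
import numpy as np
from math import gamma

def brody(s, q):
    """Brody level spacing distribution: q=0 gives Poisson exp(-s),
    q=1 gives the GOE Wigner surmise (pi/2) s exp(-pi s^2 / 4)."""
    a = gamma((q + 2.0) / (q + 1.0)) ** (q + 1.0)   # fixes mean spacing to 1
    return (q + 1.0) * a * s ** q * np.exp(-a * s ** (q + 1.0))

def trap(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

s = np.linspace(0.0, 12.0, 24001)
results = {}
for q in (0.0, 0.5, 1.0):
    p = brody(s, q)
    results[q] = (trap(p, s), trap(s * p, s))  # (normalization, mean spacing)
print(results)  # both quantities stay at 1 for every q
```

A two-parameter family like the paper's can be validated the same way: integrate the proposed density numerically and check the normalization, the mean spacing, and the limiting P, GOE, and GUE forms at the corners.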
Towards reasoning and coordinating action in the mental space.
Mohan, Vishwanathan; Morasso, Pietro
2007-08-01
Unlike a purely reactive system, where the motor output is exclusively controlled by the actual sensory input, a cognitive system must be capable of running mental processes which virtually simulate action sequences aimed at achieving a goal. The mental process either attempts to find a feasible course of action compatible with a number of constraints (internal, environmental, task-specific, etc.) or selects one from a repertoire of previously learned actions, according to the parameters of the task. If neither reasoning process succeeds, a typical backup strategy is to look for a tool that might allow the operator to match all the task constraints. This further necessitates the capability to alter one's own goal structure to generate sub-goals which must be successfully accomplished in order to achieve the primary goal. In this paper, we introduce a forward/inverse motor control architecture (FMC/IMC) that relaxes an internal model of the overall kinematic chain to a virtual force field applied to the end effector in the intended direction of movement. This is analogous to the mechanism of coordinating the motion of a wooden marionette by means of attached strings. The relaxation of the FMC/IMC pair provides a general solution for mentally simulating the action of reaching a target position, taking into consideration a range of geometric constraints (range of motion in the joint space, internal and external constraints in the workspace) as well as effort-related constraints (range of torque of the actuators, etc.). If the forward simulation is successful, the movement is executed; otherwise the residual "error", or measure of inconsistency, is taken as a starting point for breaking the action plan into a sequence of sub-actions. This process is achieved using a recurrent neural network (RNN) which coordinates the overall reasoning process of framing and issuing goals to the forward/inverse models, searching for alternative tools in the solution space, and forming sub-goals based on past context knowledge and present inputs. The RNN + FMC/IMC system is able to successfully reason about and coordinate a diverse range of reaching and grasping sequences with and without tools. Using a simple robotic platform (5-DOF Scorbot arm + stereo vision), we present results on reasoning and coordination of arm/tool movements (real and mentally simulated), specifically directed towards solving the classical two-stick paradigm from animal reasoning at a non-linguistic level.
Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback
NASA Astrophysics Data System (ADS)
Bruni, Renato; Celani, Fabio
2016-10-01
The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law which has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose here an innovative, systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: 1) the convergence time cannot be expressed in analytical form as a function of the parameters and initial conditions; 2) the design parameters may range over very wide intervals; 3) the convergence time also depends on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. Such algorithms do not require an analytical expression for the objective function; they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize the convergence time under the worst initial conditions. Results are very promising.
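The min-max idea can be sketched with a derivative-free grid search on a toy plant. The double-integrator dynamics, candidate gains, and initial conditions below are invented stand-ins for the spacecraft attitude model and its four-parameter control law:

```python
import numpy as np

def convergence_time(gains, x0, dt=0.01, tmax=50.0, tol=1e-2):
    """Settling time of a toy double integrator under PD feedback
    u = -k1*x - k2*v, integrated with forward Euler.  This stands in for
    the (non-analytical) convergence time of the real attitude dynamics."""
    k1, k2 = gains
    x, v = x0
    t = 0.0
    while t < tmax:
        if abs(x) < tol and abs(v) < tol:
            return t
        a = -k1 * x - k2 * v
        x, v = x + dt * v, v + dt * a
        t += dt
    return tmax  # did not settle within the horizon

def minmax_search(candidates, initial_conditions):
    """Derivative-free min-max: choose the gains that minimize the WORST
    settling time over the sampled initial conditions."""
    best, best_worst = None, np.inf
    for g in candidates:
        worst = max(convergence_time(g, ic) for ic in initial_conditions)
        if worst < best_worst:
            best, best_worst = g, worst
    return best, best_worst

candidates = [(k1, k2) for k1 in (0.5, 1.0, 2.0, 4.0) for k2 in (0.5, 1.0, 2.0, 4.0)]
ics = [(1.0, 0.0), (-1.0, 0.5), (0.5, -1.0)]
gains, worst_t = minmax_search(candidates, ics)
print(gains, worst_t)
```

The paper replaces the exhaustive grid with derivative-free optimizers plus a fast probing step, but the structure is the same: each objective evaluation is itself a simulation, and robustness comes from taking the max over initial conditions before minimizing.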
An adaptive learning control system for large flexible structures
NASA Technical Reports Server (NTRS)
Thau, F. E.
1985-01-01
The objective of the research has been to study the design of adaptive/learning control systems for the control of large flexible structures. In the first activity an adaptive/learning control methodology for flexible space structures was investigated. The approach was based on using a modal model of the flexible structure dynamics and an output-error identification scheme to identify modal parameters. In the second activity, a least-squares identification scheme was proposed for estimating both modal parameters and modal-to-actuator and modal-to-sensor shape functions. The technique was applied to experimental data obtained from the NASA Langley beam experiment. In the third activity, a separable nonlinear least-squares approach was developed for estimating the number of excited modes, shape functions, modal parameters, and modal amplitude and velocity time functions for a flexible structure. In the final research activity, a dual-adaptive control strategy was developed for regulating the modal dynamics and identifying modal parameters of a flexible structure. A min-max approach was used for finding an input to provide modal parameter identification while not exceeding reasonable bounds on modal displacement.
Rheological constraints on ridge formation on Icy Satellites
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Manga, M.
2010-12-01
The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) the choice of rheological parameters and (2) the maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable D, Ḋ = B⟨σ⟩^r (1 − D)^(−k) − αDp/μ, and in the equation relating damage accumulation to volumetric changes, Jρ₀ = δ(1 − D). Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants B, α, and δ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa's ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.
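A damage evolution law of the form dD/dt = B⟨σ⟩^r (1 − D)^(−k) − αDp/μ (the form implied by the abstract) can be integrated directly. The sketch below uses forward Euler with purely illustrative parameter values, not the study's fitted constants; the Macaulay bracket ⟨σ⟩ is approximated by clipping stress at zero, and D is capped below 1 to avoid the singularity in the growth term.

```python
# All parameter values below are illustrative assumptions, not the paper's fits.
B, r, k, alpha = 1e-3, 2.0, 1.0, 1e-4
p_over_mu = 0.5          # pressure over viscosity, lumped for simplicity
sigma = 1.5              # applied stress; only positive stress accumulates damage
dt, n = 0.1, 2000

D = 0.0
for _ in range(n):
    growth = B * max(sigma, 0.0) ** r * (1.0 - D) ** (-k)   # damage accumulation
    healing = alpha * D * p_over_mu                          # pressure-driven healing
    D = min(D + dt * (growth - healing), 0.999)              # cap to avoid singularity
```

Sweeping B, α, and δ in such an integration is the kind of parameter-space survey the abstract's numerical experiments perform at full scale.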
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
Robust control with structured perturbations
NASA Technical Reports Server (NTRS)
Keel, Leehyun
1988-01-01
Two important problems in the area of control systems design and analysis are discussed. The first is robust stability analysis using the characteristic polynomial, which is treated first in characteristic polynomial coefficient space with respect to perturbations in the coefficients of the characteristic polynomial, and then for a control system containing perturbed parameters in the transfer function description of the plant. In coefficient space, a simple expression is first given for the l² stability margin for both monic and non-monic cases. Following this, a method is extended to reveal a much larger stability region. This result has been extended to the parameter space, so that one can determine the stability margin, in terms of ranges of parameter variations, of the closed-loop system when the nominal stabilizing controller is given. The stability margin can be enlarged by the choice of a better stabilizing controller. The second problem is the lower-order stabilization problem, whose motivation is as follows. Even though a wide range of stabilizing controller design methodologies is available in both the state space and transfer function domains, all of these methods produce unnecessarily high-order controllers. In practice, stabilization is only one of many requirements to be satisfied. Therefore, if the order of a stabilizing controller is excessively high, one can normally expect an even higher-order controller on completion of the design, for example after the inclusion of dynamic response requirements. It is therefore reasonable to obtain the lowest possible order stabilizing controller first and then adjust the controller to meet additional requirements. An algorithm for designing a lower-order stabilizing controller is given. The algorithm does not necessarily produce the minimum-order controller; however, it is theoretically sound, and simulation results show that it works in general.
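The coefficient-space stability margin can be illustrated by brute force (this is a numerical probe, not the paper's closed-form l² expression): starting from a stable monic polynomial, sample random coefficient perturbations on spheres of growing l² radius and record the first radius at which a destabilizing perturbation is found. The polynomial and radius grid are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
nominal = np.array([1.0, 6.0, 11.0, 6.0])   # s^3+6s^2+11s+6 = (s+1)(s+2)(s+3), Hurwitz

def is_hurwitz(coeffs):
    # Stable iff every root has negative real part.
    return np.all(np.roots(coeffs).real < 0)

def first_destabilizing_radius(radii, samples=400):
    # Probe l2 spheres of increasing radius in the space of non-leading coefficients.
    for rad in radii:
        for _ in range(samples):
            d = rng.normal(size=3)
            d *= rad / np.linalg.norm(d)
            # Monic case: leading coefficient is held fixed at 1.
            if not is_hurwitz(nominal + np.concatenate([[0.0], d])):
                return rad
    return None

rad = first_destabilizing_radius(np.linspace(0.5, 8.0, 16))
```

The returned radius is an upper bound on the true l² margin; the paper's contribution is precisely that this margin can be computed exactly rather than sampled.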
32 CFR 644.135 - Lease authorization and approvals.
Code of Federal Regulations, 2014 CFR
2014-07-01
... approvals from the Assistant Secretary of the Army (Installations, Logistics and Financial Management) and... geographical area in which the availability of Government-owned space was surveyed, together with reasons for limiting the area. The mission is to be set forth in detail, along with the reason(s) why space in this...
Energy and momentum analysis of the deployment dynamics of nets in space
NASA Astrophysics Data System (ADS)
Botta, Eleonora M.; Sharf, Inna; Misra, Arun K.
2017-11-01
In this paper, the deployment dynamics of nets in space is investigated through a combination of analysis and numerical simulations. The considered net is deployed by ejecting several corner masses, thanks to momentum and energy transfer from these masses to the innermost threads of the net. In this study, the net is modeled with a lumped-parameter approach, and assumed to be symmetrical, subject to symmetrical initial conditions, and initially slack. The work-energy and momentum conservation principles are employed to carry out a centroidal analysis of the net, by conceptually partitioning the net into a system of corner masses and the net proper and applying the aforementioned principles to the corresponding centers of mass. The analysis provides bounds on the values that the velocity of the center of mass of the corner masses and the velocity of the center of mass of the net proper can individually attain, as well as relationships between these and different energy contributions. The analytical results allow us to identify key parameters characterizing the deployment dynamics of nets in space, which include the ratio between the mass of the corner masses and the total mass, the initial linear momentum, and the direction of the initial velocity vectors. Numerical tools are employed to validate and further interpret the analytical observations. Comparison of deployment results with and without initial velocity of the net proper suggests that more complete and lasting deployment can be achieved if the corner masses alone are ejected. A sensitivity study is performed for the key parameters identified from the energy/momentum analysis, and the outcome establishes that more lasting deployment and safer capture (i.e., characterized by higher traveled distance) can be achieved by employing reasonably lightweight corner masses, moderate shooting angles, and low shooting velocities.
A comparison with current literature on tether-nets for space debris capture confirms overall agreement on the importance and effect of the relevant inertial and ejection parameters on the deployment dynamics.
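The centroidal momentum bookkeeping described above reduces to elementary conservation arithmetic. The sketch below partitions the system into the corner masses and the net proper, as in the abstract, and computes two bounding quantities: the maximum speed the net-proper centroid could attain if all momentum were transferred, and the common speed in the perfectly inelastic limit where the threads synchronize both centroids. All masses and speeds are illustrative assumptions, not values from the paper.

```python
# Illustrative numbers (assumed, not from the paper).
m_corner_total = 4 * 0.5      # four corner masses of 0.5 kg each
m_net = 2.0                   # mass of the net proper, kg
v_corner0 = 2.5               # ejection speed of the corner-mass centroid, m/s
v_net0 = 0.0                  # net proper initially slack and at rest

# Total linear momentum along the deployment axis is conserved:
# only internal thread tensions act between the two subsystems.
p_total = m_corner_total * v_corner0 + m_net * v_net0

# Upper bound on the net-proper centroid speed (all momentum transferred).
v_net_max = p_total / m_net

# Common speed if the threads fully synchronize both centroids.
v_common = p_total / (m_corner_total + m_net)
```

The mass ratio m_corner_total / (m_corner_total + m_net) directly controls both bounds, which is one way to see why that ratio emerges as a key parameter in the analysis.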
Model parameter learning using Kullback-Leibler divergence
NASA Astrophysics Data System (ADS)
Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan
2018-02-01
In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to the dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Vicsek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Vicsek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
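A toy version of the KL-minimization idea fits in a few lines: recover a 1D Ising coupling J by minimizing KL(p_data || p_J) with exact enumeration on a tiny periodic chain. For the Boltzmann family, the KL gradient with respect to J is the difference of bond-sum expectations under the model and the data, so plain gradient descent suffices. Chain length, true coupling, and learning rate are illustrative assumptions.

```python
import itertools
import numpy as np

N, J_true = 6, 0.7
states = np.array(list(itertools.product([-1, 1], repeat=N)))
# Nearest-neighbor bond sum sum_i s_i s_{i+1} on a periodic chain.
bonds = (states * np.roll(states, -1, axis=1)).sum(axis=1)

def boltzmann(J):
    w = np.exp(J * bonds)
    return w / w.sum()

p_data = boltzmann(J_true)          # "observed" configuration distribution
mean_data = p_data @ bonds

# Gradient descent on KL(p_data || p_J): dKL/dJ = <bonds>_model - <bonds>_data.
J = 0.0
for _ in range(2000):
    J -= 0.05 * (boltzmann(J) @ bonds - mean_data)
```

Because the data distribution lies in the same exponential family, the minimizer is exactly J_true; with real spin-configuration samples, `p_data` would be replaced by empirical frequencies and the same update applies.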
Automatic Determination of the Conic Coronal Mass Ejection Model Parameters
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Oates, T.; Taktakishvili, A.
2009-01-01
Characterization of the three-dimensional structure of solar transients using incomplete plane-of-sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of a conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
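The bootstrap step the abstract mentions is a generic resampling recipe and can be sketched directly. Here the per-event cone half-angle estimates are synthetic stand-ins (not coronagraph fits): resampling them with replacement yields a distribution of the mean half-angle, from which a percentile interval follows.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic per-image half-angle "fits" in degrees (illustrative, not CME data).
half_angles = rng.normal(40.0, 4.0, size=30)

# Classical bootstrap: resample with replacement, re-estimate, repeat.
boot_means = np.array([
    rng.choice(half_angles, size=half_angles.size, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])   # 95% bootstrap interval
```

Feeding the whole `boot_means` distribution (rather than a point estimate) into a heliospheric model is what enables the ensemble predictions described in the abstract.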
Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying
2018-01-01
Ecological process models are powerful tools for studying the terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are chosen for them has an important impact on simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many previous studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest, and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a combined temporal-spatial heterogeneity judgment index to quantitatively analyze the heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to different degrees, varying with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger heterogeneity.
In addition, the temporal heterogeneity of the optimal values of the model's sensitive parameters showed a significant linear correlation with the spatial heterogeneity under the three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model can be classified so that different parameterization strategies can be adopted in practical applications. These conclusions help to better understand the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.
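The simulated-annealing calibration step used above is a standard Metropolis loop and can be sketched generically. The misfit function below is a synthetic stand-in for a model-versus-flux-data objective, not BIOME-BGC; the target parameters, step size, and cooling schedule are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
target = np.array([0.3, 1.7])            # hypothetical "optimal" parameter values

def misfit(p):
    # Stand-in for the model-vs-flux-data objective function.
    return np.sum((p - target) ** 2)

p = np.array([2.0, -1.0])                # initial parameter guess
f, T = misfit(p), 1.0
for _ in range(4000):
    cand = p + rng.normal(scale=0.1, size=2)
    fc = misfit(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse moves.
    if fc < f or rng.random() < math.exp((f - fc) / T):
        p, f = cand, fc
    T = max(T * 0.999, 1e-3)             # geometric cooling with a floor
```

Running such a loop independently per site and per month, then comparing the resulting optima, is the mechanism behind the heterogeneity indices discussed above.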
A stochastic fractional dynamics model of space-time variability of rain
NASA Astrophysics Data System (ADS)
Kundu, Prasun K.; Travis, James E.
2013-09-01
Rain varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and on the Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.
Object-oriented model-driven control
NASA Technical Reports Server (NTRS)
Drysdale, A.; Mcroberts, M.; Sager, J.; Wheeler, R.
1994-01-01
A monitoring and control subsystem architecture has been developed that capitalizes on the use of model-driven monitoring and predictive control, knowledge-based data representation, and artificial reasoning in an operator support mode. We have developed an object-oriented model of a Controlled Ecological Life Support System (CELSS). The model, based on the NASA Kennedy Space Center CELSS breadboard data, tracks carbon, hydrogen, oxygen, carbon dioxide, and water. It estimates and tracks resource-related parameters such as mass, energy, and manpower, and measurements such as the growing area required for balance. We are developing an interface with the breadboard systems that is compatible with artificial reasoning. Initial work is being done on the use of expert systems and user interface development. This paper presents an approach to defining universally applicable CELSS monitoring and control issues, and to implementing appropriate monitoring and control capability for a particular instance: the KSC CELSS Breadboard Facility.
Magnetic field reconnection. [energy conversion in space plasma
NASA Technical Reports Server (NTRS)
Sonnerup, U. O.
1979-01-01
A reasonably detailed description is obtained of the current status of our understanding of magnetic field reconnection. The picture that emerges is of a process, simple in concept but extremely complicated and multifaceted in detail. Nonlinear MHD processes in the external flow region, governed by distant boundary conditions, are coupled to nonlinear microscopic plasma processes in the diffusion region, in a manner not clearly understood. It appears that reconnection may operate in entirely different ways for different plasma parameters and different external boundary conditions. Steady reconnection may be allowed in some cases, forbidden in others, with intermediate situations involving impulsive or pulsative events.
Calculation of Energetic Ion Tail from Ion Cyclotron Resonance Frequency Heating
NASA Astrophysics Data System (ADS)
Wang, Jianguo; Li, Youyi; Li, Jiangang
1994-04-01
The second-harmonic hydrogen ion cyclotron resonance heating experiment on the HT-6M tokamak was studied by adding the quasi-linear wave-ion interaction term to the two-dimensional (velocity space), time-dependent, nonlinear, multispecies Fokker-Planck equation. The temporal evolution of the ion distribution function and relevant parameters were calculated and compared with experimental data. The calculation shows that the ion temperature increases, and a high-energy ion tail (above 5 keV) and anisotropy appear, when the wave is injected into the plasma. The simulations are in reasonable agreement with the experimental data.
NΩ interaction from two approaches in lattice QCD
NASA Astrophysics Data System (ADS)
Etminan, Faisal; Firoozabadi, Mohammad Mehdi
2014-10-01
We compare the standard finite-volume method of Lüscher with the potential method of the HAL QCD collaboration, by calculating the ground-state energy of the N(nucleon)-Ω(Omega) system in the ⁵S₂ channel. We employ 2+1 flavor full QCD configurations on a (1.9 fm)³ × 3.8 fm lattice at lattice spacing a ≃ 0.12 fm, whose ud (s) quark mass corresponds to mπ = 875(1) MeV (mK = 916(1) MeV). We have found that both methods give reasonably consistent results, indicating one NΩ bound state at this parameter set.
NASA Astrophysics Data System (ADS)
De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.
2013-02-01
We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi- dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need of human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.
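The genetic-algorithm fitting loop at the heart of a tool like FitSKIRT can be sketched generically. In this illustration the "model image" is a two-parameter Gaussian profile standing in for a Monte Carlo radiative transfer simulation, and the GA uses plain truncation selection with Gaussian mutation; population size, mutation scale, and generation count are all illustrative assumptions, not FitSKIRT/GAlib settings.

```python
import numpy as np

rng = np.random.default_rng(6)
true_params = np.array([1.2, 0.4])                 # amplitude, width (assumed)
x = np.linspace(-2, 2, 64)
data = true_params[0] * np.exp(-x**2 / (2 * true_params[1] ** 2))

def fitness(p):
    # Negative sum-of-squares misfit between model and "observed" profile.
    model = p[0] * np.exp(-x**2 / (2 * p[1] ** 2))
    return -np.sum((model - data) ** 2)

# Random initial population within assumed parameter bounds.
pop = rng.uniform([0.1, 0.1], [3.0, 2.0], size=(40, 2))
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]        # truncation selection: keep top half
    children = parents[rng.integers(0, 20, 40)] + rng.normal(scale=0.05, size=(40, 2))
    pop = np.clip(children, 0.05, None)            # keep amplitude/width positive
best = pop[np.argmax([fitness(p) for p in pop])]
```

The GA's tolerance of noisy objectives is the property the abstract highlights: the selection step only needs a ranking of candidates, so Monte Carlo noise in the fitness degrades it gracefully.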
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chang; Deng, Na; Wang, Haimin
Adverse space-weather effects can often be traced to solar flares, the prediction of which has drawn significant research interest. The Helioseismic and Magnetic Imager (HMI) produces full-disk vector magnetograms with continuous high cadence, while flare prediction efforts utilizing this unprecedented data source are still limited. Here we report results of flare prediction using physical parameters provided by the Space-weather HMI Active Region Patches (SHARP) and related data products. We survey X-ray flares that occurred from 2010 May to 2016 December and categorize their source regions into four classes (B, C, M, and X) according to the maximum GOES magnitude of the flares they generated. We then retrieve SHARP-related parameters for each selected region at the beginning of its flare date to build a database. Finally, we train a machine-learning algorithm, called random forest (RF), to predict the occurrence of a certain class of flares in a given active region within 24 hr, evaluate the classifier performance using the 10-fold cross-validation scheme, and characterize the results using standard performance metrics. Compared to previous works, our experiments indicate that using the HMI parameters and RF is a valid method for flare forecasting with fairly reasonable prediction performance. To our knowledge, this is the first time that RF has been used to make multiclass predictions of solar flares. We also find that the total unsigned quantities of vertical current, current helicity, and flux near the polarity inversion line are among the most important parameters for classifying flaring regions into different classes.
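The pipeline described above (feature vectors, random forest, 10-fold cross-validation, feature importances) maps directly onto a few scikit-learn calls. The sketch below uses synthetic stand-in features and labels, not real SHARP/GOES data, and assumed hyperparameters; it only shows the shape of the workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 400
X = rng.normal(size=(n, 5))                     # stand-ins for SHARP parameters
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "flaring" label

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)      # 10-fold CV, as in the abstract
mean_acc = scores.mean()

# Feature ranking, analogous to the paper's importance analysis.
clf.fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
```

With real data, `X` would hold the SHARP quantities (unsigned vertical current, current helicity, flux near the polarity inversion line, etc.) and `y` the B/C/M/X class, which RF handles natively as a multiclass problem.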
Quantum and Ecosystem Entropies
NASA Astrophysics Data System (ADS)
Kirwan, A. D.
2008-06-01
Ecosystems and quantum gases share a number of superficial similarities, including enormous numbers of interacting elements and the fundamental role of energy in such interactions. A theory for the synthesis of data and prediction of new phenomena is well established in quantum statistical mechanics. The premise of this paper is that the reason a comparable unifying theory has not emerged in ecology is that a proper role for entropy has yet to be assigned. To this end, a phase space entropy model of ecosystems is developed. Specification of an ecosystem phase space cell size based on microbial mass, length, and time scales gives an ecosystem uncertainty parameter only about three orders of magnitude larger than Planck's constant. Ecosystem equilibrium is specified by conservation of biomass and total metabolic energy, along with the principle of maximum entropy at equilibrium. Both Bose-Einstein and Fermi-Dirac equilibrium conditions arise in ecosystem applications. The paper concludes with a discussion of some broader aspects of an ecosystem phase space.
NASA Technical Reports Server (NTRS)
Zerlaut, Gene A.; Gilligan, J. E.; Harada, Y.
1965-01-01
In a previous research program for the Jet Propulsion Laboratory, extensive studies led to the development and specification of three zinc oxide-pigmented thermal-control coatings. The principal objectives of this program are: improvement of the three paints (as engineering materials), determination of the validity of our accelerated space-simulation testing, and continuation of the zinc oxide photolysis studies begun in the preceding program. Specific tasks that are discussed include: improvement of potassium silicate coatings as engineering materials and elucidation of their storage and handling problems; improvement of methyl silicone coatings as engineering materials; studies of zinc oxide photolysis to establish reasons for the observed stability of zinc oxide; and determination of space-simulation parameters such as long-term stability (to 8000 ESH), the effect of coating surface temperature on the rate of degradation, and the validity of accelerated testing (by reciprocity and wavelength dependency studies).
Slowing of Bessel light beam group velocity
NASA Astrophysics Data System (ADS)
Alfano, Robert R.; Nolan, Daniel A.
2016-02-01
Bessel light beams experience diffraction-free propagation. A different basic spatial property of a Bessel beam is reported and investigated. It is shown that a Bessel beam acts as a natural waveguide, so that its group velocity can be subluminal (slower than the speed of light) when the optical frequency ω approaches a critical frequency ωc. A free-space dispersion relation for a Bessel beam, the dependence of its wave number on its angular frequency, is developed, from which the Bessel beam's subluminal group velocity is derived. It is shown that, under reasonable laboratory conditions, a Bessel light beam has associated parameters that allow slowing near the critical frequency. The application of Bessel beams with a 1 μm spot size to produce delays of 100 ps to 200 ps over a 1 cm length, as a natural optical buffer in free space, is presented.
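The waveguide analogy can be made quantitative. With the transverse wavenumber k_r fixed by the beam's cone angle, the free-space dispersion relation is ω² = c²(k_z² + k_r²), so the on-axis group velocity is v_g = dω/dk_z = c·sqrt(1 − (ωc/ω)²) with critical frequency ωc = c·k_r, which is always below c and drops sharply as ω → ωc. The 1 μm transverse scale echoes the abstract's spot size; the choice of operating frequency below is an illustrative assumption.

```python
import math

c = 2.998e8                      # speed of light, m/s
k_r = 2 * math.pi / 1e-6         # transverse wavenumber for ~1 um transverse structure
omega_c = c * k_r                # critical (cutoff-like) angular frequency

def group_velocity(omega):
    # v_g = c * sqrt(1 - (omega_c / omega)^2), from omega^2 = c^2 (k_z^2 + k_r^2).
    return c * math.sqrt(1.0 - (omega_c / omega) ** 2)

omega = 1.05 * omega_c           # operate just above the critical frequency (assumed)
v_g = group_velocity(omega)
delay_per_cm = 0.01 / v_g - 0.01 / c   # extra delay over 1 cm vs. vacuum light
```

For this operating point the extra delay over 1 cm is some tens of picoseconds; pushing ω closer to ωc increases it toward the 100-200 ps range the abstract quotes.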
A situated reasoning architecture for space-based repair and replace tasks
NASA Technical Reports Server (NTRS)
Bloom, Ben; Mcgrath, Debra; Sanborn, Jim
1989-01-01
Space-based robots need low level control for collision detection and avoidance, short-term load management, fine-grained motion, and other physical tasks. In addition, higher level control is required to focus strategic decision making as missions are assigned and carried out. Reasoning and control must be responsive to ongoing changes in the environment. Research aimed at bridging the gap between high level artificial intelligence (AI) planning techniques and task-level robot programming for telerobotic systems is described. Situated reasoning is incorporated into AI and robotics systems in order to coordinate a robot's activity within its environment. An integrated system under development in a component maintenance domain is described. It is geared towards replacing worn and/or failed Orbital Replacement Units (ORUs), designed for use aboard NASA's Space Station Freedom, based on the collection of components available at a given time. The high-level control reasons in component space in order to maximize the number of operational component-cells over time, while the task level controls sensors and effectors, detects collisions, and carries out pick and place tasks in physical space. Situated reasoning is used throughout the system to cope with component failures, imperfect information, and unexpected events.
Acoustic energy relations in Mudejar-Gothic churches.
Zamarreño, Teófilo; Girón, Sara; Galindo, Miguel
2007-01-01
Extensive objective energy-based parameters have been measured in 12 Mudejar-Gothic churches in the south of Spain. Measurements took place in unoccupied churches according to the ISO 3382 standard. Monaural objective measures in the 125-4000 Hz frequency range and their spatial distributions were obtained. The acoustic parameters clarity (C80), definition (D50), sound strength (G), and center time (Ts) were deduced using impulse response analysis through a maximum-length-sequence measurement system in each church. These parameters, spectrally averaged according to the most widely used criteria for rating the acoustic quality of auditoria, were studied as a function of source-receiver distance. The experimental results were compared with predictions given by classical and other existing theoretical models proposed for concert halls and churches. An analytical semi-empirical model based on the measured values of the C80 parameter is proposed in this work for these spaces. The good agreement between predicted values and experimental data for definition, sound strength, and center time in the churches analyzed shows that the model can be used for design predictions and other purposes with reasonable accuracy.
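The energy-based parameters named above are textbook ISO 3382 ratios computed from a measured impulse response. The sketch below evaluates C80 (early-to-late energy ratio at 80 ms), D50 (early energy fraction at 50 ms), and Ts (center time) on a synthetic exponentially decaying noise impulse response, which stands in for a measured church response; the decay rate is an assumption corresponding to roughly a 2.3 s reverberation time.

```python
import numpy as np

fs = 44100
t = np.arange(0, 2.0, 1 / fs)
# Synthetic impulse response: exponentially decaying noise (not measured data).
h = np.exp(-3.0 * t) * np.random.default_rng(5).normal(size=t.size)

e = h ** 2                        # instantaneous energy
i80 = int(0.080 * fs)             # 80 ms boundary for clarity
i50 = int(0.050 * fs)             # 50 ms boundary for definition

C80 = 10 * np.log10(e[:i80].sum() / e[i80:].sum())   # early/late ratio, dB
D50 = e[:i50].sum() / e.sum()                        # early energy fraction
Ts = (t * e).sum() / e.sum()                         # center time, seconds
```

For a pure exponential decay these three quantities are tied to the decay rate alone, which is why a semi-empirical model anchored on measured C80 values can predict D50, G, and Ts with reasonable accuracy.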
Research on Reasonable Well Spacing of Gas Wells in Deep, Low-Permeability Gas Reservoirs
NASA Astrophysics Data System (ADS)
Bei, Yu Bei; Hui, Li; Lin, Li Dong
2018-06-01
The Gs64 gas reservoir is a relatively intact condensate gas reservoir with low porosity and low permeability discovered in the Dagang Oilfield in recent years. Its condensate content is as high as 610 g/m3. At present, there are few reports on the well spacing of similar gas reservoirs at home or abroad. Determining a reasonable well spacing for this gas reservoir is therefore important for ensuring the optimal development effect and economic benefit of the gas field. This paper discusses reasonable well spacing for deep, low-permeability gas reservoirs from the perspectives of percolation mechanics, gas reservoir engineering, and numerical simulation. Considering that a start-up pressure gradient exists in the percolation process of low-permeability gas reservoirs, we combine the productivity equation under a start-up pressure gradient to establish a formula for gas well spacing in terms of the formation pressure and the start-up pressure gradient. Well spacings calculated by several methods are close to the well-test drainage radius, so the calculation method is reliable, which is very important for determining reasonable well spacing in low-permeability gas reservoirs.
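A back-of-the-envelope version of the start-up-gradient argument: gas stops flowing where the local pressure gradient falls to the threshold value, which bounds the drainage radius and hence the spacing. The linear screening estimate r_max ≈ Δp/λ below, and every number in it, are illustrative assumptions, not the paper's productivity-equation formula or its field values.

```python
# Illustrative numbers (assumed, not from the Gs64 study).
delta_p = 8.0e6        # drawdown between formation pressure and well pressure, Pa
lambda_ = 2.0e4        # start-up (threshold) pressure gradient, Pa/m

# Screening estimate: beyond r_max the available gradient cannot exceed lambda_.
r_max = delta_p / lambda_          # outer limit of mobilized gas, m

# Neighboring wells placed so their drainage areas just cover the reservoir.
well_spacing = 2 * r_max           # m
```

The paper's contribution is to replace this linear screening estimate with a formula derived from the productivity equation, and to cross-check it against gas reservoir engineering methods, numerical simulation, and well-test radii.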
Tunneling from the past horizon
NASA Astrophysics Data System (ADS)
Kang, Subeom; Yeom, Dong-han
2018-04-01
We investigate a tunneling and emission process of a thin shell from a Schwarzschild black hole, where the shell was initially located beyond the Einstein-Rosen bridge and finally appears at the right side of the Penrose diagram. In order to obtain such a solution, we should assume that the areal radius of the black hole horizon increases after the tunneling. Hence, there is a parameter range such that the tunneling rate is exponentially enhanced, rather than suppressed. We may have two interpretations regarding this. First, such a tunneling process from the past horizon may be improbable for physical reasons; second, such a tunneling may be possible in principle, but in order to obtain a stable Einstein-Rosen bridge, one needs to restrict the parameter space. If such a process is allowed, it can be a nonperturbative contribution to Einstein-Rosen bridges as well as to eternal black holes.
Efficiently Selecting the Best Web Services
NASA Astrophysics Data System (ADS)
Goncalves, Marlene; Vidal, Maria-Esther; Regalado, Alfredo; Yacoubi Ayadi, Nadia
Emerging technologies and linked-data initiatives have motivated the publication of a large number of datasets, and provide the basis for publishing Web services and tools to manage the available data. This wealth of resources opens a world of possibilities to satisfy user requests. However, Web services may have similar functionality and exhibit different performance; therefore, it is necessary to identify, among the Web services that satisfy a user request, the ones with the best quality. In this paper we propose a hybrid approach that combines reasoning tasks with ranking techniques to select the Web services that best implement a user request. Web service functionalities are described in terms of input and output attributes annotated with existing ontologies, non-functional properties are represented as Quality of Service (QoS) parameters, and user requests correspond to conjunctive queries whose sub-goals impose restrictions on the functionality and quality of the services to be selected. The ontology annotations are used in different reasoning tasks to infer implicit service properties and to augment the size of the service search space. Furthermore, the QoS parameters are considered by a ranking metric to classify the services according to how well they meet a user's non-functional condition. We assume that all the QoS parameters of the non-functional condition are equally important, and apply the Top-k Skyline approach to select the k services that best meet this condition. Our proposal relies on a two-fold solution: a deduction-based engine that performs different reasoning tasks to discover the services that satisfy the requested functionality, and an efficient implementation of the Top-k Skyline approach to compute the top-k services that meet the majority of the QoS constraints.
Our Top-k Skyline solution exploits the properties of the Skyline Frequency metric and identifies the top-k services by analyzing only a subset of the services that meet the non-functional condition. We report on the effects of the proposed reasoning tasks, the quality of the top-k services selected by the ranking metric, and the performance of the proposed ranking techniques. Our results suggest that the number of candidate services can be augmented by up to two orders of magnitude. In addition, our ranking techniques are able to identify services that have the best values in at least half of the QoS parameters, while performance is improved.
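The Skyline Frequency ranking described above can be sketched in Python. The implementation below is a naive illustration rather than the authors' optimized algorithm: it enumerates every non-empty QoS subspace (feasible only for a handful of QoS dimensions) and ranks services by how often they appear in a subspace skyline; higher QoS values are assumed to be better.

```python
from itertools import combinations

def dominates(a, b, dims):
    # a dominates b on dims: at least as good everywhere, strictly better somewhere
    return all(a[i] >= b[i] for i in dims) and any(a[i] > b[i] for i in dims)

def skyline(points, dims):
    # points not dominated by any other point in the chosen QoS subspace
    return [p for p in points
            if not any(dominates(q, p, dims) for q in points if q is not p)]

def skyline_frequency(points, d):
    # number of non-empty QoS subspaces in which each point is a skyline point
    freq = {id(p): 0 for p in points}
    for r in range(1, d + 1):
        for dims in combinations(range(d), r):
            for p in skyline(points, dims):
                freq[id(p)] += 1
    return freq

def top_k_skyline(points, d, k):
    # rank by Skyline Frequency and keep the k best services
    freq = skyline_frequency(points, d)
    return sorted(points, key=lambda p: freq[id(p)], reverse=True)[:k]
```

With QoS vectors (availability, reliability) such as (0.9, 0.95), (0.5, 0.95), and (0.4, 0.4), the first service is in the skyline of all three subspaces and is ranked first.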
NASA Astrophysics Data System (ADS)
Baumstark-Khan, C.; Hellweg, C. E.; Arenz, A.
The combined action of ionizing radiation and microgravity will continue to influence future space missions, with special risks for astronauts on the lunar surface or on long-duration missions to Mars. Previous space flight experiments have reported additive interactions of radiation and microgravity (neither sensitization nor protection) as well as synergistic interactions (an increased radiation effect under microgravity) in different cell systems. Although a direct effect of microgravity on enzymatic mechanisms can be excluded on thermodynamic grounds, modifications of cellular repair cannot be excluded, since such processes are under the control of cellular signal transduction systems, which are in turn controlled by environmental parameters, presumably also by gravity. DNA repair studies in space on bacteria, yeast cells, and human fibroblasts irradiated before flight gave contradictory results, ranging from inhibition of repair by microgravity to enhancement, whereas other studies did not detect any influence of microgravity on repair. At the Radiation Biology Department of the German Aerospace Center (DLR), recombinant bacterial and mammalian cell systems were developed as reporters for the modulation of cellular signal transduction by genotoxic environmental conditions. The space experiment CERASP (Cellular Responses to Radiation in Space), to be performed on the International Space Station (ISS), will make use of such reporter cell lines, thereby supplying basic information on the cellular response to radiation applied in microgravity. One of the biological endpoints will be survival.
Deductive Derivation and Turing-Computerization of Semiparametric Efficient Estimation
Frangakis, Constantine E.; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan
2015-01-01
Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save substantial human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and on the correct verification of such conjectures. The guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., it is not guaranteed to succeed (e.g., it is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not require conjecturing or otherwise theoretically deriving the functional form of the EIF, and it is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182
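The observation that the EIF is, in principle, a derivative can be illustrated numerically. The sketch below is a simplified illustration, not the authors' deductive procedure: it approximates the influence function of the mean functional by a Gateaux derivative of the empirical distribution perturbed toward a point mass. For the mean, the result should match the known EIF, namely x minus the mean.

```python
import numpy as np

def functional_mean(weights, data):
    """The target parameter as a functional of a weighted empirical distribution."""
    return float(np.sum(weights * data))

def numerical_eif(data, functional, eps=1e-6):
    """Approximate the influence function at each observation i via the Gateaux
    derivative d/dt functional((1-t)*P_n + t*delta_{x_i}) evaluated at t=0."""
    n = len(data)
    base_w = np.full(n, 1.0 / n)
    psi = functional(base_w, data)
    eif = np.empty(n)
    for i in range(n):
        w = (1.0 - eps) * base_w  # shift mass eps from P_n to the point mass at x_i
        w[i] += eps
        eif[i] = (functional(w, data) - psi) / eps
    return eif

data = np.array([1.0, 2.0, 3.0, 6.0])
eif = numerical_eif(data, functional_mean)  # should approximate data - mean(data)
```

This per-point numerical derivative is exactly the kind of computation that, as the abstract notes, does not by itself reveal the EIF's functional form; it only evaluates it at the observed points.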
Fractional Transport in Strongly Turbulent Plasmas.
Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana
2017-07-28
We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Lévy flights, and the energy distributions show extended power-law tails. We then motivate the use of, and derive the specific form of, a fractional transport equation (FTE), determine its parameters and the order of the fractional derivatives from the simulation data, and show that the FTE reproduces the high-energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or a FTE is appropriate.
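Determining FTE parameters from simulation data involves, among other steps, estimating the power-law tail index of the energy distributions. A standard way to do this is the maximum-likelihood (Hill-type) estimator for a Pareto tail; the sketch below uses synthetic data with a known exponent, not the authors' simulation output.

```python
import numpy as np

def pareto_tail_index(samples, x_min):
    """Maximum-likelihood (Hill-type) estimate of alpha for a density
    proportional to x^-(alpha+1) above x_min."""
    tail = samples[samples >= x_min]
    return tail.size / float(np.sum(np.log(tail / x_min)))

# Synthetic "energies" with a known power-law tail (alpha = 1.5, x_min = 1)
rng = np.random.default_rng(0)
samples = 1.0 + rng.pareto(1.5, 20000)
alpha_hat = pareto_tail_index(samples, 1.0)  # should be close to 1.5
```

In practice x_min must itself be chosen (e.g., by goodness-of-fit over candidate thresholds) before the exponent is read off.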
Investigation of Key Parameters of Rock Cracking Using the Expansion of Vermiculite Materials
Ahn, Chi-Hyung; Hu, Jong Wan
2015-01-01
The demand for the development of underground spaces has increased sharply because surface space has become saturated as urban populations have grown steadily since the 1980s. The traditional, widely used excavation methods (i.e., blasting and shield tunneling) cause many problems, such as noise, vibration, schedule delays, and increased costs. Vibration-free (and explosion-free) excavation methods have recently attracted attention at construction sites because they largely avoid these issues. For this reason, a new excavation method that exploits the expansion of vermiculite, with relatively few drawbacks, is proposed in this study. In general, vermiculite expands rapidly in volume when it receives thermal energy. Expansion pressure can be produced by thermal expansion of vermiculite in a steel tube and measured by laboratory tests. The experimental tests are performed with various influencing parameters in an effort to find the optimal condition for effectively increasing expansion pressure at the same temperature. Then, the calibrated expansion pressure is estimated and compared for each model. After analyzing the test results for expansion pressure, it is verified that vermiculite expanded by heat can provide enough internal pressure to break hard rock during tunneling work. PMID:28793610
Aerosol Remote Sensing From Space
NASA Astrophysics Data System (ADS)
Kokhanovsky, A.; Kinne, S.
2010-01-01
Determination of Atmospheric Aerosol Properties Using Satellite Measurements; Bad Honnef, Germany, 16-19 August 2009. Aerosol optical depth (AOD), a measure of how much light is attenuated by aerosol particles, provides scientists with information about the amount and type of aerosols in the atmosphere. Recent developments in aerosol remote sensing were the theme of a workshop held in Germany. The workshop was sponsored by the Wilhelm and Else Heraeus Foundation and attracted 67 participants from 12 countries. The workshop focused on the determination (retrieval) of AOD and its spectral dependence using measurements of changes to the solar radiation backscattered to space. The midvisible AOD is usually applied to define aerosol amount, while the size of aerosol particles is indicated by the AOD spectral dependence and is commonly expressed by the Angstrom parameter. Identical properties retrieved by different sensors, however, display significant diversity, especially over continents. A major reason for this is that the derivation of AOD requires more accurate determination of nonaerosol contributions to the sensed satellite signal than is usually available. In particular, surface reflectance data as a function of the viewing geometry and robust cloud-clearing methods are essential retrieval elements. In addition, the often-needed assumptions about aerosol properties in terms of absorption and size are further reasons for the discrepancy between different AOD measurements.
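The Angstrom parameter mentioned above follows from AOD measured at two wavelengths, under the assumed power-law dependence tau proportional to lambda^(-alpha). A minimal sketch (the wavelength and AOD values are illustrative):

```python
import math

def angstrom_exponent(tau1, lam1, tau2, lam2):
    """Angstrom exponent alpha from AOD at two wavelengths, tau ~ lam^-alpha."""
    return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

# Illustrative values: AOD 0.20 at 440 nm and 0.10 at 870 nm
alpha = angstrom_exponent(0.20, 440.0, 0.10, 870.0)  # roughly 1.0
```

Larger alpha indicates smaller (fine-mode) particles; alpha near zero indicates coarse particles such as dust or sea salt.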
On Possible Arc Inception on Low Voltage Solar Array
NASA Technical Reports Server (NTRS)
Vayner, Boris
2015-01-01
Recent analysis of spacecraft failures during the period 1990-2013 demonstrated clearly that electrostatic discharges caused more than 8 percent of all registered failures and anomalies, and comprised the most costly losses (25 percent) for operating companies and agencies. The electrostatic discharges on spacecraft surfaces are the result of differential charging above some critical (threshold) voltages. The mechanisms of differential charging are well known, and various methods have been developed to prevent the generation of significant electric fields in areas of triple junctions. For example, low bus voltages in the Low Earth Orbit plasma environment and a slightly conducting layer (ITO) over the coverglass in Geosynchronous Orbit surroundings are believed to be quite reliable measures to prevent discharges on the respective surfaces. In most cases, the vulnerable elements of spacecraft (solar arrays, diode boards, etc.) go through comprehensive ground tests in vacuum chambers. However, test articles contain only minuscule fragments of spacecraft components, such as 10-30 solar cells out of the many thousands deployed on a spacecraft in orbit. This is one reason why manufacturing defects may not be revealed in ground tests but expose themselves in arcing on the array surface in space. The other reason for the ineffectiveness of discharge preventive measures is the aging of all materials in harsh orbital environments. The expected lifetime of modern spacecraft varies within the range of five to fifteen years, and thermal cycling, radiation damage, and mechanical stresses can result in surface erosion on conductive layers and microscopic cracks in coverglass sheets and adhesive films. These possible damages may cause significant increases in local electric field strengths and subsequent discharges. The primary discharges may or may not be detrimental to spacecraft operation, but they can produce the necessary conditions for sustained arc initiation. 
Multiple measures were developed to prevent sustained discharges between adjacent strings, and many ground tests were performed to determine the threshold parameters (voltage and current) for sustained arcs. Again, manufacturing defects and aging in the space environment may result in a considerable decrease of the critical threshold parameters. This paper is devoted to the analysis of possible reasons behind arcing on spacecraft with low bus voltages.
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Zaky, M. A.
2015-01-01
In this paper, we propose and analyze an efficient operational formulation of the spectral tau method for multi-term time-space fractional differential equations with Dirichlet boundary conditions. The shifted Jacobi operational matrices of the Riemann-Liouville fractional integral and of the left-sided and right-sided Caputo fractional derivatives are presented. By using these operational matrices, we propose a shifted Jacobi tau method for both temporal and spatial discretizations, which allows us to present an efficient spectral method for solving such problems. Furthermore, the error is estimated, and the proposed method achieves reasonable convergence rates in the spatial and temporal discretizations. In addition, some known spectral tau approximations can be derived as special cases from our algorithm by suitably choosing the corresponding special cases of the Jacobi parameters θ and ϑ. Finally, in order to demonstrate its accuracy, we compare our method with those reported in the literature.
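The operational matrices act on shifted Jacobi polynomials; the identity underlying them is the Riemann-Liouville fractional integral of a monomial, I^alpha x^n = Gamma(n+1)/Gamma(n+1+alpha) x^(n+alpha). A minimal sketch applying this identity term-by-term to a polynomial (an illustration of the formula, not the paper's tau solver):

```python
from math import gamma

def rl_fractional_integral_poly(coeffs, alpha):
    """Apply the Riemann-Liouville fractional integral of order alpha term-by-term
    to sum_n coeffs[n] * x**n, using I^alpha x^n = Gamma(n+1)/Gamma(n+1+alpha)
    * x^(n+alpha). Returns (coefficient, exponent) pairs, since the exponents
    become non-integer."""
    return [(c * gamma(n + 1) / gamma(n + 1 + alpha), n + alpha)
            for n, c in enumerate(coeffs)]

# Half-order integral of f(x) = x: coefficient Gamma(2)/Gamma(2.5), exponent 1.5
terms = rl_fractional_integral_poly([0.0, 1.0], 0.5)
```

As a sanity check, alpha = 1 recovers the ordinary integral: I^1 x = x^2/2.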
NASA Technical Reports Server (NTRS)
Jones, J. J.; Winn, W. P.; Hunyady, S. J.; Moore, C. B.; Bullock, J. W.
1990-01-01
During the fall of 1988, a Schweizer airplane equipped to measure electric field and other meteorological parameters flew over Kennedy Space Center (KSC) in a program to study clouds defined in the existing launch restriction criteria. A case study is presented of a single flight over KSC on November 4, 1988. This flight was chosen for two reasons: (1) the clouds were weakly electrified, and no lightning was reported during the flight; and (2) electric field mills in the surface array at KSC indicated field strengths greater than 3 kV/m, yet the aircraft flying directly over them at an altitude of 3.4 km above sea level measured field strengths of less than 1.6 kV/m. A weather summary, sounding description, record of cloud types, and an account of electric field measurements are included.
Parametric cost estimation for space science missions
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost-estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs of future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
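A mass- and power-based CER of the kind described is typically a power law in the driving parameters. The sketch below is generic; its coefficients are made-up placeholders, not Northrop Grumman's calibrated values.

```python
def parametric_cost(mass_kg, power_w, a=0.5, b=0.7, c=0.2, d=0.6):
    """Illustrative mass- and power-based CER: cost ($M) = a*mass^b + c*power^d.
    The coefficients a, b, c, d are placeholders; real CERs are fit by
    regression against the cost histories of previous missions."""
    return a * mass_kg ** b + c * power_w ** d

# Exponents below 1 encode economies of scale: doubling mass less than doubles cost
cost = parametric_cost(100.0, 500.0)
```

In a real model the fit would also carry an uncertainty range, so that a most probable cost and bounds can be quoted rather than a single point estimate.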
NASA Technical Reports Server (NTRS)
Arnold, William R.
2015-01-01
Since last year, a number of expanded capabilities have been added to the modeler. To support integration with thermal modeling, the program can now produce simplified thermal models with the same geometric parameters as the more detailed dynamic and even more refined stress models. The local mesh refinement and mesh improvement tools have been expanded and made more user-friendly. The goal is to provide a means of evaluating both monolithic and segmented mirrors to the same level of fidelity and loading conditions with reasonable manpower effort. The paper demonstrates most of these new capabilities.
A joint analysis of the Drake equation and the Fermi paradox
NASA Astrophysics Data System (ADS)
Prantzos, Nikos
2013-07-01
I propose a unified framework for a joint analysis of the Drake equation and the Fermi paradox, which enables a simultaneous, quantitative study of both of them. The analysis is based on a simplified form of the Drake equation and on a fairly simple scheme for the colonization of the Milky Way. It appears that for sufficiently long-lived civilizations, colonization of the Galaxy is the only reasonable option to gain knowledge about other life forms. This argument allows one to define a region in the parameter space of the Drake equation, where the Fermi paradox definitely holds (`Strong Fermi paradox').
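The analysis rests on the classic Drake product together with a galaxy-colonization timescale. A minimal sketch (the paper uses a simplified form of the equation; the factor values below are arbitrary examples, not the paper's estimates):

```python
def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_yr):
    """Classic Drake product: expected number of communicating civilizations."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_yr

def colonization_time_yr(radius_ly, front_speed_ly_per_yr):
    """Rough timescale for a settlement front to cross the Galactic disk."""
    return radius_ly / front_speed_ly_per_yr

# If a civilization's lifetime L exceeds the colonization time, the absence of
# visits or signals becomes hard to explain: the 'Strong Fermi paradox' regime.
n = drake_n(1.0, 0.5, 2.0, 1.0, 0.1, 0.1, 1000.0)
t_col = colonization_time_yr(50000.0, 0.01)
```

Scanning L and the other factors over a grid of such values is one way to map out the region of parameter space where the strong form of the paradox holds.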
Hands-on parameter search for neural simulations by a MIDI-controller.
Eichner, Hubert; Borst, Alexander
2011-01-01
Computational neuroscientists frequently encounter the challenge of parameter fitting: exploring a usually high-dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is to use automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer from several shortcomings related to their lack of understanding of the underlying question, such as the difficulty of defining a suitable error function or the tendency to get stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback on whether and how results are affected by specific parameter changes. In addition to its use in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems.
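The knob-to-parameter mapping at the heart of this approach can be sketched as follows. The event loop is given only as comments, since it assumes the third-party mido library and connected hardware; the mapping function itself is self-contained, with linear and logarithmic scaling (the latter is natural for parameters spanning several decades, such as rate constants).

```python
def midi_to_param(cc_value, lo, hi, log_scale=False):
    """Map a MIDI control-change value (0-127) onto a parameter range [lo, hi]."""
    frac = cc_value / 127.0
    if log_scale:
        # logarithmic mapping: equal knob travel multiplies the parameter equally
        return lo * (hi / lo) ** frac
    return lo + frac * (hi - lo)

# Sketch of the event loop; requires the 'mido' package and a MIDI device,
# so it is left as comments (rerun_simulation is a hypothetical update hook):
# import mido
# params = {}
# with mido.open_input() as port:
#     for msg in port:
#         if msg.type == 'control_change':
#             params[msg.control] = midi_to_param(msg.value, 0.0, 1.0)
#             rerun_simulation(params)
```

Each physical knob (identified by its controller number) is thus bound to one model parameter, and every twist triggers a re-simulation and re-plot.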
Seamless variation of isometric and anisometric dynamical integrity measures in basins' erosion
NASA Astrophysics Data System (ADS)
Belardinelli, P.; Lenci, S.; Rega, G.
2018-03-01
Anisometric integrity measures are introduced as an improvement and generalization of two existing measures (LIM, local integrity measure, and IF, integrity factor) of the extent and compactness of basins of attraction. Non-equidistant measures make it possible to account for inhomogeneous sensitivities of the state-space variables to perturbations, thus permitting a more confident and targeted identification of the safe regions. All four measures are used for a global dynamics analysis of the twin-well Duffing oscillator, which is performed by considering a nearly continuous variation of a governing control parameter, thanks to parallel computation that keeps CPU time reasonable. This improves on literature results based on finite (and commonly large) variations of the parameter, imposed by computational constraints. The seamless evolution of key integrity measures highlights the fine aspects of the erosion of the safe domain with respect to increasing forcing amplitude.
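The local integrity measure (LIM) is the radius of the largest ball around the safe attractor that lies entirely inside the safe basin. On a discretized basin this reduces to the distance to the nearest unsafe cell. A minimal isometric sketch on a toy grid (the anisometric generalization would rescale each state-space axis before computing distances):

```python
import numpy as np

def local_integrity_measure(safe_mask, center, cell_size=1.0):
    """LIM on a discretized basin: distance from `center` (grid indices) to the
    nearest unsafe cell, i.e. the radius of the largest disk of safe states."""
    ii, jj = np.indices(safe_mask.shape)
    dist = np.hypot(ii - center[0], jj - center[1]) * cell_size
    unsafe = ~safe_mask
    if not unsafe.any():
        return float(dist.max())  # no erosion anywhere on the sampled grid
    return float(dist[unsafe].min())

# Toy basin: a 9x9 grid, fully safe except one eroded cell two cells from center
mask = np.ones((9, 9), dtype=bool)
mask[2, 4] = False
lim = local_integrity_measure(mask, (4, 4))  # limited by the eroded cell
```

Tracking this quantity over a fine sweep of the forcing amplitude is what produces the seamless erosion profiles discussed above.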
Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste
NASA Astrophysics Data System (ADS)
Batyaev, V. F.; Skliarov, S. V.
2018-01-01
The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK-type containers with RAW and to determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration), and measurement time. As a result, the dependence of the minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the spatial heterogeneity of the minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers using the active neutron interrogation technique.
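Converting detector counts into a minimal detectable mass typically involves a detection-limit criterion such as Currie's. The sketch below combines the standard Currie formula with an assumed position-dependent per-gram detector response; it is generic detection-limit arithmetic, not the paper's transport model.

```python
import math

def currie_detection_limit(background_counts):
    """Currie's detection limit (approx. 95% confidence), in counts."""
    return 2.71 + 4.65 * math.sqrt(background_counts)

def minimal_detectable_mass(background_counts, counts_per_g_per_s, t_meas_s):
    """Smallest fissile mass (g) whose net signal exceeds the Currie limit.
    counts_per_g_per_s is the position-dependent detector response per gram,
    which falls where the local thermal-neutron flux is weak."""
    return currie_detection_limit(background_counts) / (counts_per_g_per_s * t_meas_s)

# A weaker local flux (smaller response) raises the minimal detectable mass
mdm_center = minimal_detectable_mass(100.0, 10.0, 60.0)
mdm_corner = minimal_detectable_mass(100.0, 2.0, 60.0)
```

This is exactly the mechanism behind the spatial heterogeneity noted above: the same background and measurement time give different detectable masses at different positions in the container.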
OTD Observations of Continental US Ground and Cloud Flashes
NASA Technical Reports Server (NTRS)
Koshak, William
2007-01-01
Lightning optical flash parameters (e.g., radiance, area, duration, number of optical groups, and number of optical events) derived from almost five years of Optical Transient Detector (OTD) data are analyzed. Hundreds of thousands of OTD flashes occurring over the continental US are categorized according to flash type (ground or cloud flash) using US National Lightning Detection Network (NLDN) data. The statistics of the optical characteristics of the ground and cloud flashes are inter-compared on an overall basis, and as a function of ground flash polarity. A standard two-distribution hypothesis test is used to inter-compare the population means of a given lightning parameter for the two flash types. Given the differences in the statistics of the optical characteristics, it is suggested that statistical analyses (e.g., Bayesian inference) of the space-based optical measurements might make it possible to successfully discriminate ground and cloud flashes a reasonable percentage of the time.
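A standard two-distribution test of population means, as used here, reduces with large samples to a z statistic on the difference of sample means. A minimal sketch with illustrative numbers (not the actual OTD/NLDN statistics):

```python
import math

def two_sample_z(m1, s1, n1, m2, s2, n2):
    """z statistic for H0: equal population means, two large independent samples
    with means m, standard deviations s, and sizes n."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Illustrative values, e.g. mean optical radiance of ground vs cloud flashes
z = two_sample_z(10.0, 4.0, 400, 9.0, 5.0, 900)  # |z| > 1.96 rejects H0 at 5%
```

With hundreds of thousands of flashes per class, even small mean differences in a parameter yield large z values, which is what makes per-parameter discrimination statistics meaningful here.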
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state-space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors, using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
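The SISR scheme can be sketched as a bootstrap particle filter: propagate particles through the population process, weight them by the observation likelihood, and resample. The model form and noise levels below are illustrative only, and the kernel-smoothing step used above for joint parameter estimation is omitted.

```python
import numpy as np

def sisr_filter(counts, n_particles=2000, growth_sd=0.1, obs_sd=20.0, seed=1):
    """Bootstrap particle filter (sequential importance sampling with resampling)
    for a stochastic-growth population model with Gaussian count observations."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(0.5, 1.5, n_particles) * counts[0]  # vague prior
    estimates = []
    for y in counts:
        # propagate through the (log-normal) population growth process
        particles = particles * np.exp(rng.normal(0.0, growth_sd, n_particles))
        # weight by the observation likelihood
        w = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)
        w = w / w.sum() if w.sum() > 0 else np.full(n_particles, 1.0 / n_particles)
        estimates.append(float(np.sum(w * particles)))
        # resample to counter weight degeneracy (the "R" in SISR)
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return estimates

est = sisr_filter([100.0] * 10)  # constant counts: estimates should stay near 100
```

Informative priors enter through the initial particle cloud and the process parameters; tightening either is what produced the accuracy gains reported above.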
Application of Space Shuttle photography to studies of upper ocean dynamics
NASA Technical Reports Server (NTRS)
Zheng, Quanan; Klemas, Vic; Yan, Xiao-Hai; Wang, Zongming
1995-01-01
Three studies have been conducted using space shuttle imagery to examine the dynamic behavior of internal waves in the Atlantic and Indian Oceans and to derive tide-related parameters for Delaware Bay. By interpreting space shuttle photographs taken during mission STS-40, a total of 34 internal wave packets on the continental shelf of the Middle Atlantic Bight were recognized. Using finite-depth theory, we derived a maximum soliton amplitude of 5.6 m, a phase speed of 0.42 m/s, and a period of 23.8 min. Deep-ocean internal waves in the western equatorial Indian Ocean on photographs taken during mission STS-44 were also interpreted and analyzed. The internal waves occurred in the form of a multisoliton packet containing about a dozen solitons. The average wavelength of the solitons is 1.8 +/- 0.5 km. The crest lines are mostly straight and reach lengths of up to 100 km. The distance between two adjacent packets is about 66 km. Using deepwater soliton theory, we derived a mean soliton amplitude of 25 m, a nonlinear phase speed of 1.7 m/s, and an average period of 18 min. For both cases, the semidiurnal tides are the principal generating mechanism. The tide-related parameters of Delaware Bay were derived from space shuttle time-series photographs taken during mission STS-40. The water area in the bay was measured from interpretation maps of the photographs. The corresponding tidal levels were calculated using the exposure times. From these data, an approximate function relating the water area to the tidal level at a reference point was determined. The water areas of Delaware Bay at mean high water (MHW) and mean low water (MLW), the area below 0 m, the area of the tidal zone, and the tidal flux were then inferred. All derived parameters were reasonable and compared well with results of previous investigations.
Femtosecond laser for cavity preparation in enamel and dentin: ablation efficiency related factors.
Chen, H; Li, H; Sun, Yc; Wang, Y; Lü, Pj
2016-02-11
To study the effects of laser fluence (laser energy density), scanning line spacing and ablation depth on the efficiency of a femtosecond laser for three-dimensional ablation of enamel and dentin. A diode-pumped, thin-disk femtosecond laser (wavelength 1025 nm, pulse width 400 fs) was used for the ablation of enamel and dentin. The laser spot was guided in a series of overlapping parallel lines on enamel and dentin surfaces to form a three-dimensional cavity. The depth and volume of the ablated cavity were then measured under a 3D measurement microscope to determine the ablation efficiency. Different values of fluence, scanning line spacing and ablation depth were used to assess the effects of each variable on ablation efficiency. Ablation efficiencies for enamel and dentin were maximized at particular laser fluences and scanning line spacings, and decreased with increases in laser fluence, with increases in scanning line spacing beyond the spot diameter, or with increases in ablation depth. Laser fluence, scanning line spacing and ablation depth all significantly affected femtosecond laser ablation efficiency. Reasonable control of each of these parameters will improve future clinical application.
NASA Astrophysics Data System (ADS)
Nguyen, Sy Dzung; Nguyen, Quoc Hung; Choi, Seung-Bok
2015-01-01
This paper presents a new algorithm, called B-ANFIS, for building an adaptive neuro-fuzzy inference system (ANFIS) from a training data set. In order to increase the accuracy of the model, the following steps are executed. First, a data-merging rule is proposed to build and perform a data-clustering strategy. Subsequently, a combination of clustering processes in the input data space and in the joint input-output data space is presented. The crucial purpose of this step is to overcome problems related to initialization and contradictory fuzzy rules, which usually arise when building an ANFIS. The clustering process in the input data space is accomplished based on a proposed merging-possibilistic clustering (MPC) algorithm. The effectiveness of this process is evaluated before the clustering process resumes in the joint input-output data space. The optimal parameters obtained after completion of the clustering process are used to build the ANFIS. Simulations based on a numerical data set, 'Daily Data of Stock A', and measured data sets from a smart damper are performed to analyze and estimate accuracy. In addition, the convergence and robustness of the proposed algorithm are investigated based on both theoretical and testing approaches.
78 FR 28218 - Appraisal Subcommittee; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-14
... reasonable security measures. The meeting space is intended to accommodate public attendees. However, if the space will not accommodate all requests, the ASC may refuse attendance on that reasonable basis. The use.... Park, Executive Director. [FR Doc. 2013-11375 Filed 5-13-13; 8:45 am] BILLING CODE P ...
Evaluation of gamma dose effect on PIN photodiode using analytical model
NASA Astrophysics Data System (ADS)
Jafari, H.; Feghhi, S. A. H.; Boorboor, S.
2018-03-01
PIN silicon photodiodes are widely used in applications that may be exposed to radiation environments, such as space missions, medical imaging and non-destructive testing. Radiation-induced damage in these devices degrades the photodiode parameters. In this work, we have used a new approach to evaluate gamma dose effects on a commercial PIN photodiode (BPX65) based on an analytical model. In this approach, the NIEL parameter was calculated for gamma rays from a 60Co source using GEANT4. The radiation damage mechanisms were considered by numerically solving the Poisson and continuity equations with the appropriate boundary conditions, parameters and physical models. Defects caused by radiation in silicon were formulated in terms of the damage coefficient for the minority carriers' lifetime. The gamma-induced degradation parameters of the silicon PIN photodiode were analyzed in detail, and the results were compared with experimental measurements as well as with the results of the ATLAS semiconductor simulator to verify and parameterize the analytical model calculations. The results showed reasonable agreement for the BPX65 silicon photodiode irradiated by the 60Co gamma source at total doses up to 5 kGy under different reverse voltages.
Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.
Nicola, Wilten; Campbell, Sue Ann
2013-01-01
We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
NASA Technical Reports Server (NTRS)
Keyes, David E.; Smooke, Mitchell D.
1987-01-01
A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.
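As a rough illustration of the kind of cost modeling described above, the toy function below estimates per-iteration time for a stripwise decomposition: O(n^2/P) local work plus a halo exchange with the two neighboring processors. The constants and the two-term structure are invented stand-ins for the paper's 15-parameter cost function.

```python
# Toy per-iteration cost model for a stripwise decomposition: each of P
# processors owns an n x (n/P) strip of the mesh, does O(n^2/P) local
# work per Newton iteration, and exchanges one column of n boundary
# values with each of its two neighbors. All constants are invented
# stand-ins for the paper's 15-parameter cost function.
def time_per_iter(n, P, t_flop=1e-9, flops_per_pt=200.0,
                  t_msg=1e-5, t_word=1e-8):
    compute = flops_per_pt * n * n / P * t_flop
    comm = 0.0 if P == 1 else 2 * (t_msg + n * t_word)
    return compute + comm

n = 512
speedup = time_per_iter(n, 1) / time_per_iter(n, 16)
print(f"modeled speedup on 16 processors for n = {n}: {speedup:.1f}x")
```

Because the communication term grows only linearly in n while the compute term grows quadratically, the modeled speedup approaches P as the mesh is refined, consistent with the O(n) speedup claim in the abstract.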
Comparison of Cone Model Parameters for Halo Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Na, Hyeonock; Moon, Y.-J.; Jang, Soojeong; Lee, Kyoung-Sun; Kim, Hae-Yeon
2013-11-01
Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms, hence their three-dimensional structures are important for space weather. We compare three cone models: an elliptical-cone model, an ice-cream-cone model, and an asymmetric-cone model. These models allow us to determine three-dimensional parameters of HCMEs such as radial speed, angular width, and the angle [γ] between the sky plane and the cone axis. We compare these parameters obtained from the three models using 62 HCMEs observed by SOHO/LASCO from 2001 to 2002. Then we obtain the root-mean-square (RMS) error between the highest measured projection speeds and the projection speeds calculated from the cone models. As a result, we find that the radial speeds obtained from the models are well correlated with one another (R > 0.8). The correlation coefficients between angular widths range from 0.1 to 0.48 and those between γ-values range from -0.08 to 0.47, which is much smaller than expected; the reason may lie in the models' different assumptions and methods. The RMS errors between the highest measured projection speeds and the highest estimated projection speeds of the elliptical-cone model, the ice-cream-cone model, and the asymmetric-cone model are 376 km s-1, 169 km s-1, and 152 km s-1, respectively. We also obtain the correlation coefficients between the source locations from the models and the flare locations (R > 0.45). Finally, we discuss the strengths and weaknesses of these models in terms of space-weather applications.
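The comparison metrics used above, RMS error against the highest measured projection speeds and the correlation between models, are straightforward to compute; the sketch below uses made-up speed arrays in place of the 62 SOHO/LASCO measurements.

```python
import numpy as np

# Illustrative stand-ins for measured HCME projection speeds and the
# speeds predicted by two cone models (km/s); not the actual data.
measured = np.array([900.0, 1200.0, 750.0, 1600.0, 1100.0])
model_a = np.array([880.0, 1300.0, 700.0, 1500.0, 1250.0])
model_b = np.array([950.0, 1150.0, 820.0, 1700.0, 1050.0])

# RMS error of each model against the measurements
rms_a = np.sqrt(np.mean((measured - model_a) ** 2))
rms_b = np.sqrt(np.mean((measured - model_b) ** 2))

# Pearson correlation between the two models' speeds
r_ab = np.corrcoef(model_a, model_b)[0, 1]

print(f"RMS(A) = {rms_a:.0f} km/s, RMS(B) = {rms_b:.0f} km/s, R = {r_ab:.2f}")
```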
BIOME: An Ecosystem Remote Sensor Based on Imaging Interferometry
NASA Technical Reports Server (NTRS)
Peterson, David L.; Hammer, Philip; Smith, William H.; Lawless, James G. (Technical Monitor)
1994-01-01
Until recently, optical remote sensing of ecosystem properties from space has been limited to broadband multispectral scanners such as Landsat and AVHRR. While these sensor data can be used to derive important information about ecosystem parameters, they are very limited for measuring key biogeochemical cycling parameters such as the chemical content of plant canopies. Such parameters, for example lignin and nitrogen contents, are potentially amenable to measurement by very high spectral resolution instruments using a spectroscopic approach. Airborne sensors based on grating imaging spectrometers gave the first promise of such potential, but the recent decision not to deploy the space version has left the community with few alternatives. In the past few years, advancements in high-performance deep-well digital sensor arrays, coupled with a patented design for a two-beam interferometer, have produced an entirely new design for acquiring imaging spectroscopic data at the signal-to-noise levels necessary for quantitatively estimating chemical composition (1000:1 at 2 microns). This design has been assembled as a laboratory instrument and the principles demonstrated for acquiring remote scenes. An airborne instrument is in production, and spaceborne sensors are being proposed. The instrument is extremely promising because of its low cost, low power requirements, very low weight, simplicity (no moving parts), and high performance. For these reasons, we have called it the first instrument optimized for ecosystem studies as part of a Biological Imaging and Observation Mission to Earth (BIOME).
Cortisol, insulin and leptin during space flight and bed rest
NASA Technical Reports Server (NTRS)
Stein, T. P.; Schluter, M. D.; Leskiw, M. J.
1999-01-01
Most ground based models for studying muscle atrophy and bone loss show reasonable fidelity to the space flight situation. However there are some differences. Investigation of the reasons for these differences can provide useful information about humans during space flight and aid in the refinement of ground based models. This report discusses three such differences, the relationships between: (i) cortisol and the protein loss, (ii) cortisol and ACTH and (iii) leptin, insulin and food intake.
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
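A Monte Carlo parameter search of the sort described can be sketched as follows. The "forward model" here is a deliberately simple stand-in for the musculoskeletal simulation (activation = load / (strength x scale)), and all loads, target activations, and parameter ranges are invented for illustration, not LifeModeler quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the forward simulation: predicted activation is
# load / (strength * scale), clipped to [0, 1]. Loads, target
# activations, and parameter ranges are all invented for illustration.
loads = np.array([800.0, 600.0, 400.0])      # hypothetical muscle loads
target_act = np.array([0.60, 0.45, 0.30])    # "observed" 1-g activations

def predicted_activation(strengths, scale=2000.0):
    return np.clip(loads / (strengths * scale), 0.0, 1.0)

# Monte Carlo search: sample strength scalings at random and keep the
# set whose predicted activations best match the targets.
best_err, best_s = np.inf, None
for _ in range(10000):
    s = rng.uniform(0.2, 2.0, size=3)
    err = float(np.sum((predicted_activation(s) - target_act) ** 2))
    if err < best_err:
        best_err, best_s = err, s

print("best strength parameters:", np.round(best_s, 2),
      "squared error:", round(best_err, 4))
```

Combinatorial reduction, as used in the study, would prune this random search by fixing parameters that the early trials show to have little effect on the match.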
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Baetz, B. W.; Huang, W.
2015-11-01
This paper presents a polynomial chaos ensemble hydrologic prediction system (PCEHPS) for an efficient and robust uncertainty assessment of model parameters and predictions, in which possibilistic reasoning is infused into probabilistic parameter inference with simultaneous consideration of randomness and fuzziness. The PCEHPS is developed through a two-stage factorial polynomial chaos expansion (PCE) framework, which consists of an ensemble of PCEs to approximate the behavior of the hydrologic model, significantly speeding up the exhaustive sampling of the parameter space. Multiple hypothesis testing is then conducted to construct an ensemble of reduced-dimensionality PCEs with only the most influential terms, which is meaningful for achieving uncertainty reduction and further acceleration of parameter inference. The PCEHPS is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability. A detailed comparison between the HYMOD hydrologic model, the ensemble of PCEs, and the ensemble of reduced PCEs is performed in terms of accuracy and efficiency. Results reveal temporal and spatial variations in parameter sensitivities due to the dynamic behavior of hydrologic systems, and the effects (magnitude and direction) of parametric interactions depending on different hydrological metrics. The case study demonstrates that the PCEHPS is capable not only of capturing both expert knowledge and probabilistic information in the calibration process, but also of achieving a speedup of more than 10 times relative to the hydrologic model without compromising predictive accuracy.
NASA Astrophysics Data System (ADS)
Ohata, Koji; Naruse, Hajime; Yokokawa, Miwa; Viparelli, Enrica
2017-11-01
Understanding the formative conditions of fluvial bedforms is important for both river management and geological studies. Diagrams showing bedform stability conditions have been widely used for the analysis of sedimentary structures. However, the use of discriminants to determine the boundaries between different bedform regimes has not yet been explored. In this study, we use discriminant functions to describe the formative conditions for a range of fluvial bedforms in a 3-D dimensionless parameter space. We do this by means of discriminant analysis using the Mahalanobis distance. We analyzed 3,793 available laboratory and field data points and used these to produce new bedform phase diagrams. These diagrams employ three dimensionless parameters representing properties of the flow hydraulics and sediment particles as their axes. The discriminant functions for bedform regimes proposed herein are quadratic functions of the three dimensionless parameters and are expressed as curved surfaces in 3-D space. These empirical functions can be used to estimate paleoflow velocities from sedimentary structures. As an example of the reconstruction of hydraulic conditions, we calculated the paleoflow velocity of the 2011 Tohoku-Oki tsunami backwash flow from the sedimentary structures of the tsunami deposit. In so doing, we successfully reconstructed reasonable values of the paleoflow velocities.
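Classification by Mahalanobis distance, the core of the discriminant analysis above, can be sketched as follows; the two synthetic "regime" clouds stand in for the 3,793 laboratory and field observations, and the cluster centers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic bedform "regimes" as clouds in a 3-D dimensionless
# parameter space (means and spreads are illustrative only).
ripples = rng.normal([0.2, 0.5, 1.0], 0.1, size=(200, 3))
dunes = rng.normal([0.8, 1.5, 2.0], 0.2, size=(200, 3))

def mahalanobis(x, sample):
    """Mahalanobis distance from point x to the sample's distribution."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample.T))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Classify a new observation to the nearer regime
x = np.array([0.25, 0.6, 1.1])
label = "ripples" if mahalanobis(x, ripples) < mahalanobis(x, dunes) else "dunes"
print(label)  # → ripples
```

The decision boundary between two such clouds, the locus where the two Mahalanobis distances are equal, is quadratic in x, which is why the study's discriminant surfaces are curved surfaces in the 3-D parameter space.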
A novel orbiter mission concept for venus with the EnVision proposal
NASA Astrophysics Data System (ADS)
de Oliveira, Marta R. R.; Gil, Paulo J. S.; Ghail, Richard
2018-07-01
In space exploration, planetary orbiter missions are essential to gain insight into planets as a whole, and to help uncover unanswered scientific questions. In particular, the planets closest to the Earth have been a privileged target of the world's leading space agencies. EnVision is a mission proposal designed for Venus and competing for ESA's next launch opportunity with the objective of studying Earth's closest neighbor. The main goal is to study geological and atmospheric processes, namely surface processes, interior dynamics and atmosphere, to determine the reasons behind Venus and Earth's radically different evolution despite the planets' similarities. To achieve these goals, the operational orbit selection is a fundamental element of the mission design process. The design of an orbit around Venus faces specific challenges, such as the impossibility of choosing Sun-synchronous orbits. In this paper, an innovative genetic algorithm optimization was applied to select the optimal orbit based on the parameters with more influence in the mission planning, in particular the mission duration and the coverage of sites of interest on the Venusian surface. The solution obtained is a near-polar circular orbit with an altitude of 259 km that enables the coverage of all priority targets almost two times faster than with the parameters considered before this study.
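The genetic-algorithm optimization described above can be sketched as a simple mutation-plus-selection loop. The fitness function below is an invented stand-in for the actual coverage/mission-duration figure of merit, constructed to peak near the reported 259 km near-polar solution.

```python
import numpy as np

rng = np.random.default_rng(7)

# Each individual is (altitude_km, inclination_deg). The fitness below
# is an invented stand-in for the coverage/mission-duration figure of
# merit, constructed to peak near a 259 km near-polar orbit.
def fitness(ind):
    alt, inc = ind
    return -((alt - 259.0) ** 2 / 100.0 + (inc - 88.0) ** 2)

pop = np.column_stack([rng.uniform(200, 600, 40),   # altitudes
                       rng.uniform(60, 120, 40)])   # inclinations
for _ in range(100):                                # generations
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)][-20:]         # keep the best half
    children = (parents[rng.integers(0, 20, 20)]
                + rng.normal(0.0, [5.0, 1.0], (20, 2)))  # mutation
    pop = np.vstack([parents, children])            # elitism + offspring

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"best orbit: altitude {best[0]:.0f} km, inclination {best[1]:.1f} deg")
```

This sketch is mutation-only; a full genetic algorithm would also recombine (cross over) parameter values between parents, but the select-vary-replace structure is the same.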
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
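The conventional SA acceptance rule described in the first paragraph can be sketched in a few lines: always accept improvements, accept worse moves with probability exp(-delta/T), and cool the temperature while shrinking the search region. The 1-D multimodal objective, cooling rate, and step schedule below are illustrative choices, not the RBSA implementation.

```python
import math
import random

random.seed(3)

# Minimize a 1-D multimodal function with the conventional SA rule:
# always accept improvements; accept worse moves with probability
# exp(-delta/T); cool the temperature and shrink the search step.
def objective(x):
    return x * x + 10.0 * math.sin(3.0 * x)   # several local minima

x, T, step = random.uniform(-10.0, 10.0), 10.0, 2.0
for _ in range(20000):
    cand = x + random.uniform(-step, step)
    delta = objective(cand) - objective(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
    T *= 0.9995                        # cooling schedule
    step = max(0.05, step * 0.9998)    # shrinking search region

print(f"found minimum near x = {x:.2f}, f(x) = {objective(x):.2f}")
```

RBSA would launch many such trajectories from branch points and maintain a per-parameter trust region in place of the single scalar step used here.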
A Stochastic Fractional Dynamics Model of Rainfall Statistics
NASA Astrophysics Data System (ADS)
Kundu, Prasun; Travis, James
2013-04-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous, isotropic, and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second-moment statistics of the radar data. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well without any further adjustment. Some data sets containing periods of non-stationary behavior, involving occasional anomalously correlated rain events, present a challenge for the model.
Altered bone turnover during spaceflight
NASA Technical Reports Server (NTRS)
Turner, R. T.; Morey, E. R.; Liu, C.; Baylink, D. J.
1982-01-01
Modifications in calcium metabolism during spaceflight were studied, using parameters that reflect bone turnover. Bone formation rate, medullary area, bone length, bone density, pore size distribution, and differential bone cell number were evaluated in growing rats both immediately after and 25 days after orbital spaceflights aboard the Soviet biological satellites Cosmos 782 and 936. The primary effect of spaceflight on bone turnover was a reversible inhibition of bone formation at the periosteal surface. A simultaneous increase in the length of the periosteal arrest line suggests that bone formation ceased along corresponding portions of that surface. Possible reasons include increased secretion of glucocorticoids and mechanical unloading of the skeleton due to near-weightlessness, while starvation and immobilization are excluded as causes.
Orbital Debris Characterization via Laboratory Optical Measurements
NASA Technical Reports Server (NTRS)
Cowardin, Heather
2011-01-01
Optical observations of orbital debris offer insights that differ from radar measurements (specifically in the size parameter, wavelength regime, and altitude range). For example, time-dependent photometric data yield lightcurves in multiple bandpasses that aid in material identification and in detecting possible periodic orientations. These data can also be used to help identify shapes and optical properties at multiple phase angles. Capitalizing on optical data products and applying them to generate a more complete understanding of orbital objects is a key objective of NASA's Optical Measurement Program, and the primary reason for the creation of the Optical Measurements Center (OMC). The OMC attempts to emulate space-based illumination conditions using equipment and techniques that parallel telescopic observations and source-target-sensor orientations.
Describing the geographic spread of dengue disease by traveling waves.
Maidana, Norberto Aníbal; Yang, Hyun Mo
2008-09-01
Dengue is a human disease transmitted by the mosquito Aedes aegypti; for this reason, geographical regions infested by this mosquito species are at risk of dengue outbreaks. In this work, we propose a mathematical model to study the spatial dissemination of dengue using a system of partial differential reaction-diffusion equations. With respect to the human and mosquito populations, we take into account their respective subclasses of infected and uninfected individuals. The dynamics of the mosquito population considers only two subpopulations: the winged form (mature female mosquitoes) and an aquatic population (comprising eggs, larvae and pupae). We disregard long-distance movement via transportation facilities, so diffusion is considered restricted to the winged form. The human population is considered homogeneously distributed in space, in order to describe localized dengue dissemination during a short period of epidemics. The cross-infection is modeled by the law of mass action. A threshold value as a function of the model's parameters is obtained, which determines the rate of dengue dissemination and the risk of dengue outbreaks. Assuming that an area was previously colonized by the mosquitoes, the rate of disease dissemination is determined as a function of the model's parameters. This rate of dissemination of dengue disease is determined by applying traveling wave solutions to the corresponding system of partial differential equations.
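A minimal numerical illustration of a traveling invasion front of the kind analyzed above is the classical Fisher-KPP equation u_t = D u_xx + r u(1 - u), used here as a one-equation stand-in for the winged-mosquito equation; D, r, and the grid are illustrative choices. Its classical minimal front speed is c = 2*sqrt(r*D), which the finite-difference sketch below recovers approximately.

```python
import numpy as np

# Fisher-KPP equation u_t = D u_xx + r u (1 - u) on a 1-D grid, as a
# one-equation stand-in for the winged-mosquito equation; D and r are
# illustrative. The classical minimal front speed is c = 2*sqrt(r*D).
D, r = 1.0, 1.0
dx, dt, nx = 0.5, 0.05, 400          # dt*D/dx^2 = 0.2, stable
u = np.zeros(nx)
u[:10] = 1.0                         # colonized region on the left

def front_position(u, thresh=0.5):
    return np.argmax(u < thresh) * dx

t_marks, x_marks = [], []
for n in range(1601):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]          # crude no-flux boundaries
    u = u + dt * (D * lap + r * u * (1.0 - u))
    if n in (800, 1600):                       # mark the front twice
        t_marks.append(n * dt)
        x_marks.append(front_position(u))

speed = (x_marks[1] - x_marks[0]) / (t_marks[1] - t_marks[0])
print(f"measured front speed {speed:.2f} vs 2*sqrt(r*D) = {2*np.sqrt(r*D):.2f}")
```

The full dengue model couples several such equations (infected/uninfected humans, winged and aquatic mosquito stages), but the wave-speed measurement works the same way: track a level set of the solution over time.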
Tresadern, Gary; Agrafiotis, Dimitris K
2009-12-01
Stochastic proximity embedding (SPE) and self-organizing superimposition (SOS) are two recently introduced methods for conformational sampling that have shown great promise in several application domains. Our previous validation studies aimed at exploring the limits of these methods and have involved rather exhaustive conformational searches producing a large number of conformations. However, from a practical point of view, such searches have become the exception rather than the norm. The increasing popularity of virtual screening has created a need for 3D conformational search methods that produce meaningful answers in a relatively short period of time and work effectively on a large scale. In this work, we examine the performance of these algorithms and the effects of different parameter settings at varying levels of sampling. Our goal is to identify search protocols that can produce a diverse set of chemically sensible conformations and have a reasonable probability of sampling biologically active space within a small number of trials. Our results suggest that both SPE and SOS are extremely competitive in this regard and produce very satisfactory results with as few as 500 conformations per molecule. The results improve even further when the raw conformations are minimized with a molecular mechanics force field to remove minor imperfections and any residual strain. These findings provide additional evidence that these methods are suitable for many everyday modeling tasks, both high- and low-throughput.
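The core SPE update, pick a random pair of points and nudge their embedded coordinates so the pair distance moves toward the target while annealing the learning rate, can be sketched as follows; here the targets are exact 2-D distances rather than the interatomic distance bounds used in conformational sampling, and all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Target distances: pairwise distances among 30 random points on a
# 2-D plane (so a perfect 3-D embedding exists). Conformational
# sampling would use interatomic distance bounds here instead.
n = 30
truth = rng.uniform(0.0, 1.0, (n, 2))
target = np.linalg.norm(truth[:, None] - truth[None, :], axis=-1)

coords = rng.uniform(0.0, 1.0, (n, 3))   # random starting coordinates
lam, eps = 1.0, 1e-9
for _ in range(200000):
    i, j = rng.integers(0, n, 2)
    if i == j:
        continue
    d = np.linalg.norm(coords[i] - coords[j]) + eps
    # move the pair apart/together so the distance approaches the target
    corr = lam * 0.5 * (target[i, j] - d) / d * (coords[i] - coords[j])
    coords[i] += corr
    coords[j] -= corr
    lam *= 0.99999                       # anneal the learning rate

dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
err = np.abs(dist - target)
print(f"mean absolute distance error: {err.mean():.3f}")
```

Because each step touches only one pair, the cost per update is constant regardless of molecule size, which is what makes SPE-style methods attractive for the large-scale, low-sampling virtual-screening regime discussed above.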
CONSTRAINTS ON THE SYNCHROTRON EMISSION MECHANISM IN GAMMA-RAY BURSTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beniamini, Paz; Piran, Tsvi, E-mail: paz.beniamini@mail.huji.ac.il, E-mail: tsvi.piran@mail.huji.ac.il
2013-05-20
We reexamine the general synchrotron model for gamma-ray bursts' (GRBs') prompt emission and determine the regime of parameter space in which it is viable. We characterize a typical GRB pulse in terms of its peak energy, peak flux, and duration, and use the latest Fermi observations to constrain the high-energy part of the spectrum. We solve for the intrinsic parameters at the emission region and find the possible parameter space for synchrotron emission. Our approach is general and does not depend on a specific energy dissipation mechanism. Reasonable synchrotron solutions are found with energy ratios of 10^-4 < ε_B/ε_e < 10, bulk Lorentz factor values of 300 < Γ < 3000, typical electron Lorentz factor values of 3×10^3 < γ_e < 10^5, and emission radii of order 10^15 cm < R < 10^17 cm. Most remarkable among these are the rather large values of the emission radius and of the electron Lorentz factor. We find that soft (peak energy below 100 keV) but luminous (isotropic luminosity of 1.5×10^53) pulses are inefficient, which may explain the lack of strong soft bursts. In cases where most of the energy is carried by the kinetic energy of the flow, such as in internal shocks, the synchrotron solution requires that only a small fraction of the electrons be accelerated to relativistic velocities by the shocks. We show that future observations of very high energy photons from GRBs by CTA could possibly determine all parameters of the synchrotron model or rule it out altogether.
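For orientation, the standard textbook relation connecting these parameters (not a result of the article, which also folds in the peak flux, pulse duration, and cooling constraints) is the observer-frame synchrotron characteristic energy for electrons of Lorentz factor γ_e gyrating in a comoving magnetic field B′, boosted by the bulk Lorentz factor Γ:

```latex
E_{\rm peak} \;\simeq\; \frac{3}{2}\,\gamma_e^{2}\,\frac{\hbar\, q_e B'}{m_e c}\,\Gamma .
```

The quadratic dependence on γ_e is why the large inferred electron Lorentz factors, combined with a sub-MeV peak, push the solutions toward weak fields and large emission radii.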
NASA Technical Reports Server (NTRS)
Allen, J.
1977-01-01
The feasibility of space colonization depends partly on the answer to the practical question of whether construction workers can live and work in zero-g for the time necessary to build the colony framework to the point just prior to spinning it into its artificial-g mode. Based on definitive Skylab experience, there seems to be every reason to believe that workers in zero-g can perform their construction tasks with the same skill as under 1-g conditions. Attention is also given to the basic reasons and motivations for conducting space flights and establishing space colonies.
Planned development of the space shuttle vehicle
NASA Technical Reports Server (NTRS)
1972-01-01
Information pertaining to the planned development of the space shuttle vehicle is presented. The package contains: (1) President's statement; (2) Dr. Fletcher's statement; (3) space shuttle fact sheet; (4) important reasons for the space shuttle.
ATS displays: A reasoning visualization tool for expert systems
NASA Technical Reports Server (NTRS)
Selig, William John; Johannes, James D.
1990-01-01
Reasoning visualization is a useful tool that can help users better understand the inherently non-sequential logic of an expert system. While this is desirable in almost all expert system applications, it is especially so for such critical systems as those destined for space-based operations. A hierarchical view of the expert system reasoning process and some characteristics of its various levels are presented. Also presented are Abstract Time Slice (ATS) displays, a tool for visualizing the plethora of interrelated information available at the host inferencing language level of reasoning. The usefulness of this tool is illustrated with examples from a prototype potable water expert system for possible use aboard Space Station Freedom.
NASA Astrophysics Data System (ADS)
Kuvshinov, A. V.
2016-12-01
Electrical conductivity is one of the characteristic physical parameters of the materials making up Earth's interior, sensitive to variations in temperature, chemical composition, water content, and partial melt. As a consequence, estimating the electrical conductivity structure of the lithosphere and upper mantle (LUM) is a potentially powerful tool for mapping their chemistry, mineralogy, and physical structure, presenting a complementary method to seismic studies that focus on LUM elastic properties. Global electromagnetic (EM) studies, which provide information on LUM electrical conductivity, have attracted increasing interest during the last decade, mainly for three reasons. The first is the recent growth in the amount of EM data available, especially from low-Earth-orbiting magnetic satellite missions (Oersted, CHAMP, SAC-C, and Swarm). The second is the great interest in characterizing the three-dimensional properties of Earth's interior on a global scale. Finally, interest has also been driven by the significant methodological progress made in recent years in EM data analysis, forward modelling, and inversion. In this talk I will summarize advances and challenges in EM data interpretation, present recent global and regional models of LUM conductivity derived from satellite and ground-based data, and discuss possible topics for future research.
NASA Technical Reports Server (NTRS)
Santiago-Perez, Julio
1988-01-01
The frequency and intensity of thunderstorms around the Kennedy Space Center (KSC) have affected scheduled launch, landing, and other ground operations for many years. In order to protect its facilities and provide safe working conditions, KSC has performed and hosted several studies of lightning phenomena, and for these reasons has established the Atmospheric Science Field Laboratory (ASFL), where wire-towing rockets are launched into thunderstorms to trigger natural lightning to the launch site. A program named the Rocket Triggered Lightning Program (RTLP) is being conducted at the ASFL. This report describes two of the experiments conducted in the summer 1988 Rocket Triggered Lightning Program. One experiment suspended an electric field mill from a balloon about 500 meters above the launching area to measure the space charges over it. The other connected a waveform recorder to a nearby distribution power line to record current and voltage waveforms induced by natural and triggered lightning.
Estimating macroporosity in a forest watershed by use of a tension infiltrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, K.W.; Luxmoore, R.J.
The ability to obtain sufficient field hydrologic data at reasonable cost can be an important limiting factor in applying transport models. A procedure is described for using ponded-flow and tension-infiltration measurements to calculate transport parameters in a forest watershed. Thirty infiltration measurements were taken under ponded-flow conditions and at 3, 6, and 15 cm (H2O) tension. It was assumed from capillarity theory that pores > 0.1-, 0.05-, and 0.02-cm diameter, respectively, were excluded from the transport process during the tension infiltration measurements. Under ponded flow, 73% of the flux was conducted through macropores (i.e., pores > 0.1-cm diameter). An estimated 96% of the water flux was transmitted through only 0.32% of the soil volume. In general, the larger the total water flux, the larger the macropore contribution to total water flux. The Shapiro-Wilk normality test indicated that water flux through both matrix pore space and macropores was log-normally distributed in space.
Direct multiple path magnetospheric propagation - A fundamental property of nonducted VLF waves
NASA Technical Reports Server (NTRS)
Sonwalkar, V. S.; Bell, T. F.; Helliwell, R. A.; Inan, U. S.
1984-01-01
An elongation of 20-200 ms, attributed to closely spaced multiple propagation paths between the satellite and the ground, is noted in well defined pulses observed by the ISEE 1 satellite in nonducted whistler mode signals from the Siple Station VLF transmitter. Electric field measurements show a 2 to 10 dB amplitude variation in the observed amplitude fading pattern which is also consistent with direct multiple path propagation. The results obtained for two cases, one outside and one inside the plasmapause, establish that the direct signals transmitted from the ground arrive almost simultaneously at any point in the magnetosphere along two or more closely spaced direct ray paths. It is also shown that multiple paths can be explained by assuming field-aligned irregularities, and the implications of these results for nonducted wave-particle interaction in the magnetosphere are discussed. For reasonable parameters of nonducted, multiple path propagation, a cyclotron-resonant electron will experience a wave Doppler broadening of a few tens to a few hundreds of Hz.
Applying Authentic Data Analysis in Learning Earth Atmosphere
NASA Astrophysics Data System (ADS)
Johan, H.; Suhandi, A.; Samsudin, A.; Wulan, A. R.
2017-09-01
The aim of this research was to develop earth science learning material, especially on the earth's atmosphere, supported by science research with authentic data analysis in order to enhance reasoning. Various earth and space science phenomena require reasoning. The research used an experimental design with a single group and a pre-test/post-test. 23 pre-service physics teachers participated. An essay test was conducted to obtain data on reasoning ability and was analyzed quantitatively; an observation sheet was used to capture phenomena during the learning process. The results showed that students' reasoning ability improved from unidentified or absent reasoning to evidence-based reasoning and inductive/deductive rule-based reasoning. Authentic data were analyzed using the Grid Analysis Display System (GrADS). Visualization from GrADS helped students correlate the concepts and brought real conditions of nature into the classroom activity. It also helped students reason about phenomena related to earth and space science concepts. It can be concluded that applying authentic data analysis in the learning process can help enhance students' reasoning. This study is expected to help lecturers bring the results of geoscience research into the learning process and facilitate students' understanding of concepts.
Laboratory Characterization and Modeling of a Near-Infrared Enhanced Photomultiplier Tube
NASA Technical Reports Server (NTRS)
Biswas, A.; Farr, W. H.
2003-01-01
The photon-starved channel for optical communications from deep space requires the development of detector technology that can achieve photon-counting sensitivities at high bandwidth. In this article, a near-infrared enhanced photomultiplier tube (PMT) with a quantum efficiency of 0.08 at a 1.06-μm wavelength is characterized in the laboratory. A Polya distribution model is used to compute the probability distribution function of the secondary photoelectrons emitted by the PMT. The model is compared with measured pulse-height distributions with reasonable agreement. The model accounts for realistic device parameters, such as the individual dynode stage gains and a shape parameter that represents the spatial uniformity of response across the photocathode and dynodes. Bit-error-rate (BER) measurements are also presented for 4- and 8-ary pulse-position modulation (PPM) schemes at data rates of 20 to 30 Mb/s. A BER of 10^-2 is obtained for a mean of 8 detected photons.
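The compound-multiplication picture behind such a pulse-height model can be sketched by Monte Carlo: each dynode stage multiplies the incoming electrons with Polya (negative-binomial) statistics, and the output charge distribution is built by cascading the stages. This is an illustrative sketch only; the stage gains and the Polya shape parameter below are made-up values, not those fitted in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def cascade_gain(stage_gains, b=0.2, n_trials=20000):
    """Monte Carlo cascade through PMT dynodes with Polya stage statistics.

    Each electron entering a stage produces a Polya-distributed number of
    secondaries with mean g and shape parameter b (b -> 0 recovers
    Poisson statistics). The Polya law is a negative binomial with
    size 1/b and mean g.
    """
    electrons = np.ones(n_trials, dtype=np.int64)   # one photoelectron each
    for g in stage_gains:
        size = 1.0 / b                              # per-electron NB "size"
        p = size / (size + g)                       # so the NB mean equals g
        alive = electrons > 0
        out = np.zeros_like(electrons)
        # the sum of m iid NB(size, p) draws is NB(m*size, p)
        out[alive] = rng.negative_binomial(electrons[alive] * size, p)
        electrons = out
    return electrons

gains = cascade_gain([4.0, 3.0, 3.0, 2.5])   # hypothetical dynode gains
print(gains.mean())   # close to the product of stage gains, 90
```

The spread of the resulting histogram is dominated by the first-stage statistics, which is why the shape parameter and first dynode gain control how well single-photon peaks resolve.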
Whistler waves with electron temperature anisotropy and non-Maxwellian distribution functions
NASA Astrophysics Data System (ADS)
Malik, M. Usman; Masood, W.; Qureshi, M. N. S.; Mirza, Arshad M.
2018-05-01
Previous works on whistler waves with electron temperature anisotropy described the dependence on plasma parameters but did not explore the reasons behind the observed differences, and a comparative analysis of whistler waves with different electron distributions has not been made to date. This paper addresses both issues in detail by making a detailed comparison of the dispersion relations and growth rates of whistler waves with electron temperature anisotropy for Maxwellian, Cairns, kappa, and generalized (r, q) distributions, varying the key plasma parameters for the problem under consideration. It is found that the growth rate of the whistler instability is maximum for the flat-topped distribution and minimum for the Maxwellian distribution. This work not only summarizes and complements the previous work on whistler waves with electron temperature anisotropy but also provides a general framework for understanding the linear propagation of whistler waves with electron temperature anisotropy, applicable in all regions of space plasmas where satellite missions have indicated their presence.
Use of partial AUC to demonstrate bioequivalence of Zolpidem Tartrate Extended Release formulations.
Lionberger, Robert A; Raw, Andre S; Kim, Stephanie H; Zhang, Xinyuan; Yu, Lawrence X
2012-04-01
FDA's bioequivalence recommendation for Zolpidem Tartrate Extended Release Tablets is the first to use partial AUC (pAUC) metrics for determining bioequivalence of modified-release dosage forms. Modeling and simulation studies were performed to aid in understanding the need for pAUC measures and also the proper pAUC truncation times. Deconvolution techniques, In Vitro/In Vivo Correlations, and the CAT (Compartmental Absorption and Transit) model were used to predict the PK profiles for zolpidem. Models were validated using in-house data submitted to the FDA. Using dissolution profiles expressed by the Weibull model as input for the CAT model, dissolution spaces were derived for simulated test formulations. The AUC(0-1.5) parameter was indicative of IR characteristics of early exposure and effectively distinguished among formulations that produced different pharmacodynamic effects. The AUC(1.5-t) parameter ensured equivalence with respect to the sustained release phase of Ambien CR. The variability of AUC(0-1.5) is higher than other PK parameters, but is reasonable for use in an equivalence test. In addition to the traditional PK parameters of AUCinf and Cmax, AUC(0-1.5) and AUC(1.5-t) are recommended to provide bioequivalence measures with respect to label indications for Ambien CR: onset of sleep and sleep maintenance.
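The pAUC metrics described above are simply truncated trapezoidal integrals of the concentration-time profile, with interpolation at the cut point. A minimal sketch follows; the 1.5 h cut mirrors the AUC(0-1.5) metric, but the sampling times and concentration values are invented for illustration.

```python
import numpy as np

def partial_auc(t, c, t_start, t_end):
    """Trapezoidal AUC of a concentration-time curve on [t_start, t_end].

    Linearly interpolates concentrations at the cut points so the
    truncation times need not coincide with sampling times.
    """
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    grid = np.union1d(t, [t_start, t_end])
    grid = grid[(grid >= t_start) & (grid <= t_end)]
    conc = np.interp(grid, t, c)
    # trapezoidal rule over the truncated grid
    return float(np.sum((conc[1:] + conc[:-1]) * np.diff(grid) / 2.0))

# invented sampling times (h) and concentrations (ng/mL)
t = [0, 0.5, 1, 2, 4, 8, 12]
c = [0, 60, 95, 110, 90, 40, 10]
early = partial_auc(t, c, 0.0, 1.5)    # IR-like early-exposure metric
late = partial_auc(t, c, 1.5, 12.0)    # sustained-release-phase metric
print(early, late)
```

By construction the two partial areas add up to the full AUC(0-t), so the pair partitions total exposure into an onset component and a maintenance component.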
Fink, Reinhold F
2010-11-07
A rigorous perturbation theory is proposed, which has the same second order energy as the spin-component-scaled Møller-Plesset second order (SCS-MP2) method of Grimme [J. Chem. Phys. 118, 9095 (2003)]. This upgrades SCS-MP2 to a systematically improvable, true wave-function-based method. The perturbation theory is defined by an unperturbed Hamiltonian, Ĥ(0), that contains the ordinary Fock operator and spin operators Ŝ(2) that act either on the occupied or the virtual orbital spaces. Two choices for Ĥ(0) are discussed and the importance of a spin-pure Ĥ(0) is underlined. Like the SCS-MP2 approach, the theory contains two parameters (c(os) and c(ss)) that scale the opposite-spin and the same-spin contributions to the second order perturbation energy. It is shown that these parameters can be determined from theoretical considerations by a Feenberg scaling approach or a fit of the wave functions from the perturbation theory to the exact one from a full configuration interaction calculation. The parameters c(os)=1.15 and c(ss)=0.75 are found to be optimal for a reasonable test set of molecules. The meaning of these parameters and the consequences following from a well defined improved MP method are discussed.
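In Grimme's standard SCS-MP2 partitioning, with the optimal parameters quoted in the abstract, the second order energy takes the form

```latex
E^{(2)}_{\mathrm{SCS}} \;=\; c_{\mathrm{os}}\, E^{(2)}_{\mathrm{OS}} \;+\; c_{\mathrm{ss}}\, E^{(2)}_{\mathrm{SS}},
\qquad c_{\mathrm{os}} = 1.15,\quad c_{\mathrm{ss}} = 0.75,
```

where E^(2)_OS and E^(2)_SS are the opposite-spin and same-spin pair contributions to the conventional MP2 correlation energy. The point of the paper is that this same energy now follows from a well-defined unperturbed Hamiltonian rather than from an a posteriori scaling of MP2 terms.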
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seljak, Uroš, E-mail: useljak@berkeley.edu
On large scales a nonlinear transformation of the matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale dependent bias on large scales. We derive analytic expressions for the large scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale dependent bias from primordial nongaussianity, for a general nonlinear transformation. These biases can be expressed entirely in terms of the one point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large scale bias different from unity and the primordial nongaussianity bias as a consequence of converting higher order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large scale clustering limit. We apply the results to the ionizing equilibrium model of the Lyman-α forest, in which the Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. The velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict a velocity bias of -0.1, compared to the observed value of -0.13 ± 0.03. The bias and the primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations. Matching to the observed values, we predict the ratio of the primordial nongaussianity bias to the bias to have the opposite sign and lower magnitude than the corresponding values for highly biased galaxies, but this depends on the model parameters and can also vanish or change sign.
NASA Astrophysics Data System (ADS)
Kwok, H. L.
2005-08-01
Mobility in single-grain and polycrystalline organic field-effect transistors (OFETs) is of interest because it affects the performance of these devices. While reasonable values of the hole mobility have been measured in pentacene OFETs, our understanding of the detailed transport mechanisms is, relatively speaking, somewhat weak, and there is a lack of precise knowledge of the effects of materials parameters such as the site spacing, the localization length, the rms width of the density of states (DOS), and the escape frequency. This work attempts to analyze the materials parameters of pentacene OFETs extracted from data reported in the literature. We developed a model for the mobility parameter from first principles and extracted the relevant materials parameters. According to our analyses, the transport mechanisms in the OFETs are fairly complex and the electrical properties are dominated by the properties of the trap states. As observed, the single-grain OFETs, having smaller rms widths of the DOS than the polycrystalline OFETs, also had higher hole mobilities. Our results showed that increasing the gate bias could have a similar but smaller effect. Potentially, increasing the escape frequency is a more effective way to raise the hole mobility, and this parameter appears to be affected by changes in the molecular structure and in the degree of "disorder".
Comparison of three-dimensional parameters of Halo CMEs using three cone models
NASA Astrophysics Data System (ADS)
Na, H.; Moon, Y.; Jang, S.; Lee, K.
2012-12-01
Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms, and their three-dimensional structures are important for space weather. In this study, we compare three cone models: an elliptical cone model, an ice-cream cone model, and an asymmetric cone model. These models allow us to determine the three-dimensional parameters of HCMEs, such as radial speed, angular width, and the angle (γ) between the sky plane and the cone axis. We compare the parameters obtained from the three models using 62 well-observed HCMEs seen by SOHO/LASCO from 2001 to 2002. We then obtain the root mean square (RMS) error between the maximum measured projection speeds and the projection speeds calculated from the cone models. As a result, we find that the radial speeds obtained from the models are well correlated with one another (R > 0.84). The correlation coefficients between angular widths range from 0.04 to 0.53, and those between γ values from -0.15 to 0.47, which are much smaller than expected; the reason may be the models' different assumptions and methods. The RMS errors between the maximum measured projection speeds and the maximum estimated projection speeds of the elliptical cone model, the ice-cream cone model, and the asymmetric cone model are 213 km/s, 254 km/s, and 267 km/s, respectively. We also obtain correlation coefficients between the source locations from the models and the flare locations (R > 0.75). Finally, we discuss the strengths and weaknesses of these models in terms of space weather applications.
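The model comparison above reduces to Pearson correlation coefficients and RMS errors between measured and model-derived quantities. A generic sketch with invented speed arrays (not the 62-event data set) shows the two statistics:

```python
import numpy as np

def compare_speeds(measured, modeled):
    """Pearson correlation and RMS error between two speed arrays (km/s)."""
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    r = np.corrcoef(measured, modeled)[0, 1]          # correlation coefficient
    rmse = np.sqrt(np.mean((measured - modeled) ** 2))  # RMS error
    return r, rmse

measured = [900, 1200, 650, 1500, 1100]   # invented projection speeds (km/s)
modeled = [850, 1300, 700, 1400, 1000]    # invented model estimates (km/s)
r, rmse = compare_speeds(measured, modeled)
print(round(r, 2), rmse)
```

A high R with a large RMSE would indicate a systematic offset between a model and the measurements rather than scatter, which is one reason both statistics are reported.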
Analytical progress in the theory of vesicles under linear flow
NASA Astrophysics Data System (ADS)
Farutin, Alexander; Biben, Thierry; Misbah, Chaouqi
2010-06-01
Vesicles are becoming a quite popular model for the study of red blood cells. This is a free boundary problem that is rather difficult to handle theoretically, and quantitative computational approaches also constitute a challenge. In addition, with numerical studies it is not easy to scan the whole parameter space within a reasonable time. Therefore, having quantitative analytical results is an essential advance that provides deeper understanding of observed features and can be used to accompany, and possibly guide, further numerical development. In this paper, shape evolution equations for a vesicle in a shear flow are derived analytically to cubic precision (previous theories were quadratic) with regard to the deformation of the vesicle relative to a spherical shape. The phase diagram distinguishing regions of parameters where different types of motion (tank treading, tumbling, and vacillating breathing) are manifested is presented. This theory reveals unsuspected features: including higher-order terms and harmonics (even if they are not directly excited by the shear flow) is necessary, however close the shape is to a sphere. Not only does this theory cure a quite large quantitative discrepancy between previous theories and recent experiments and numerical studies, but it also reveals a new phenomenon: the vacillating breathing (VB) mode band in parameter space, which was believed to saturate beyond a moderate shear rate, exhibits a striking widening beyond a critical shear rate. The widening results from excitation of the fourth-order harmonic. The obtained phase diagram is in remarkably good agreement with recent three-dimensional numerical simulations based on the boundary integral formulation. Comparison of our results with experiments is systematically made.
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. 
With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
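The heart of the NSMC idea can be sketched in a few lines: differences between random parameter fields and the calibrated field are projected onto the null space of the linearized model Jacobian, so the perturbed fields leave the simulated observations (nearly) unchanged while still spanning parameter uncertainty. The toy linear version below uses an invented Jacobian and dimensions, not those of the Henry-problem model:

```python
import numpy as np

rng = np.random.default_rng(1)

def nsmc_fields(jacobian, p_cal, n_fields=5, n_sol=None):
    """Generate calibration-constrained parameter fields (toy NSMC).

    jacobian : (n_obs, n_par) sensitivity matrix of observations to parameters
    p_cal    : calibrated parameter vector
    n_sol    : dimensionality retained for the solution space
    """
    u, s, vt = np.linalg.svd(jacobian, full_matrices=True)
    if n_sol is None:
        n_sol = int(np.sum(s > 1e-10 * s[0]))   # numerical rank
    v_null = vt[n_sol:].T                       # basis of the null space
    fields = []
    for _ in range(n_fields):
        p_rand = p_cal + rng.normal(size=p_cal.size)
        # keep only the perturbation component invisible to the data
        delta = v_null @ (v_null.T @ (p_rand - p_cal))
        fields.append(p_cal + delta)
    return fields

n_obs, n_par = 4, 10
J = rng.normal(size=(n_obs, n_par))             # invented sensitivities
p_cal = rng.normal(size=n_par)
for p in nsmc_fields(J, p_cal):
    # simulated observations are unchanged up to round-off
    print(np.allclose(J @ p, J @ p_cal))
```

In the real method the model is nonlinear, so each projected field is re-checked against the calibration targets and adjusted if needed; choosing `n_sol` corresponds to the "dimensionality of the solution space" discussed in the abstract.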
Dynamics of a neuron model in different two-dimensional parameter-spaces
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that, regardless of the combination of parameters, a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that periodic regions exist close to these chaotic regions, separated by the comb teeth, organized in period-adding bifurcation cascades.
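For reference, the three-variable Hindmarsh-Rose equations that underlie such diagrams can be integrated with a fixed-step RK4 scheme. The parameter values below are standard textbook choices (the diagrams in the paper scan pairs of these parameters); this sketch only produces a single bursting trajectory, not a parameter-plane diagram.

```python
import numpy as np

def hindmarsh_rose(state, a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, x_rest=-1.6, current=3.25):
    """Right-hand side of the Hindmarsh-Rose neuron model."""
    x, y, z = state
    return np.array([
        y - a * x**3 + b * x**2 - z + current,   # membrane potential
        c - d * x**2 - y,                        # fast recovery variable
        r * (s * (x - x_rest) - z),              # slow adaptation current
    ])

def integrate(state, dt=0.01, steps=30_000):
    """Fixed-step RK4 integration; returns the x (voltage) trace."""
    xs = np.empty(steps)
    for i in range(steps):
        k1 = hindmarsh_rose(state)
        k2 = hindmarsh_rose(state + dt / 2 * k1)
        k3 = hindmarsh_rose(state + dt / 2 * k2)
        k4 = hindmarsh_rose(state + dt * k3)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = state[0]
    return xs

xs = integrate(np.array([-1.0, 0.0, 2.0]))
print(xs.min(), xs.max())   # bursting trajectory, bounded voltage excursions
```

A parameter-plane diagram of the kind reported would repeat this integration over a grid of two chosen parameters and classify each trajectory (e.g. by a Lyapunov exponent or spike count) as periodic or chaotic.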
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution of this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. Potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
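For a finite catalogue, the transformation to equivalent dimensions amounts to passing each parameter through its (estimated) cumulative distribution function; distances are then Euclidean in the unit hypercube. The minimal sketch below uses the raw empirical CDF (rank transform) on made-up catalogue values, whereas the paper uses smooth kernel estimates of the CDFs.

```python
import numpy as np

def to_equivalent_dimensions(catalog):
    """Map each column (earthquake parameter) through its empirical CDF.

    catalog : (n_events, n_params) array; returns values in (0, 1].
    """
    catalog = np.asarray(catalog, dtype=float)
    n = catalog.shape[0]
    # double argsort turns values into 0-based ranks per column
    ranks = np.argsort(np.argsort(catalog, axis=0), axis=0)
    return (ranks + 1) / n

# made-up events: [occurrence time (days), magnitude, depth (km)]
events = np.array([
    [1.0, 2.1, 5.0],
    [3.5, 4.0, 12.0],
    [4.0, 2.9, 7.5],
    [9.2, 3.3, 30.0],
])
ed = to_equivalent_dimensions(events)
# Euclidean distance between events 0 and 1 in ED space
d01 = np.linalg.norm(ed[0] - ed[1])
print(ed)
print(d01)
```

After the transform, a unit interval of transformed magnitude carries the same probability mass as a unit interval of transformed time or depth, which is exactly the probabilistic equivalence the paper uses to justify a common Euclidean metric.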
NASA Technical Reports Server (NTRS)
West, V. R.; Parker, J. F., Jr.
1973-01-01
The study examines data on episodes of decompression sickness, particularly from recent Navy work in which the events occurred under multiple stress conditions, to determine the extent to which decompression sickness might be predicted on the basis of personal characteristics such as age, weight, and physical condition. Such information should ultimately be useful for establishing medical selection criteria to screen individuals prior to participation in activities involving extensive changes in ambient pressure, including those encountered in space operations. The main conclusions were as follows. There is a definite and positive relationship between increasing age and weight and the likelihood of decompression sickness; however, for predictive purposes, the relationship is weak. To reduce the risk of bends, particularly for older individuals, strenuous exercise should be avoided immediately after ambient pressure changes, and temperatures should be kept at the low end of the comfort zone. For space activities, pressure changes of over 6-7 psi should be avoided. Prospective participants in future missions such as the Space Shuttle should not be excluded on the basis of age, certainly up to age 60, if their general condition is reasonably good and they are not grossly obese. (Modified author abstract)
Numerical simulation of the geodynamo reaches Earth's core dynamical regime
NASA Astrophysics Data System (ADS)
Aubert, J.; Gastine, T.; Fournier, A.
2016-12-01
Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E=10-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. 
We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.
Artificial neural networks and approximate reasoning for intelligent control in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model, such as rendezvous and docking, camera tracking, and tethered systems control, are discussed.
NASA Technical Reports Server (NTRS)
Anderton, D. A.
1985-01-01
The official start of a bold new space program, essential to maintaining the United States' leadership in space, was signaled by a Presidential directive to move aggressively again into space by proceeding with the development of a space station. Development concepts for a permanently manned space station are discussed. Reasons for establishing an inhabited space station are given. Cost estimates and timetables are also cited.
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
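The core idea of Snake, mapping out grid cells in order of decreasing likelihood and disregarding cells below a threshold, can be sketched as a priority-queue search. This is not the authors' code; the toy Gaussian likelihood, the relative log-likelihood threshold, and the axis-step neighbour scheme are all assumptions for illustration.

```python
import heapq

def snake_explore(log_like, start, threshold):
    """Explore a gridded log-likelihood surface cell by cell in order of
    decreasing likelihood; cells falling more than `threshold` below the
    running peak are disregarded and not expanded further."""
    visited = {start}
    heap = [(-log_like(start), start)]  # max-heap via negated values
    peak = log_like(start)
    retained = []
    while heap:
        neg_ll, cell = heapq.heappop(heap)
        ll = -neg_ll
        peak = max(peak, ll)
        if ll < peak - threshold:
            continue  # negligible likelihood: skip this cell
        retained.append(cell)
        # push unvisited neighbours (one step along each axis)
        for axis in range(len(cell)):
            for step in (-1, 1):
                nb = list(cell)
                nb[axis] += step
                nb = tuple(nb)
                if nb not in visited:
                    visited.add(nb)
                    heapq.heappush(heap, (-log_like(nb), nb))
    return retained

# Toy 2-D Gaussian log-likelihood on a grid of width-0.1 cells
def loglike(cell):
    x, y = cell[0] * 0.1, cell[1] * 0.1
    return -0.5 * (x ** 2 + y ** 2)

cells = snake_explore(loglike, (0, 0), threshold=3.0)
```

Because exploration stops at the first ring of sub-threshold cells, the number of likelihood evaluations scales with the volume of the high-likelihood region rather than with the full grid, which is the property the abstract credits for taming the curse of dimensionality.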
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sippel, K.; Boehlein, S; Sakai, Y
Mycoplasma genitalium is a human pathogen that is associated with nongonococcal urethritis in men and cervicitis in women. The cloning, expression, purification and crystallization of the protein MG289 from M. genitalium strain G37 are reported here. Crystals of MG289 diffracted X-rays to 2.8 Å resolution. The crystals belonged to the orthorhombic space group P2₁2₁2₁, with unit-cell parameters a = 49.7, b = 90.9, c = 176.1 Å. The diffraction data after processing had an overall R_merge of 8.7%. The crystal structure of Cypl, the ortholog of MG289 from M. hyorhinis, has recently been determined, providing a reasonable phasing model; molecular replacement is currently under way.
Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators along with some applications are presented and discussed.
NASA Technical Reports Server (NTRS)
McIlraith, Sheila; Biswas, Gautam; Clancy, Dan; Gupta, Vineet
2005-01-01
This paper reports on an ongoing project to investigate techniques to diagnose complex dynamical systems that are modeled as hybrid systems. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial or full failure of component devices. We cast the diagnosis problem as a model selection problem. To reduce the space of potential models under consideration, we exploit techniques from qualitative reasoning to conjecture an initial set of qualitative candidate diagnoses, which induce a smaller set of models. We refine these diagnoses using parameter estimation and model fitting techniques. As a motivating case study, we have examined the problem of diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
NASA Astrophysics Data System (ADS)
Banica, M. C.; Chun, J.; Scheuermann, T.; Weigand, B.; Wolfersdorf, J. v.
2009-01-01
Scramjet powered vehicles can decrease costs for access to space but substantial obstacles still exist in their realization. For example, experiments in the relevant Mach number regime are difficult to perform and flight testing is expensive. Therefore, numerical methods are often employed for system layout but they require validation against experimental data. Here, we validate the commercial code CFD++ against experimental results for hydrogen combustion in the supersonic combustion facility of the Institute of Aerospace Thermodynamics (ITLR) at the Universität Stuttgart. Fuel is injected through a lobed strut injector, which provides rapid mixing. Our numerical data show reasonable agreement with experiments. We further investigate effects of varying equivalence ratios on several important performance parameters.
Determination of Earth rotation by the combination of data from different space geodetic systems
NASA Technical Reports Server (NTRS)
Archinal, Brent Allen
1987-01-01
Formerly, Earth Rotation Parameters (ERP), i.e., polar motion and UT1-UTC values, have been determined using data from only one observational system at a time, or by the combination of parameters previously obtained in such determinations. The question arises as to whether a simultaneous solution using data from several sources would provide an improved determination of such parameters. To pursue this reasoning, fifteen days of observations have been simulated using realistic networks of Lunar Laser Ranging (LLR), Satellite Laser Ranging (SLR) to Lageos, and Very Long Baseline Interferometry (VLBI) stations. A comparison has been done of the accuracy and precision of the ERP obtained from: (1) the individual system solutions, (2) the weighted means of those values, (3) all of the data by means of the combination of the normal equations obtained in (1), and (4) a grand solution with all the data. These simulations show that solutions done by the normal equation combination and grand solution methods provide the best or nearly the best ERP for all the periods considered, but that weighted mean solutions provide nearly the same accuracy and precision. VLBI solutions also provide similar accuracies.
Indoctrination and the Space of Reasons
ERIC Educational Resources Information Center
Hanks, Chris
2008-01-01
The "paradox of indoctrination" has proven to be a persistent problem in discussions of the cultivation of autonomy through education. In short, if indoctrination means instilling beliefs without reasons, and if children lack the rational capacity to evaluate reasons, how can that capacity be cultivated without indoctrination? Some educational…
An Assessment of the Technology of Automated Rendezvous and Capture in Space
NASA Technical Reports Server (NTRS)
Polites, M. E.
1998-01-01
This paper presents the results of a study to assess the technology of automated rendezvous and capture (AR&C) in space. The outline of the paper is as follows. First, the history of manual and automated rendezvous and capture and rendezvous and dock is presented. Next, the need for AR&C in space is established. Then, today's technology and ongoing technology efforts related to AR&C in space are reviewed. In light of these, AR&C systems are proposed that meet NASA's future needs, but can be developed in a reasonable amount of time with a reasonable amount of money. Technology plans for developing these systems are presented; cost and schedule are included.
Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment
NASA Astrophysics Data System (ADS)
Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin
2017-10-01
Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place high demands on hardware and are therefore ill-suited to mobile terminals with limited computing resources. In addition, these public-key algorithms are not resistant to quantum computing. This paper studies the quantum-resistant public-key algorithm NTRU by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve this probability. Firstly, increase the value of the parameter q. Secondly, add an authentication condition during the signature phase that checks whether the reasonable-signature requirements are met. Experimental results show that the proposed signature scheme achieves zero leakage of the private-key information of the signature value and increases the probability of generating a reasonable signature value. It also improves the signature rate and avoids propagation of invalid signatures in the network, although the scheme imposes certain restrictions on parameter selection.
Barber, Jared; Tanase, Roxana; Yotov, Ivan
2016-06-01
Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise parameterized by the Karhunen-Loève (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data with and without noise, as well as data from a laboratory experiment. While it is observed that all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, it is illustrated that the algorithms that employ SC and KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and KL expansion.
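The ensemble Kalman filter with Monte Carlo sampling can be illustrated on a much simpler problem than the epithelial migration model: here a scalar decay model whose rate parameter is estimated jointly with the state by augmenting the ensemble. The model, ensemble size, priors and noise levels are all invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Truth: scalar decay dx/dt = -a*x with unknown rate a, observed with noise.
a_true, dt, nsteps = 0.5, 0.1, 100
x = 1.0
obs = []
for _ in range(nsteps):
    x += dt * (-a_true * x)
    obs.append(x + rng.normal(0, 0.01))

# Ensemble Kalman filter on the augmented state z = [x, a]
Ne = 200
ens = np.column_stack([
    np.full(Ne, 1.0) + rng.normal(0, 0.1, Ne),  # state ensemble
    rng.normal(1.0, 0.5, Ne),                   # parameter ensemble (prior guess)
])
R = 0.01 ** 2  # observation noise variance

for y in obs:
    # Forecast: propagate each member; the parameter is constant in time
    ens[:, 0] += dt * (-ens[:, 1] * ens[:, 0])
    # Analysis: Kalman update from ensemble covariances (only x is observed)
    A = ens - ens.mean(axis=0)
    Pzy = (A.T @ A[:, 0]) / (Ne - 1)   # cov([x, a], x)
    K = Pzy / (Pzy[0] + R)             # Kalman gain, shape (2,)
    perturbed = y + rng.normal(0, 0.01, Ne)  # perturbed-observation variant
    ens += np.outer(perturbed - ens[:, 0], K)

a_est = ens[:, 1].mean()
```

The cross-covariance between the observed state and the unobserved parameter is what pulls the parameter ensemble toward the true rate, the same mechanism by which the abstract's filters recover the diffusion coefficient and growth rate.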
Shielding in biology and biophysics: Methodology, dosimetry, interpretation
NASA Astrophysics Data System (ADS)
Vladimirsky, B. M.; Temuryants, N. A.
2016-12-01
An interdisciplinary review of the publications on the shielding of organisms by different materials is presented. The authors show that some discrepancies between the results of different researchers might be attributed to methodological reasons, including purely biological (neglect of rhythms) and technical (specific features of the design or material of the screen) ones. In some cases, an important factor is the instability of control indices due to the variations in space weather. According to the modern concept of biological exposure to microdoses, any isolation of a biological object by any material necessarily leads to several simultaneous changes in environmental parameters, and this undermines the principle of "all other conditions being equal" in the classical differential scheme of an experiment. The shielding effects of water solution are universally recognized and their influence is to be observed for all organisms. Data on the exposure of living organisms to weak combined magnetic fields and on the influence of space weather enabled the development of theoretical models generally explaining the effect of shielding for bioorganisms. Ferromagnetic shielding results in changes of both the static magnetic field and the field of radio waves within the area protected by the screen. When screens are nonmagnetic, changes are due to the isolation from the radio waves. In both cases, some contribution to the fluctuations of measured parameters can be made by variations in the level of ionizing radiation.
Kalman filter estimation of human pilot-model parameters
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.
1975-01-01
The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
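Estimating transfer-function parameters with an extended Kalman filter, as in the pilot-model study, is commonly done by appending the unknown parameter to the state vector. The sketch below does this for a toy first-order response with an unknown gain; the model, gain value, input signal and noise levels are invented here and are not those of the pilot-model experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

# First-order response x' = (K*u - x)/T with known T and unknown gain K.
T, K_true, dt, nsteps = 0.8, 2.5, 0.02, 2000
u = np.sin(0.5 * np.arange(nsteps) * dt)  # known input signal

# Generate noisy measurements of x from the true system
x = 0.0
ys = []
for k in range(nsteps):
    x += dt * (K_true * u[k] - x) / T
    ys.append(x + rng.normal(0, 0.05))

# Extended Kalman filter on the augmented state z = [x, K]
z = np.array([0.0, 1.0])          # initial guess K = 1
P = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-6])         # small process noise
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])        # only x is measured

for k in range(nsteps):
    # Predict: nonlinear transition, linearised Jacobian F
    F = np.array([[1 - dt / T, dt * u[k] / T],
                  [0.0, 1.0]])
    z = np.array([z[0] + dt * (z[1] * u[k] - z[0]) / T, z[1]])
    P = F @ P @ F.T + Q
    # Update with the current measurement
    S = H @ P @ H.T + R
    Kg = P @ H.T / S
    z = z + (Kg * (ys[k] - z[0])).ravel()
    P = (np.eye(2) - Kg @ H) @ P

K_est = z[1]
```

Provided the input keeps exciting the system, the filter's estimate of the augmented parameter settles near the true gain; the divergence the abstract reports arises when the assumed model structure does not match the data, which this idealised setup avoids by construction.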
Towards adjoint-based inversion of time-dependent mantle convection with nonlinear viscosity
NASA Astrophysics Data System (ADS)
Li, Dunzhu; Gurnis, Michael; Stadler, Georg
2017-04-01
We develop and study an adjoint-based inversion method for the simultaneous recovery of initial temperature conditions and viscosity parameters in time-dependent mantle convection from the current mantle temperature and historic plate motion. Based on a realistic rheological model with temperature-dependent and strain-rate-dependent viscosity, we formulate the inversion as a PDE-constrained optimization problem. The objective functional includes the misfit of surface velocity (plate motion) history, the misfit of the current mantle temperature, and a regularization for the uncertain initial condition. The gradient of this functional with respect to the initial temperature and the uncertain viscosity parameters is computed by solving the adjoint of the mantle convection equations. This gradient is used in a pre-conditioned quasi-Newton minimization algorithm. We study the prospects and limitations of the inversion, as well as the computational performance of the method using two synthetic problems, a sinking cylinder and a realistic subduction model. The subduction model is characterized by the migration of a ridge toward a trench whereby both plate motions and subduction evolve. The results demonstrate: (1) for known viscosity parameters, the initial temperature can be well recovered, as in previous initial condition-only inversions where the effective viscosity was given; (2) for known initial temperature, viscosity parameters can be recovered accurately, despite the existence of trade-offs due to ill-conditioning; (3) for the joint inversion of initial condition and viscosity parameters, initial condition and effective viscosity can be reasonably recovered, but the high dimension of the parameter space and the resulting ill-posedness may limit recovery of viscosity parameters.
NASA Astrophysics Data System (ADS)
Totz, Sonja; Eliseev, Alexey V.; Petri, Stefan; Flechsig, Michael; Caesar, Levke; Petoukhov, Vladimir; Coumou, Dim
2018-02-01
We present and validate a set of equations for representing the atmosphere's large-scale general circulation in an Earth system model of intermediate complexity (EMIC). These dynamical equations have been implemented in Aeolus 1.0, which is a statistical-dynamical atmosphere model (SDAM) and includes radiative transfer and cloud modules (Coumou et al., 2011; Eliseev et al., 2013). The statistical-dynamical approach is computationally efficient and thus enables us to perform climate simulations at multimillennia timescales, which is a prime aim of our model development. Further, this computational efficiency enables us to scan a large and high-dimensional parameter space to tune the model parameters, e.g., for sensitivity studies. Here, we present novel equations for the large-scale zonal-mean wind as well as those for planetary waves. Together with synoptic parameterization (as presented by Coumou et al., 2011), these form the mathematical description of the dynamical core of Aeolus 1.0. We optimize the dynamical core parameter values by tuning all relevant dynamical fields to ERA-Interim reanalysis data (1983-2009), forcing the dynamical core with prescribed surface temperature, surface humidity and cumulus cloud fraction. We test the model's performance in reproducing the seasonal cycle and the influence of the El Niño-Southern Oscillation (ENSO). We use a simulated annealing optimization algorithm, which approximates the global minimum of a high-dimensional function. With non-tuned parameter values, the model performs reasonably in terms of its representation of zonal-mean circulation, planetary waves and storm tracks. The simulated annealing optimization improves in particular the model's representation of the Northern Hemisphere jet stream and storm tracks as well as the Hadley circulation. The regions of high azonal wind velocities (planetary waves) are accurately captured for all validation experiments.
The zonal-mean zonal wind and the integrated lower troposphere mass flux show good results in particular in the Northern Hemisphere. In the Southern Hemisphere, the model tends to produce too-weak zonal-mean zonal winds and a too-narrow Hadley circulation. We discuss possible reasons for these model biases as well as planned future model improvements and applications.
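Simulated annealing, used above to tune the dynamical-core parameters, accepts uphill moves with a probability that shrinks as a temperature parameter cools, which lets the search escape local minima of a high-dimensional misfit. The sketch below is a generic illustration, not the Aeolus tuning code; the toy misfit surface, step size, cooling rate and iteration count are all assumptions.

```python
import math
import random

random.seed(42)

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    """Minimise `cost` over a parameter vector by simulated annealing:
    perturbations are always accepted when they lower the cost, and
    accepted with probability exp(-dE/T) otherwise, while the temperature
    T decays geometrically so the search settles into a deep minimum."""
    x = list(x0)
    e = cost(x)
    best, best_e = list(x), e
    t = t0
    for _ in range(iters):
        cand = [xi + random.gauss(0, step) for xi in x]
        e_cand = cost(cand)
        if e_cand < e or random.random() < math.exp(-(e_cand - e) / t):
            x, e = cand, e_cand
            if e < best_e:
                best, best_e = list(x), e
        t *= cooling
    return best, best_e

# Toy "misfit": a bumpy quadratic bowl whose minimum sits near (1, -2)
def misfit(p):
    x, y = p
    return (x - 1) ** 2 + (y + 2) ** 2 + 0.3 * math.sin(5 * x) ** 2

best, best_e = simulated_annealing(misfit, [5.0, 5.0])
```

The sinusoidal ripple gives the surface many shallow local minima; the occasional acceptance of worse moves at high temperature is what allows the walker to cross them, in the same spirit as tuning model fields against reanalysis data.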
Prediction and typicality in multiverse cosmology
NASA Astrophysics Data System (ADS)
Azhar, Feraz
2014-02-01
In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.
Automated Rendezvous and Capture in Space: A Technology Assessment
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1998-01-01
This paper presents the results of a study to assess the technology of automated rendezvous and capture (AR&C) in space. The outline of the paper is as follows: First, the history of manual and automated rendezvous and capture and rendezvous and dock is presented. Next, the need for AR&C in space is reviewed. In light of these, AR&C systems are proposed that meet NASA's future needs, but can be developed in a reasonable amount of time with a reasonable amount of money. Technology plans for developing these systems are presented; cost and schedule are included.
Reasoning in people with obsessive-compulsive disorder.
Simpson, Jane; Cove, Jennifer; Fineberg, Naomi; Msetfi, Rachel M; J Ball, Linden
2007-11-01
The aim of this study was to investigate the inductive and deductive reasoning abilities of people with obsessive-compulsive disorder (OCD). Following previous research, it was predicted that people with OCD would show different abilities on inductive reasoning tasks but similar abilities to controls on deductive reasoning tasks. A two-group comparison was used with both groups matched on a range of demographic variables. Where appropriate, unmatched variables were entered into the analyses as covariates. Twenty-three people with OCD and 25 control participants were assessed on two tasks: an inductive reasoning task (the 20-questions task) and a deductive reasoning task (a syllogistic reasoning task with a content-neutral and content-emotional manipulation). While no group differences emerged on several of the parameters of the inductive reasoning task, the OCD group did differ on one, and arguably the most important, parameter by asking fewer correct direct-hypothesis questions. The syllogistic reasoning task results were analysed using both correct response and conclusion acceptance data. While no main effects of group were evident, significant interactions indicated important differences in the way the OCD group reasoned with content neutral and emotional syllogisms. It was argued that the OCD group's patterns of response on both tasks were characterized by the need for more information, states of uncertainty, and doubt and postponement of a final decision.
Parameter redundancy in discrete state-space and integrated models.
Cole, Diana J; McCrea, Rachel S
2016-09-01
Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy; such a model is also described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specifies a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant.
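Once an exhaustive summary is in hand, redundancy is typically detected symbolically: differentiate the summary with respect to the parameters and check whether the derivative matrix has full rank. A minimal sketch using sympy is below; the two-entry summary, in which survival phi and detection p enter only through their product, is a toy example of my own, not one of the paper's models.

```python
import sympy as sp

phi, p = sp.symbols('phi p', positive=True)
theta = [phi, p]

# Exhaustive summary: the distinct expected quantities the data inform.
# Both entries depend on (phi, p) only through the product phi*p, so the
# two parameters cannot be separately estimated.
kappa = sp.Matrix([phi * p, (phi * p) ** 2])

D = kappa.jacobian(theta)            # derivative matrix
deficiency = len(theta) - D.rank()   # 0 => all parameters estimable

redundant = deficiency > 0
```

The rank deficiency (here 1) counts how many parameter combinations are inestimable, which is also how combining data sets in an integrated model can be checked: a joint exhaustive summary may regain full rank even when each separate summary is rank-deficient.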
TEXSYS [a knowledge-based system for the Space Station Freedom thermal control system test-bed]
NASA Technical Reports Server (NTRS)
Bull, John
1990-01-01
The Systems Autonomy Demonstration Project has recently completed a major test and evaluation of TEXSYS, a knowledge-based system (KBS) which demonstrates real-time control and FDIR for the Space Station Freedom thermal control system test-bed. TEXSYS is the largest KBS ever developed by NASA and offers a unique opportunity for the study of technical issues associated with the use of advanced KBS concepts including: model-based reasoning and diagnosis, quantitative and qualitative reasoning, integrated use of model-based and rule-based representations, temporal reasoning, and scale-up performance issues. TEXSYS represents a major achievement in advanced automation that has the potential to significantly influence Space Station Freedom's design for the thermal control system. An overview of the Systems Autonomy Demonstration Project, the thermal control system test-bed, the TEXSYS architecture, preliminary test results, and thermal domain expert feedback are presented.
A programmable optical few wavelength source for flexgrid optical networks
NASA Astrophysics Data System (ADS)
Imran, M.; Fresi, F.; Meloni, G.; Bhowmik, B. B.; Sambo, N.; Potì, L.
2016-07-01
Multi-wavelength (MW) sources will probably replace discrete lasers or laser arrays in next generation multi-carrier transponders (e.g., 1 Tb/s), currently called multi-flow transponders or sliceable bandwidth variable transponders (SBVTs). We present the design and experimental demonstration of a few wavelength (FW) source suitable for SBVTs in a flexgrid scenario. We refer to FW instead of MW since an SBVT requires just a few subcarriers (e.g., eight). The proposed FW source does not require optical filtering for subcarrier modulation. The design exploits frequency shifting in IQ modulators by using single side band suppressed carrier modulation. A reasonable number of lines can be provided depending on the chosen architecture, tunable in the whole C-band. The scheme is also capable of providing symmetric (equally spaced) and asymmetric subcarrier spacing arbitrarily tunable from 6.25 GHz to 37.5 GHz. The control on the number of subcarriers (increase/decrease depending on line rate) provides flexibility to the SBVT, since the spacing depends on transmission parameters such as line rate or modulation format. Transmission performance has been tested and compared with an array of standard lasers for a 480 Gb/s transmission at different carrier spacings. Additionally, an integrable solution based on a complementary frequency shifter is also presented to improve scalability and costs. The impact on transceiver techno-economics and network performance is also discussed.
Revisiting Your Outdoor Environment: Reasons to Reshape, Enrich, Redevelop the Outdoor Space.
ERIC Educational Resources Information Center
Mauffette, Anne Gillain
1998-01-01
Provides suggestions for designing effective outdoor space. Focuses on advocating for space, designing spaces based on children's characteristics and preferences, integrating the outdoors in educational planning, including children in decision making and work, knowing about injury prevention, providing adult models who love the outdoors, and…
Unities in Inductive Reasoning. Technical Report No. 18.
ERIC Educational Resources Information Center
Sternberg, Robert J.; Gardner, Michael K.
Two experiments were performed to study inductive reasoning as a set of thought processes that operates on the structure, as opposed to the content, of organized memory. The content of the reasoning consisted of inductions concerning the names of mammals, assumed to occupy a Euclidean space of three dimensions (size, ferocity, and humanness) in…
Code of Federal Regulations, 2010 CFR
2010-10-01
42 CFR § 85.8 (Requests for Health Hazard Evaluations): Provision of suitable space for employee interviews and examinations. ... space, if such space is reasonably available, to NIOSH to conduct private interviews with, and...
NASA Astrophysics Data System (ADS)
Mitroo, Dhruv; Sun, Yujian; Combest, Daniel P.; Kumar, Purushottam; Williams, Brent J.
2018-03-01
Oxidation flow reactors (OFRs) have been developed to achieve high degrees of oxidant exposures over relatively short space times (defined as the ratio of reactor volume to the volumetric flow rate). While, due to their increased use, attention has been paid to their ability to replicate realistic tropospheric reactions by modeling the chemistry inside the reactor, there is a desire to customize flow patterns. This work demonstrates the importance of decoupling the tracer signal of the reactor from that of the tubing when experimentally obtaining these flow patterns. We modeled the residence time distributions (RTDs) inside the Washington University Potential Aerosol Mass (WU-PAM) reactor, an OFR, for a simple set of configurations by applying the tank-in-series (TIS) model, a one-parameter model, to a deconvolution algorithm. The value of the parameter, N, is close to unity for every case except one having the highest space time. Combined, the results suggest that volumetric flow rate affects mixing patterns more than use of our internals. We selected results from the simplest case, at 78 s space time with one inlet and one outlet, absent of baffles and spargers, and compared the experimental F curve to that of a computational fluid dynamics (CFD) simulation. The F curves, which represent the cumulative time spent in the reactor by flowing material, match reasonably well. We note that the use of a small aspect ratio reactor such as the WU-PAM reduces wall interactions; however, sudden apertures introduce disturbances in the flow, and we suggest applying the methodology of tracer testing described in this work to investigate RTDs in OFRs to observe the effect of modified inlets, outlets and use of internals prior to application (e.g., field deployment vs. laboratory study).
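The tank-in-series model referred to above has a closed-form RTD, E(t) = (t^(N-1)/(N-1)!) (N/tau)^N exp(-N t / tau), from which the cumulative F curve follows by integration. The sketch below evaluates both for N = 1 (a single ideal stirred tank, the regime the WU-PAM results approach); the time step and horizon are choices of mine, not values from the study.

```python
import math

def tis_rtd(t, tau, N):
    """Residence time distribution E(t) of the tanks-in-series model:
    N ideal stirred tanks with overall space time tau. N = 1 recovers a
    single CSTR; N -> infinity approaches plug flow."""
    return (t ** (N - 1) / math.factorial(N - 1)) * (N / tau) ** N * math.exp(-N * t / tau)

def tis_f_curve(t_end, tau, N, dt=0.01):
    """Cumulative F curve, F(t) = integral of E from 0 to t (trapezoid rule)."""
    ts, F = [], []
    acc, prev, t = 0.0, 0.0, 0.0
    while t <= t_end:
        e = tis_rtd(t, tau, N)
        if ts:
            acc += 0.5 * (prev + e) * dt
        ts.append(t)
        F.append(acc)
        prev = e
        t += dt
    return ts, F

# For the reported simplest case (78 s space time, N close to 1) the
# reactor behaves almost like a single ideal stirred tank.
ts, F = tis_f_curve(t_end=300.0, tau=78.0, N=1)
```

For N = 1 the F curve is 1 - exp(-t/tau), so F reaches about 0.632 at one space time; fitting N to a measured, tubing-deconvolved tracer response is what quantifies how far a real OFR departs from this ideal.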
Berghuijs, Herman N. C.; Yin, Xinyou; Ho, Q. Tri; Verboven, Pieter; Nicolaï, Bart M.
2017-01-01
The rate of photosynthesis depends on the CO2 partial pressure near Rubisco, Cc, which is commonly calculated by models using the overall mesophyll resistance. Such models do not explain the difference between the CO2 level in the intercellular air space and Cc mechanistically. This problem can be overcome by reaction-diffusion models for CO2 transport, production and fixation in leaves. However, most reaction-diffusion models are complex and unattractive for procedures that require a large number of runs, like parameter optimisation. This study provides a simpler reaction-diffusion model. It is parameterized by both leaf physiological and leaf anatomical data. The anatomical data consisted of the thickness of the cell wall, cytosol and stroma, and the area ratios of mesophyll exposed to the intercellular air space to leaf surfaces and exposed chloroplast to exposed mesophyll surfaces. The model was used directly to estimate photosynthetic parameters from a subset of the measured light and CO2 response curves; the remaining data were used for validation. The model predicted light and CO2 response curves reasonably well for 15 days old tomato (cv. Admiro) leaves, if (photo)respiratory CO2 release was assumed to take place in the inner cytosol or in the gaps between the chloroplasts. The model was also used to calculate the fraction of CO2 produced by (photo)respiration that is re-assimilated in the stroma, and this fraction ranged from 56 to 76%. In future research, the model should be further validated to better understand how the re-assimilation of (photo)respired CO2 is affected by environmental conditions and physiological parameters. PMID:28880924
Berghuijs, Herman N C; Yin, Xinyou; Ho, Q Tri; Retta, Moges A; Verboven, Pieter; Nicolaï, Bart M; Struik, Paul C
2017-01-01
The rate of photosynthesis depends on the CO2 partial pressure near Rubisco, Cc, which is commonly calculated by models using the overall mesophyll resistance. Such models do not explain the difference between the CO2 level in the intercellular air space and Cc mechanistically. This problem can be overcome by reaction-diffusion models for CO2 transport, production and fixation in leaves. However, most reaction-diffusion models are complex and unattractive for procedures that require a large number of runs, like parameter optimisation. This study provides a simpler reaction-diffusion model. It is parameterized by both leaf physiological and leaf anatomical data. The anatomical data consisted of the thickness of the cell wall, cytosol and stroma, and the area ratios of mesophyll exposed to the intercellular air space to leaf surfaces and exposed chloroplast to exposed mesophyll surfaces. The model was used directly to estimate photosynthetic parameters from a subset of the measured light and CO2 response curves; the remaining data were used for validation. The model predicted light and CO2 response curves reasonably well for 15-day-old tomato (cv. Admiro) leaves, if (photo)respiratory CO2 release was assumed to take place in the inner cytosol or in the gaps between the chloroplasts. The model was also used to calculate the fraction of CO2 produced by (photo)respiration that is re-assimilated in the stroma; this fraction ranged from 56% to 76%. In future research, the model should be further validated to better understand how the re-assimilation of (photo)respired CO2 is affected by environmental conditions and physiological parameters.
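The drawdown from the intercellular CO2 level Ci to Cc across the anatomical layers can be caricatured as resistances in series (a sketch only: the paper's reaction-diffusion model additionally places distributed (photo)respiratory sources inside the layers, and all names, units and values here are illustrative):

```python
def cc_from_ci(ci, flux, layers):
    """Serial-resistance sketch of the Ci -> Cc drawdown.

    layers : (thickness, diffusivity) pairs, e.g. cell wall, cytosol, stroma;
             each layer contributes a diffusive resistance thickness / diffusivity.
    flux   : net CO2 flux toward Rubisco (consistent units assumed).
    """
    r_total = sum(thickness / diffusivity for thickness, diffusivity in layers)
    return ci - flux * r_total

# toy numbers: two layers of resistance 2.0 and 3.0, unit flux
cc = cc_from_ci(30.0, 1.0, [(2.0, 1.0), (3.0, 1.0)])
```

Thicker or less permeable layers lower Cc for the same flux, which is the mechanistic dependence on anatomy that a single lumped mesophyll resistance hides.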
Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert
2016-01-01
State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of an SSM application that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) did the model produce some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here offers an alternative for those accustomed to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718
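The movement process underlying such SSMs, a random walk attracted to a home-range centre through an Ornstein-Uhlenbeck term, can be sketched with a simple Euler-Maruyama simulation (parameter names and values are illustrative, and the receiver-array observation model is omitted):

```python
import numpy as np

def simulate_ou_track(n_steps, dt, home, tau, sigma, rng):
    """Simulate a 2-D Ornstein-Uhlenbeck movement path around a home-range centre.

    tau   : strength of attraction toward `home` (1/time)
    sigma : intensity of the diffusive (random-walk) noise
    """
    pos = np.empty((n_steps, 2))
    pos[0] = home
    for t in range(1, n_steps):
        drift = -tau * (pos[t - 1] - home) * dt          # pull back toward the centre
        noise = sigma * np.sqrt(dt) * rng.standard_normal(2)
        pos[t] = pos[t - 1] + drift + noise
    return pos

rng = np.random.default_rng(0)
track = simulate_ou_track(5000, 0.1, home=np.array([0.0, 0.0]),
                          tau=0.5, sigma=1.0, rng=rng)
```

The stationary spread of such a track is sigma^2 / (2 tau) per axis, which is what makes the home-range size identifiable from positional data.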
NASA Astrophysics Data System (ADS)
Kalanov, Temur Z.
2003-04-01
A new theory of space is suggested. It represents a new point of view that has arisen from a critical analysis of the foundations of physics (in particular, the theory of relativity and quantum mechanics), mathematics, cosmology and philosophy. The main idea following from the analysis is that the concept of movement is a key to understanding the essence of space. The starting point of the theory is the following philosophical (dialectical-materialistic) principles. (a) The principle of the materiality (of the objective reality) of Nature: Nature (the Universe) is a system (a set) of material objects (particles, bodies, fields); each object has properties and features, and these properties and features are inseparable characteristics of the material object and belong only to the material object. (b) The principle of the existence of a material object: an object exists as objective reality, and movement is a form of existence of the object. (c) The principle (definition) of movement of an object: movement is change (i.e. transition of some states into others) in general; movement determines a direction, and direction characterizes movement. (d) The principle of the existence of time: time exists as a parameter of the system of reference. These principles lead to the following statements expressing the essence of space. (1) There is no space in general; space exists only as a form of existence of the properties and features of the object. This means that space is a set of the measures of the object (a measure is the philosophical category meaning the unity of the qualitative and quantitative determinacy of the object). In other words, the space of the object is a set of the states of the object. (2) The states of the object are manifested only in a system of reference.
The main informational property of the unitary system "researched physical object + system of reference" is that the system of reference determines (measures, calculates) the parameters of the subsystem "researched physical object" (for example, the coordinates of the object M); the parameters characterize the system of reference (for example, the system of coordinates S). (3) Each parameter of the object is its measure. The total number of mutually independent parameters of the object is called the dimension of the space of the object. (4) The set of numerical values (i.e. the range, the spectrum) of each parameter is a subspace of the object. (The coordinate space, the momentum space and the energy space are examples of subspaces of the object.) (5) The set of the parameters of the object is divided into two non-intersecting (opposite) classes: the class of internal parameters and the class of non-internal (i.e. external) parameters. The class of external parameters is divided into two non-intersecting (opposite) subclasses: the subclass of absolute parameters (characterizing the form and sizes of the object) and the subclass of non-absolute (relative) parameters (characterizing the position and coordinates of the object). (6) The set of external parameters forms the external space of the object, called the geometrical space of the object. (7) Since a macroscopic object has three mutually independent sizes, the dimension of its external absolute space is equal to three. Consequently, the dimension of its external relative space is also equal to three. Thus, the total dimension of the external space of the macroscopic object is equal to six. (8) In the general case, the external absolute space (i.e. the form, the sizes) and the external relative space (i.e. the position, the coordinates) of any object are mutually dependent because of the influence of a medium. The geometrical space of such an object is called non-Euclidean space.
If the external absolute space and the external relative space of some object are mutually independent, then the external relative space of such an object is a homogeneous and isotropic geometrical space, called the Euclidean space of the object. Consequences: (i) the question of the true geometry of the Universe is incorrect; (ii) the theory of relativity has no physical meaning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Robyn, E-mail: robynbanerjee@gmail.com; Chakraborty, Santam; Nygren, Ian
Purpose: To determine whether volumes based on contours of the peritoneal space can be used instead of individual small bowel loops to predict for grade ≥3 acute small bowel toxicity in patients with rectal cancer treated with neoadjuvant chemoradiation therapy. Methods and Materials: A standardized contouring method was developed for the peritoneal space and retrospectively applied to the radiation treatment plans of 67 patients treated with neoadjuvant chemoradiation therapy for rectal cancer. Dose-volume histogram (DVH) data were extracted and analyzed against patient toxicity. Receiver operating characteristic analysis and logistic regression were carried out for both contouring methods. Results: Grade ≥3 small bowel toxicity occurred in 16% (11/67) of patients in the study. A highly significant dose-volume relationship between small bowel irradiation and acute small bowel toxicity was supported by the use of both small bowel loop and peritoneal space contouring techniques. Receiver operating characteristic analysis demonstrated that, for both contouring methods, the greatest sensitivity for predicting toxicity was associated with the volume receiving between 15 and 25 Gy. Conclusion: DVH analysis of peritoneal space volumes accurately predicts grade ≥3 small bowel toxicity in patients with rectal cancer receiving neoadjuvant chemoradiation therapy, suggesting that the contours of the peritoneal space provide a reasonable surrogate for the contours of individual small bowel loops. The study finds that a small bowel V15 less than 275 cc and a peritoneal space V15 less than 830 cc are associated with a less than 10% risk of grade ≥3 acute toxicity.
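As a sketch of the DVH metric behind the reported cutoffs, a Vx value is simply the absolute structure volume receiving at least x Gy (the dose array, voxel size and numbers below are illustrative toys, not the study's data):

```python
import numpy as np

def v_dose(dose_gy, voxel_cc, threshold_gy=15.0):
    """Absolute volume (cc) of a structure receiving >= threshold_gy."""
    return float(np.count_nonzero(dose_gy >= threshold_gy) * voxel_cc)

# toy structure: 10,000 voxels of 0.05 cc each with doses drawn uniformly on 0-40 Gy
rng = np.random.default_rng(1)
dose = rng.uniform(0.0, 40.0, size=10_000)
v15 = v_dose(dose, voxel_cc=0.05)
low_risk = v15 < 275.0   # the abstract's small-bowel V15 cutoff for <10% toxicity risk
```

In practice the per-voxel doses come from the treatment planning system for the contoured structure (small bowel loops or peritoneal space), and the same function with an 830 cc cutoff would apply to the peritoneal-space surrogate.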
Murray, Chuck
2007-04-01
When entering into office-space lease agreements with hospitals, physician practice administrators need to pay close attention to the federal anti-kickback statute and the Stark law. Compliance with these regulations calls for adherence to fair market value and commercial reasonableness, blurry terms open to interpretation. This article provides a framework for defining fair market value and commercial reasonableness in real-estate transactions with hospitals.
NASA Astrophysics Data System (ADS)
Atanasov, Victor
2017-07-01
We extend the superconductor's free energy to include an interaction of the order parameter with the curvature of space-time. This interaction leads to a geometry-dependent coherence length and Ginzburg-Landau parameter, which suggests that the curvature of space-time can change the superconductor's type. The curvature of space-time does not affect the ideal diamagnetism of the superconductor but acts as a chemical potential. In a particular circumstance, the geometric field becomes order-parameter dependent; therefore, the superconductor's order-parameter dynamics affects the curvature of space-time, and electrical or internal quantum mechanical energy can be channelled into the curvature of space-time. Experimental consequences are discussed.
Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design
NASA Technical Reports Server (NTRS)
Anderson, B. J.; Justus, C. G.; Batts, G. W.
2001-01-01
Thermal analysis and design of Earth-orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and the outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems of various thermal response time constants. However, working with the engineering community and the SSP-30425 and TM-4527 products over a number of years revealed difficulties in the interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the selection of parameters for input into extreme hot and cold thermal analyses and design specifications. In doing so, greatly improved values for the cold-case OLR for high-inclination orbits were derived. Thermal parameters for satellites in low-, medium-, and high-inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information on the interpretation and application of the information and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.
NASA Astrophysics Data System (ADS)
Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping
2018-03-01
System reliability theory has been a research hotspot of management science and systems engineering in recent years, and construction reliability is useful for the quantitative evaluation of the project management level. According to reliability theory and the target system of engineering project management, construction reliability is defined. Based on fuzzy mathematics theory and language operators, the value space of construction reliability is divided into seven fuzzy subsets; correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the operation of language operators, which provides the corresponding methods and parameters for the evaluation of construction reliability. This method is shown to be scientific and reasonable for construction conditions and is a useful attempt at theory and method research on engineering project system reliability.
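The division of the reliability value space into seven fuzzy subsets could be sketched with, for example, triangular membership functions (the abstract does not specify the functional form; the shapes, centres and the [0, 1] value space below are assumptions for illustration):

```python
import numpy as np

PEAKS = np.linspace(0.0, 1.0, 7)    # assumed centres of the seven fuzzy subsets

def membership(x, k, width=1.0 / 6.0):
    """Triangular membership of reliability value x in fuzzy subset k (0..6)."""
    return max(0.0, 1.0 - abs(x - PEAKS[k]) / width)

def classify(x):
    """Linguistic grade for x: index of the subset with the highest membership."""
    return max(range(7), key=lambda k: membership(x, k))
```

A measured reliability value then maps to a linguistic grade (e.g. "very low" through "very high") via whichever subset it belongs to most strongly, with the membership degrees expressing the fuzziness of the boundary between grades.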
A system definition study for the Advanced Meteorological Temperature Sounder (AMTS)
NASA Technical Reports Server (NTRS)
1977-01-01
The functional requirements of Exhibit A (11) were used as the baseline for the conceptual design of a fixed-grating, out-of-plane, multidetector spectrometer for the Space Shuttle application. Because the grating instrument would be large, the 28-element detector array would be difficult to cool radiatively from a free-flying spacecraft, and increasing the spectral resolution of the grating instrument would be difficult in an instrument of reasonable size, a parallel study of a Michelson interferometer spectrometer was undertaken. This type of instrument offers compact size, fewer detectors to cool, and the possibility of increased spectral resolution. The design and performance parameters of both the grating and interferometer approaches are described. The tradeoffs involved in comparing the two systems for sounding applications are discussed.
NASA Technical Reports Server (NTRS)
Fried, D. L.
1975-01-01
Laser scintillation data obtained by the NASA Goddard Space Flight Center balloon flight no. 5 from White Sands Missile Range on 19 October 1973 are analyzed. The measurement data, taken with various size receiver apertures, were related to predictions of aperture averaging theory, and it is concluded that the data are in reasonable agreement with theory. The following parameters are assigned to the vertical distribution of the strength of turbulence during the period of the measurements (daytime), for lambda = 0.633 microns and the source at the zenith: the aperture averaging length is d_o = 0.125 m, and the log-amplitude variance is (beta_l)^2 = 0.084 square nepers. This corresponds to a normalized point intensity variance of 0.40.
NASA Technical Reports Server (NTRS)
Schroeder, Lyle C.; Bailey, M. C.; Harrington, Richard F.; Kendall, Bruce M.; Campbell, Thomas G.
1994-01-01
High-spatial-resolution microwave radiometer sensing from space with reasonable swath widths and revisit times favors large-aperture systems. However, with traditional precision antenna design, the size and weight requirements for such systems are in conflict with the need to emphasize small launch vehicles. This paper describes tradeoffs between the science requirements, basic operational parameters, and expected sensor performance for selected satellite radiometer concepts utilizing novel lightweight, compactly packaged real apertures. Antenna, feed, and radiometer subsystem design and calibration are presented. Preliminary results show that novel lightweight real apertures coupled with state-of-the-art radiometer designs are compatible with small launch systems, and hold promise for high-resolution earth science measurements of sea ice, precipitation, soil moisture, sea surface temperature, and ocean wind speeds.
Probing the standard model and beyond with CP violation and particle cosmology
NASA Astrophysics Data System (ADS)
Savastio, Michael Paul
We discuss topics related to CP violation and particle cosmology. First, we present developments in improving the extraction of the CP-violating parameter gamma from the decay B± → DK± followed by the subsequent decay D → K_S π+ π−. The mixing of the final-state kaon is an additional CP-violating effect which should be taken into account in the extraction of gamma, and we discuss how this should be done. We also discuss the optimization of the phase-space binning needed to extract gamma from these decays in a model-independent way. Next, we discuss some cosmological constraints on R-parity violating, Minimally Flavor Violating (MFV) Supersymmetry (SUSY). Finally, we show that orbitally excited dark matter cannot persist over cosmic timescales for various model-independent reasons.
Design and analysis of control system for VCSEL of atomic interference magnetometer
NASA Astrophysics Data System (ADS)
Zhang, Xiao-nan; Sun, Xiao-jie; Kou, Jun; Yang, Feng; Li, Jie; Ren, Zhang; Wei, Zong-kang
2016-11-01
Magnetic field detection is an important means of deep-space environment exploration. Benefiting from a simple structure and low power consumption, the atomic interference magnetometer has become one of the most promising detector payloads. A Vertical Cavity Surface Emitting Laser (VCSEL) is usually used as the light source in an atomic interference magnetometer, and its frequency stability directly affects the stability and sensitivity of the magnetometer. In this paper, a closed-loop control strategy for the VCSEL was designed and analyzed; the controller parameters were selected and the feedback error algorithm was optimized as well. According to the results of experiments performed on a hardware-in-the-loop simulation platform, the designed closed-loop control system is reasonable and is able to effectively improve the laser frequency stability during the actual work of the magnetometer.
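As an illustration of the kind of closed-loop frequency stabilization described (the paper does not give its controller structure or gains; the PID form, the first-order plant model and all numbers below are assumptions):

```python
class PID:
    """Minimal discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# drive a toy first-order laser-frequency model toward zero detuning
pid = PID(kp=0.8, ki=0.5, kd=0.0, dt=0.01)
freq = 5.0                                # arbitrary initial frequency detuning
for _ in range(5000):
    u = pid.step(0.0, freq)               # control action from the error signal
    freq += (u - 0.1 * freq) * 0.01       # simple plant with a weak decay term
```

The integral term is what removes the steady-state detuning caused by slow drifts (temperature, current), which is the practical point of closing the loop around the VCSEL.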
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Bismarck L., E-mail: bismarck_luiz@yahoo.com.br; Garcia, Amauri; Spinelli, José E.
Low temperature soldering technology encompasses Sn–Bi based alloys as reference materials for joints, since such alloys may be molten at temperatures below 180 °C. Despite the relatively high strength of these alloys, segregation problems and low ductility are recognized as potential disadvantages. Thus, for low-temperature applications, Bi–Sn eutectic or near-eutectic compositions with or without additions of alloying elements are considered interesting possibilities. In this context, additions of third elements such as Cu and Ag may be an alternative in order to reach sounder solder joints. The length scale of the phases and their proportions are known to be the most important factors affecting the final wear, mechanical and corrosion properties of ternary Sn–Bi–(Cu,Ag) alloys. In spite of this promising outlook, studies emphasizing interrelations between microstructure features and solidification thermal parameters for these multicomponent alloys are rare in the literature. In the present investigation, Sn–Bi–(Cu,Ag) alloys were directionally solidified (DS) under transient heat flow conditions. A complete characterization is performed, including experimental cooling thermal parameters, segregation (XRF), optical and scanning electron microscopies, X-ray diffraction (XRD) and the length scale of the microstructural phases. Experimental growth laws relating dendritic spacings to solidification thermal parameters have been proposed, with emphasis on the effects of Ag and Cu. The theoretical predictions of the Rappaz-Boettinger model are shown to be slightly above the experimental scatter of secondary dendritic arm spacings for both ternary Sn–Bi–Cu and Sn–Bi–Ag alloys examined. - Highlights: • Dendritic growth prevailed for the ternary Sn–Bi–Cu and Sn–Bi–Ag solder alloys. • Bi precipitates within Sn-rich dendrites were shown to be unevenly distributed. • Morphology and preferential region for the Ag3Sn growth depend on Ag content and ṪL. • The Rappaz-Boettinger model reasonably estimated the experimental scatter of λ2.
42 CFR 85a.6 - Provision of suitable space for employee interviews and examinations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Provision of suitable space for employee interviews... HEALTH INVESTIGATIONS OF PLACES OF EMPLOYMENT § 85a.6 Provision of suitable space for employee interviews... such space is reasonably available, to NIOSH to conduct private interviews with, and medical...
Navigating the Decision Space: Shared Medical Decision Making as Distributed Cognition.
Lippa, Katherine D; Feufel, Markus A; Robinson, F Eric; Shalin, Valerie L
2017-06-01
Despite increasing prominence, little is known about the cognitive processes underlying shared decision making. To investigate these processes, we conceptualize shared decision making as a form of distributed cognition. We introduce a Decision Space Model to identify physical and social influences on decision making. Using field observations and interviews, we demonstrate that patients and physicians in both acute and chronic care consider these influences when identifying the need for a decision, searching for decision parameters, and making actionable decisions. Based on the distribution of access to information and actions, we then identify four related patterns: physician-dominated; physician-defined, patient-made; patient-defined, physician-made; and patient-dominated decisions. Results suggest that (a) decision making is necessarily distributed between physicians and patients, (b) differential access to information and action over time requires participants to transform a distributed task into a shared decision, and (c) adverse outcomes may result from failures to integrate physician and patient reasoning. Our analysis unifies disparate findings in the medical decision-making literature and has implications for improving care and medical training.
Aruga, Yasuhiro; Kozuka, Masaya
2016-04-01
Needle-shaped precipitates in an aged Al-0.62Mg-0.93Si (mass%) alloy were identified using a compositional threshold method, an isoconcentration surface, in atom probe tomography (APT). The influence of thresholds on the morphological and compositional characteristics of the precipitates was investigated. Utilizing optimum parameters for the concentration space, a reliable number density of the precipitates is obtained without dependence on the elemental concentration threshold, in comparison with evaluation by transmission electron microscopy (TEM). It is suggested that careful selection of the concentration space in APT can lead to a reasonable average Mg/Si ratio for the precipitates. It was found that the maximum length and maximum diameter of the precipitates are affected by the elemental concentration threshold. Adjustment of the concentration threshold gives better agreement with the precipitate dimensions measured by TEM.
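As a toy illustration of how an isoconcentration threshold selects the voxels over which a composition such as the Mg/Si ratio is averaged (the threshold value, array names and data below are assumptions, not the study's parameters):

```python
import numpy as np

def precipitate_mg_si_ratio(mg_frac, si_frac, solute_threshold=0.05):
    """Average Mg/Si ratio over voxels whose total solute fraction (Mg + Si)
    meets an isoconcentration threshold; returns None if no voxel qualifies."""
    mask = (mg_frac + si_frac) >= solute_threshold
    if not mask.any():
        return None
    return float(mg_frac[mask].sum() / si_frac[mask].sum())

# toy voxel data: only the solute-rich voxels contribute to the ratio
mg = np.array([0.01, 0.10, 0.08])
si = np.array([0.01, 0.05, 0.04])
ratio = precipitate_mg_si_ratio(mg, si)
```

Raising or lowering `solute_threshold` grows or shrinks the selected region, which is exactly why the abstract's precipitate dimensions, number density and Mg/Si ratio can depend on the chosen threshold.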
The role of service areas in the optimization of FSS orbital and frequency assignments
NASA Technical Reports Server (NTRS)
Levis, C. A.; Wang, C. W.; Yamamura, Y.; Reilly, C. H.; Gonsalvez, D. J.
1985-01-01
A relationship is derived, on a single-entry interference basis, for the minimum allowable spacing between two satellites as a function of electrical parameters and service-area geometries. For circular beams, universal curves relate the topocentric satellite spacing angle to the service-area separation angle measured at the satellite. The corresponding geocentric spacing depends only weakly on the mean longitude of the two satellites, and this is true also for elliptical antenna beams. As a consequence, if frequency channels are preassigned, the orbital assignment synthesis of a satellite system can be formulated as a mixed-integer programming (MIP) problem or approximated by a linear programming (LP) problem, with the interference protection requirements enforced by constraints while some linear function is optimized. Possible objective-function choices are discussed, and explicit formulations are presented for the choice of the sum of the absolute deviations of the orbital locations from some prescribed ideal location set. A test problem is posed consisting of six service areas, each served by one satellite, all using elliptical antenna beams and the same frequency channels. Numerical results are given for the three ideal location prescriptions for both the MIP and LP formulations. The resulting scenarios also satisfy reasonable aggregate interference protection requirements.
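A minimal sketch of the feasibility side of this synthesis problem, assuming a fixed satellite ordering and a uniform minimum geocentric spacing; unlike the paper's MIP/LP formulations, this greedy pass only restores feasibility rather than minimizing the summed absolute deviations from the ideal locations:

```python
import numpy as np

def assign_slots(ideal_deg, min_spacing_deg):
    """Push each satellite just far enough east of its ideal longitude to
    honour a uniform minimum spacing, keeping the order of the ideal set."""
    ideal_deg = np.asarray(ideal_deg, dtype=float)
    order = np.argsort(ideal_deg)
    placed = np.empty_like(ideal_deg)
    prev = -np.inf
    for i in order:
        placed[i] = max(ideal_deg[i], prev + min_spacing_deg)
        prev = placed[i]
    return placed

# two satellites 1 deg apart with a 2 deg minimum spacing: the second shifts east
slots = assign_slots([10.0, 11.0, 30.0], 2.0)
```

In the actual LP, each absolute deviation |x_i - ideal_i| is linearized with an auxiliary variable d_i (d_i ≥ x_i - ideal_i and d_i ≥ ideal_i - x_i), the pairwise spacing requirements become linear constraints, and the solver minimizes the sum of the d_i.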
Ultrasonics and space instrumentation
NASA Technical Reports Server (NTRS)
1987-01-01
The design topic selected was an outgrowth of the experimental design work done in the Fluid Behavior in Space experiment, which relies on the measurement of minute changes of the pressure and temperature to obtain reasonably accurate volume determinations. An alternative method of volume determination is the use of ultrasonic imaging. An ultrasonic wave system is generated by wall mounted transducer arrays. The interior liquid configuration causes reflection and refraction of the pattern so that analysis of the received wave system provides a description of the configuration and hence volume. Both continuous and chirp probe beams were used in a laboratory experiment simulating a surface wetting propellant. The hardware included a simulated tank with gaseous voids, transmitting and receiving transducers, transmitters, receivers, computer interface, and computer. Analysis software was developed for image generation and interpretation of results. Space instrumentation was pursued in support of a number of experiments under development for GAS flights. The program included thirty undergraduate students pursuing major qualifying project work under the guidance of eight faculty supported by a teaching assistant. Both mechanical and electrical engineering students designed and built several microprocessor systems to measure parameters such as temperature, acceleration, pressure, velocity, and circulation in order to determine combustion products, vortex formation, gas entrainment, EMR emissions from thunderstorms, and milli-g-accelerations due to crew motions.
Human capabilities in space. [man machine interaction
NASA Technical Reports Server (NTRS)
Nicogossian, A. E.
1984-01-01
Man's ability to live and perform useful work in space was demonstrated throughout the history of manned space flight. Current planning envisions a multi-functional space station. Man's unique abilities to respond to the unforeseen and to operate at a level of complexity exceeding any reasonable amount of previous planning distinguish him from present day machines. His limitations, however, include his inherent inability to survive without protection, his limited strength, and his propensity to make mistakes when performing repetitive and monotonous tasks. By contrast, an automated system does routine and delicate tasks, exerts force smoothly and precisely, stores, and recalls large amounts of data, and performs deductive reasoning while maintaining a relative insensitivity to the environment. The establishment of a permanent presence of man in space demands that man and machines be appropriately combined in spaceborne systems. To achieve this optimal combination, research is needed in such diverse fields as artificial intelligence, robotics, behavioral psychology, economics, and human factors engineering.
Interesting viewpoints to those who will put Ada into practice
NASA Technical Reports Server (NTRS)
Carlsson, Arne
1986-01-01
Ada will most probably be used as the programming language for computers in the NASA Space Station. It is reasonable to suppose that Ada will be used for at least the embedded computers, because the high software costs for these embedded computers were the reason why Ada activities were initiated about ten years ago. The on-board computers are designed for use in space applications, where maintenance by man is impossible. All manipulation of such computers has to be performed autonomously or remotely with commands from the ground. In a manned Space Station some maintenance work can be performed by service people on board, but there are still many applications which require autonomous computers, for example, vital Space Station functions and unmanned orbital transfer vehicles. Those aspects which have emerged from the analysis of Ada characteristics, together with experience of the requirements for embedded on-board computers in space applications, are examined.
Tableau Calculus for the Logic of Comparative Similarity over Arbitrary Distance Spaces
NASA Astrophysics Data System (ADS)
Alenda, Régis; Olivetti, Nicola
The logic CSL (first introduced by Sheremet, Tishkovsky, Wolter and Zakharyaschev in 2005) allows one to reason about distance comparison and similarity comparison within a modal language. The logic can express assertions of the kind "A is closer/more similar to B than to C" and has natural applications to spatial reasoning, as well as to reasoning about concept similarity in ontologies. The semantics of CSL is defined in terms of models based on different classes of distance spaces, and it generalizes the logic S4u of topological spaces. In this paper we consider CSL defined over arbitrary distance spaces. The logic comprises a binary modality to represent comparative similarity and a unary modality to express the existence of the minimum of a set of distances. We first show that the semantics of CSL can be equivalently defined in terms of preferential models. As a consequence we obtain the finite model property of the logic with respect to its preferential semantics, a property that does not hold with respect to the original distance-space semantics. Next we present an analytic tableau calculus based on the preferential semantics. The calculus provides a decision procedure for the logic; its termination is obtained by imposing suitable blocking restrictions.
On Replacing "Quantum Thinking" with Counterfactual Reasoning
NASA Astrophysics Data System (ADS)
Narens, Louis
The probability theory used in quantum mechanics is currently being employed by psychologists to model the impact of context on decision. Its event space consists of closed subspaces of a Hilbert space, and its probability function sometimes violates the law of finite additivity of probabilities. Results from the quantum mechanics literature indicate that such a "Hilbert space probability theory" cannot be extended in a useful way to standard, finitely additive, probability theory by the addition of new events with specific probabilities. This chapter presents a new kind of probability theory that shares many fundamental algebraic characteristics with Hilbert space probability theory but does extend to standard probability theory by adjoining new events with specific probabilities. The new probability theory arises from considerations about how psychological experiments are related through counterfactual reasoning.
EDITORIAL: Interrelationship between plasma phenomena in the laboratory and in space
NASA Astrophysics Data System (ADS)
Koepke, Mark
2008-07-01
The premise of investigating basic plasma phenomena relevant to space is that an alliance exists between basic plasma physicists, using theory, computer modelling and laboratory experiments, and space science experimenters, using different instruments, either flown on different spacecraft in various orbits or stationed on the ground. The intent of this special issue on interrelated phenomena in laboratory and space plasmas is to promote the interpretation of scientific results in a broader context by sharing data, methods, knowledge, perspectives, and reasoning within this alliance. The desired outcomes are practical theories, predictive models, and credible interpretations based on the findings and expertise available. Laboratory-experiment papers that explicitly address a specific space mission or a specific manifestation of a space-plasma phenomenon, space-observation papers that explicitly address a specific laboratory experiment or a specific laboratory result, and theory or modelling papers that explicitly address a connection between both laboratory and space investigations were encouraged. Attention was given to the utility of the references for readers who seek further background, examples, and details. With the advent of instrumented spacecraft, the observation of waves (fluctuations), wind (flows), and weather (dynamics) in space plasmas was approached within the framework provided by theory with intuition provided by the laboratory experiments. Ideas on parallel electric fields, magnetic topology, inhomogeneity, and anisotropy have been refined substantially by laboratory experiments. Satellite and rocket observations, theory and simulations, and laboratory experiments have contributed to the revelation of a complex set of processes affecting the accelerations of electrons and ions in the geospace plasma. The processes range from meso-scales of several thousand kilometers to micro-scales of a few meters to kilometers. 
Papers included in this special issue serve to synthesise our current understanding of processes related to the coupling and feedback at disparate scales. Categories of topics included here are (1) ionospheric physics and (2) Alfvén-wave physics, both of which are related to the particle acceleration responsible for auroral displays, (3) whistler-mode triggering mechanism, which is relevant to radiation-belt dynamics, (4) plasmoid encountering a barrier, which has applications throughout the realm of space and astrophysical plasmas, and (5) laboratory investigations of the entire magnetosphere or the plasma surrounding the magnetosphere. The papers are ordered from processes that take place nearest the Earth to processes that take place at increasing distances from Earth. Many advances in understanding space plasma phenomena have been linked to insight derived from theoretical modeling and/or laboratory experiments. Observations from space-borne instruments are typically interpreted using theoretical models developed to predict the properties and dynamics of space and astrophysical plasmas. The usefulness of customized laboratory experiments for providing confirmation of theory by identifying, isolating, and studying physical phenomena efficiently, quickly, and economically has been demonstrated in the past. The benefits of laboratory experiments to investigating space-plasma physics are their reproducibility, controllability, diagnosability, reconfigurability, and affordability compared to a satellite mission or rocket campaign. Certainly, the plasma being investigated in a laboratory device is quite different from that being measured by a spaceborne instrument; nevertheless, laboratory experiments discover unexpected phenomena, benchmark theoretical models, develop physical insight, establish observational signatures, and pioneer diagnostic techniques. 
Explicit reference to such beneficial laboratory contributions is occasionally left out of the citations in the space-physics literature in favor of theory-paper counterparts and, thus, the scientific support that laboratory results can provide to the development of space-relevant theoretical models is often under-recognized. It is unrealistic to expect the dimensional parameters corresponding to space plasma to be matchable in the laboratory. However, a laboratory experiment is considered well designed if the subset of parameters relevant to a specific process shares the same phenomenological regime as the subset of analogous space parameters, even if less important parameters are mismatched. Regime boundaries are assigned by normalizing a dimensional parameter to an appropriate reference or scale value to make it dimensionless and noting the values at which transitions occur in the physical behavior or approximations. An example of matching regimes for cold-plasma waves is finding a 45° diagonal line on the log-log CMA diagram along which lie both a laboratory-observed wave and a space-observed wave. In such a circumstance, a space plasma and a lab plasma will support the same kind of modes if the dimensionless parameters are scaled properly (Bellan 2006 Fundamentals of Plasma Physics (Cambridge: Cambridge University Press) p 227). The plasma source, configuration geometry, and boundary conditions associated with a specific laboratory experiment are characteristic elements that affect the plasma and plasma processes that are being investigated. Space plasma is not exempt from an analogous set of constraining factors that likewise influence the phenomena that occur. Typically, each morphologically distinct region of space has associated with it plasma that is unique by virtue of the various mechanisms responsible for the plasma's presence there, as if the plasma were produced by a unique source. 
Boundary effects that typically constrain the possible parameter values to lie within one or more restricted ranges are inescapable in laboratory plasma. The goal of a laboratory experiment is to examine the relevant physics within these ranges and extrapolate the results to space conditions that may or may not be subject to any restrictions on the values of the plasma parameters. The interrelationship between laboratory and space plasma experiments has been cultivated at a low level and the potential scientific benefit in this area has yet to be realized. The few but excellent examples of joint papers, joint experiments, and directly relevant cross-disciplinary citations are a direct result of the emphasis placed on this interrelationship two decades ago. Building on this special issue, Plasma Physics and Controlled Fusion plans to create a dedicated webpage to highlight papers directly relevant to this field published either in the recent past or in the future. It is hoped that this resource will appeal to the readership in the laboratory-experiment and space-plasma communities and improve the cross-fertilization between them.
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
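The random-polling load-balancing scheme named above can be caricatured in a single-process simulation (a minimal sketch assuming a toy model of unit-cost tasks; the helper `random_polling` and the policy of stealing half the victim's queue are illustrative choices, not details from the report):

```python
import random

def random_polling(queues, rng):
    """One scheduling round: each idle worker polls one random peer and,
    if the peer has work, steals half of its task queue."""
    n = len(queues)
    for i in range(n):
        if not queues[i]:                                   # worker i is idle
            victim = rng.choice([j for j in range(n) if j != i])
            half = len(queues[victim]) // 2
            if half:                                        # take the back half
                queues[i] = queues[victim][-half:]
                queues[victim] = queues[victim][:-half]
    return queues

rng = random.Random(42)
# all work starts on worker 0, as after an initial domain decomposition
queues = [list(range(64))] + [[] for _ in range(7)]
for _ in range(5):
    queues = random_polling(queues, rng)
sizes = [len(q) for q in queues]
```

In a real distributed implementation the polling is asynchronous across processors and must be paired with a termination-detection scheme (such as the global task count or token passing mentioned above) to decide when every queue is permanently empty.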
A Sensitivity Analysis of fMRI Balloon Model.
Zayane, Chadia; Laleg-Kirati, Taous Meriem
2015-01-01
Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need of adding knowledge, choosing certain paradigms, and completing the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
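The kind of global sensitivity analysis invoked above can be illustrated with a crude first-order (Sobol-style) estimate on a toy model (NOT the Balloon model; `toy_model` and its three parameters are purely illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_model(theta):
    """Toy stand-in for a stimulus-to-output map: the output depends
    strongly on a, moderately on b, and only weakly on c."""
    a, b, c = theta
    return a * np.exp(-b) + 0.1 * c

n = 20000
samples = rng.uniform(0.0, 1.0, (n, 3))
y = np.array([toy_model(t) for t in samples])
total_var = y.var()

first_order = []
for j in range(3):
    # crude first-order Sobol index: variance of the conditional mean of y
    # given the j-th parameter, estimated by binning that parameter
    bins = np.digitize(samples[:, j], np.linspace(0.0, 1.0, 21))
    cond_mean = np.array([y[bins == b].mean() for b in range(1, 21)])
    first_order.append(cond_mean.var() / total_var)
```

A parameter with a first-order index near zero is poorly identifiable from the output alone, which is exactly the situation that motivates freezing parameters or assigning them informative priors.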
A Structured Reasoning Space for Design of Complex, Socio-Technical Systems
2006-10-01
gather during midmorning at an identified location. The human intelligence sources indicate that the meeting will commence at 10 a.m. and finish ...refueling station. [figure: types of suppression capability; remainder illegible] ...designer or analyst could use the reasoning space to consider whether current and alternative configurations can result in the plan being finished within
Cognition versus Constitution of Objects: From Kant to Modern Physics
NASA Astrophysics Data System (ADS)
Mittelstaedt, Peter
2009-07-01
Classical mechanics in phase space as well as quantum mechanics in Hilbert space lead to states and observables but not to objects that may be considered as carriers of observable quantities. However, in both cases objects can be constituted as new entities by means of invariance properties of the theories in question. We show that this way of reasoning has a long history in physics and philosophy and that it can be traced back to the transcendental arguments in Kant’s critique of pure reason.
Simulation study on electric field intensity above train roof
NASA Astrophysics Data System (ADS)
Fan, Yizhe; Li, Huawei; Yang, Shasha
2018-04-01
To accurately understand the distribution of the electric field in the space above the train roof and to select a reasonable installation position for the detection device, in this paper a 3D pantograph-catenary model is established using SolidWorks, and the spatial electric field distribution of the pantograph-catenary model is simulated with Comsol. Based on the analysis of the electric field intensity within the 0.4 m space above the train roof, we propose a reasonable installation position for the detection device.
14 CFR 1214.813 - Computation of sharing and pricing parameters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Computation of sharing and pricing parameters. 1214.813 Section 1214.813 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SPACE FLIGHT Reimbursement for Spacelab Services § 1214.813 Computation of sharing and pricing...
NASA Astrophysics Data System (ADS)
Luminari, Nicola; Airiau, Christophe; Bottaro, Alessandro
2017-11-01
In the description of the homogenized flow through a porous medium saturated by a fluid, the apparent permeability tensor is one of the most important parameters to evaluate. In this work we compute numerically the apparent permeability tensor for a 3D porous medium made of rigid cylinders using the VANS (Volume-Averaged Navier-Stokes) theory. Such a tensor varies with the Reynolds number, the mean pressure gradient orientation and the porosity. A database is created exploring the space of the above parameters. Including the two Euler angles that define the mean pressure gradient is extremely important to capture well possible 3D effects. Based on the database, a kriging interpolation metamodel is used to obtain an estimate of all the tensor components for any input parameters. Preliminary results of the flow in a porous channel based on the metamodel and the VANS closure are shown; the use of such a reduced order model together with a numerical code based on the equations at the macroscopic scale permits keeping the computational times within reasonable levels. The authors acknowledge the IDEX Foundation of the University of Toulouse 570 for the financial support granted to the last author under the project Attractivity Chairs.
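A minimal sketch of the kriging-metamodel idea, assuming a smooth toy three-input "permeability" function in place of the VANS-computed database (the Gaussian covariance, its length scale, and all names are illustrative assumptions):

```python
import numpy as np

def fit_kriging(X, y, length=0.2, jitter=1e-8):
    """Fit a simple zero-mean kriging interpolant with Gaussian covariance.
    The jitter regularizes the (often ill-conditioned) covariance matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * length**2)) + jitter * np.eye(len(X))
    return X, np.linalg.solve(K, y), length

def kriging_predict(model, Xq):
    X, w, length = model
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length**2)) @ w

# toy database: a smooth scalar "permeability" over three normalized inputs
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (300, 3))
truth = lambda Z: np.sin(3.0 * Z[:, 0]) + Z[:, 1] * Z[:, 2]
model = fit_kriging(X, truth(X))

Xq = rng.uniform(0.1, 0.9, (50, 3))
query_err = np.abs(kriging_predict(model, Xq) - truth(Xq)).max()
train_err = np.abs(kriging_predict(model, X) - truth(X)).max()
```

In the setting above, each component of the apparent permeability tensor would get its own interpolant of this kind, with inputs such as the Reynolds number, the two Euler angles of the mean pressure gradient, and the porosity.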
Quasi-dynamic earthquake fault systems with rheological heterogeneity
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.
2009-12-01
Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they cannot support physical statements about the described seismicity. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for a physical reasoning about, and interpretation of, the produced seismicity and system dynamics. Recently, different fault system earthquake simulators based on frictional stick-slip behavior have been used to study the effects of stress heterogeneity, rheological heterogeneity, or geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of the characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction allowing three evolutionary stages of independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze the spatial and temporal clustering of events and the characteristics of system dynamics by means of physical parameters of the two approaches.
Emulation: A fast stochastic Bayesian method to eliminate model space
NASA Astrophysics Data System (ADS)
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been the goal of geophysicists ever since the datasets first started to be produced. There are two broad approaches to this kind of problem, traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, using both these kinds of schemes has proved prohibitively expensive, both in computing power and time cost, due to the normally very large model space which needs to be searched using forward model simulators which take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs by a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use this to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. 
We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) describing the same region), we can include in our modelling uncertainties in the data measurements, the relationships between the various physical parameters involved, as well as the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties, and so the emulator is also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that our uncertainties from all sources (both data and model) can be fully evaluated.
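The emulate-calibrate-screen loop described above can be sketched as follows (a minimal illustration: `simulator` is a toy stand-in for an expensive forward model, and the polynomial basis, observation variance, and implausibility cutoff of 3 standard deviations are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(m):
    """Toy stand-in for an expensive forward model run."""
    return m[0] ** 2 + np.sin(m[1])

# 1) train a cheap emulator on a small number of "expensive" runs
M = rng.uniform(-2.0, 2.0, (40, 2))
y = np.array([simulator(m) for m in M])
A = np.column_stack([np.ones(40), M, M**2, np.sin(M)])   # cheap basis
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid_sd = np.std(y - A @ coef)                          # emulator-error calibration

def emulate(m):
    a = np.concatenate([[1.0], m, m**2, np.sin(m)])
    return a @ coef

# 2) screen a large candidate model space against an observed datum
observed = simulator(np.array([1.0, 0.5]))
candidates = rng.uniform(-2.0, 2.0, (10000, 2))
sd = np.sqrt(resid_sd**2 + 0.01)                         # emulator + data variance
implaus = np.abs(np.array([emulate(m) for m in candidates]) - observed) / sd
plausible = candidates[implaus < 3.0]                    # drop implausible models
```

Only the models surviving the screen would then be passed to the expensive deterministic or MCMC inversion, which is the cost saving the abstract describes.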
INDUCTIVE SYSTEM HEALTH MONITORING WITH STATISTICAL METRICS
NASA Technical Reports Server (NTRS)
Iverson, David L.
2005-01-01
Model-based reasoning is a powerful method for performing system monitoring and diagnosis. Building models for model-based reasoning is often a difficult and time consuming process. The Inductive Monitoring System (IMS) software was developed to provide a technique to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or which require computer models that are too complex to use for real time monitoring. IMS processes nominal data sets collected either directly from the system or from simulations to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. In particular, a clustering algorithm forms groups of nominal values for sets of related parameters. This establishes constraints on those parameter values that should hold during nominal operation. During monitoring, IMS provides a statistically weighted measure of the deviation of current system behavior from the established normal baseline. If the deviation increases beyond the expected level, an anomaly is suspected, prompting further investigation by an operator or automated system. IMS has shown potential to be an effective, low cost technique to produce system monitoring capability for a variety of applications. We describe the training and system health monitoring techniques of IMS. We also present the application of IMS to a data set from the Space Shuttle Columbia STS-107 flight. IMS was able to detect an anomaly in the launch telemetry shortly after a foam impact damaged Columbia's thermal protection system.
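The cluster-then-score monitoring idea can be sketched as follows (a minimal stand-in using a k-means-style procedure on hypothetical sensor vectors; the two-mode data, the plain Euclidean anomaly score, and all names are illustrative, not details of the IMS implementation, which uses statistically weighted deviations):

```python
import numpy as np

rng = np.random.default_rng(0)

# nominal training data: two operating modes of a toy 3-parameter subsystem
nominal = np.vstack([rng.normal([0.0, 0.0, 0.0], 0.1, (200, 3)),
                     rng.normal([5.0, 5.0, 5.0], 0.1, (200, 3))])

def build_knowledge_base(data, k=2, iters=20):
    """Cluster nominal parameter vectors into k classes (a k-means stand-in
    for the knowledge base of nominal-value groups)."""
    centers = [data[0]]
    for _ in range(k - 1):            # farthest-point initialisation
        d = ((data[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(data[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):            # standard k-means refinement
        lab = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([data[lab == j].mean(0) for j in range(k)])
    return centers

def deviation(x, kb):
    """Distance to the nearest nominal cluster: the anomaly score."""
    return float(np.sqrt(((kb - x) ** 2).sum(-1)).min())

kb = build_knowledge_base(nominal)
nominal_score = deviation(np.array([0.05, 0.0, 0.0]), kb)   # near a mode
anomaly_score = deviation(np.array([2.5, 2.5, 2.5]), kb)    # between modes
```

During monitoring, a score that stays persistently above the baseline level would flag an anomaly for an operator or automated system, as in the Columbia telemetry example.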
Ni, Xuan; Yang, Rui; Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso
2010-12-01
Microscopic models based on evolutionary games on spatially extended scales have recently been developed to address the fundamental issue of species coexistence. In this pursuit almost all existing works focus on the relevant dynamical behaviors originated from a single but physically reasonable initial condition. To gain comprehensive and global insights into the dynamics of coexistence, here we explore the basins of coexistence and extinction and investigate how they evolve as a basic parameter of the system is varied. Our model is cyclic competition among three species as described by the classical rock-paper-scissors game, and we consider both discrete lattice and continuous space, incorporating species mobility and intraspecific competitions. Our results reveal that, for all cases considered, a basin of coexistence always emerges and persists in a substantial part of the parameter space, indicating that coexistence is a robust phenomenon. Factors such as intraspecific competition can, in fact, promote coexistence by facilitating the emergence of the coexistence basin. In addition, we find that the extinction basins can exhibit quite complex structures in terms of the convergence time toward the final state for different initial conditions. We have also developed models based on partial differential equations, which yield basin structures that are in good agreement with those from microscopic stochastic simulations. To understand the origin and emergence of the observed complicated basin structures is challenging at present due to the extremely high dimensional nature of the underlying dynamical system. © 2010 American Institute of Physics.
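A minimal lattice version of the cyclic rock-paper-scissors competition underlying the study can be sketched as follows (an illustrative Monte Carlo with predation-plus-reproduction updates only; mobility and intraspecific competition, which the paper includes, are omitted, and the lattice size and step count are arbitrary):

```python
import numpy as np

BEATS = {1: 2, 2: 3, 3: 1}   # species 1 beats 2, 2 beats 3, 3 beats 1

def simulate(n=30, steps=20000, seed=0):
    """Random sequential updates of cyclic competition on a periodic lattice:
    a random site plays a random von Neumann neighbour and, if it wins,
    converts the neighbour to its own species."""
    rng = np.random.default_rng(seed)
    grid = rng.integers(1, 4, (n, n))            # random initial condition
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    for _ in range(steps):
        i, j = rng.integers(0, n, 2)
        di, dj = moves[rng.integers(0, 4)]
        ni, nj = (i + di) % n, (j + dj) % n      # periodic boundary
        if BEATS[int(grid[i, j])] == int(grid[ni, nj]):
            grid[ni, nj] = grid[i, j]            # predation + reproduction
    return grid

grid = simulate()
counts = {s: int((grid == s).sum()) for s in (1, 2, 3)}
```

Scanning such runs over many initial conditions and over a control parameter (e.g., mobility) is how a basin of coexistence versus basins of extinction would be mapped out.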
NASA Astrophysics Data System (ADS)
Hong, Sungwook E.; Zoe, Heeseung; Ahn, Kyungjin
2017-11-01
We study the impact of thermal inflation on the formation of cosmological structures and present astrophysical observables which can be used to constrain and possibly probe the thermal inflation scenario. These are dark matter halo abundance at high redshifts, satellite galaxy abundance in the Milky Way, and fluctuation in the 21-cm radiation background before the epoch of reionization. The thermal inflation scenario leaves a characteristic signature on the matter power spectrum by boosting the amplitude at a specific wave number determined by the number of e-foldings during thermal inflation (N_{bc}), and strongly suppressing the amplitude for modes at smaller scales. For a reasonable range of parameter space, one of the consequences is the suppression of minihalo formation at high redshifts and that of satellite galaxies in the Milky Way. While this effect is substantial, it is degenerate with other cosmological or astrophysical effects. The power spectrum of the 21-cm background probes this impact more directly, and its observation may be the best way to constrain the thermal inflation scenario due to the characteristic signature in the power spectrum. The Square Kilometre Array (SKA) in phase 1 (SKA1) has sensitivity large enough to achieve this goal for models with N_{bc} ≳ 26 if a 10000-hr observation is performed. The final phase SKA, with anticipated sensitivity about an order of magnitude higher, seems more promising and will cover a wider parameter space.
Constructionism and the space of reasons
NASA Astrophysics Data System (ADS)
Mackrell, Kate; Pratt, Dave
2017-12-01
Constructionism, best known as the framework for action underpinning Seymour Papert's work with Logo, has stressed the importance of engaging students in creating their own products. Noss and Hoyles have argued that such activity enables students to participate increasingly in a web of connections to further their activity. Ainley and Pratt have elaborated that learning is best facilitated when the student is engaged in a purposeful activity that leads to appreciation of the power of mathematical ideas. Constructionism gives prominence to how the learner's logical reasoning and emotion-driven reasons for engagement are inseparable. We argue that the dependence of constructionism upon the orienting framework of constructivism fails to provide sufficient theoretical underpinning for these ideas. We therefore propose an alternative orienting framework, in which learning takes place through initiation into the space of reasons, such that a person's thoughts, actions and feelings are increasingly open to critique and justification. We argue that knowing as responsiveness to reasons encompasses not only the powerful ideas of mathematics and disciplinary knowledge of modes of enquiry but also the extralogical, such as in feelings of the aesthetic, control, excitement, elegance and efficiency. We discuss the implication that mathematics educators deeply consider the learner's reasons for purposeful activity and design settings in which these reasons can be made public and open to critique.
Image quality, space-qualified UV interference filters
NASA Technical Reports Server (NTRS)
Mooney, Thomas A.
1992-01-01
The progress during the contract period is described. The project involved fabrication of image quality, space-qualified bandpass filters in the 200-350 nm spectral region. Ion-assisted deposition (IAD) was applied to produce stable, reasonably durable filter coatings on space compatible UV substrates. Thin film materials and UV transmitting substrates were tested for resistance to simulated space effects.
Calibration Laboratory Capabilities Listing as of April 2009
NASA Technical Reports Server (NTRS)
Kennedy, Gary W.
2009-01-01
This document reviews the Calibration Laboratory capabilities of various NASA centers (i.e., Glenn Research Center and Plum Brook Test Facility; Kennedy Space Center; Marshall Space Flight Center; Stennis Space Center; and White Sands Test Facility). Some of the parameters reported are: alternating current, direct current, dimensional, mass, force, torque, pressure and vacuum, safety, and thermodynamics parameters. Some centers reported other parameters.
A state-based approach to trend recognition and failure prediction for the Space Station Freedom
NASA Technical Reports Server (NTRS)
Nelson, Kyle S.; Hadden, George D.
1992-01-01
A state-based reasoning approach to trend recognition and failure prediction for the Attitude Determination and Control System (ADCS) of the Space Station Freedom (SSF) is described. The problem domain is characterized by features (e.g., trends and impending failures) that develop over a variety of time spans, anywhere from several minutes to several years. Our state-based reasoning approach, coupled with intelligent data screening, allows features to be tracked as they develop in a time-dependent manner. That is, each state machine has the ability to encode a time frame for the feature it detects. As features are detected, they are recorded and can be used as input to other state machines, creating a hierarchical feature recognition scheme. Furthermore, each machine can operate independently of the others, allowing simultaneous tracking of features. State-based reasoning was implemented in the trend recognition and prognostic modules of a prototype Space Station Freedom Maintenance and Diagnostic System (SSFMDS) developed at Honeywell's Systems and Research Center.
NASA Technical Reports Server (NTRS)
Rash, James L. (Editor); Dent, Carolyn P. (Editor)
1989-01-01
Theoretical and implementation aspects of AI systems for space applications are discussed in reviews and reports. Sections are devoted to planning and scheduling, fault isolation and diagnosis, data management, modeling and simulation, and development tools and methods. Particular attention is given to a situated reasoning architecture for space repair and replace tasks, parallel plan execution with self-processing networks, the electrical diagnostics expert system for Spacelab life-sciences experiments, diagnostic tolerance for missing sensor data, the integration of perception and reasoning in fast neural modules, a connectionist model for dynamic control, and applications of fuzzy sets to the development of rule-based expert systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konno, Kohkichi, E-mail: kohkichi@tomakomai-ct.ac.jp; Nagasawa, Tomoaki, E-mail: nagasawa@tomakomai-ct.ac.jp; Takahashi, Rohta, E-mail: takahashi@tomakomai-ct.ac.jp
We consider the scattering of a quantum particle by two independent, successive parity-invariant point interactions in one dimension. The parameter space for the two point interactions is given by the direct product of two tori, which is described by four parameters. By investigating the effects of the two point interactions on the transmission probability of a plane wave, we obtain the conditions on the parameter space under which perfect resonant transmission occurs. The resonance conditions are found to be described by symmetric and anti-symmetric relations between the parameters.
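For the special case of two identical delta-function interactions (one simple parity-invariant point interaction at each location), the transmission probability can be computed with transfer matrices; the sketch below (units hbar = m = 1; the strength g and spacing L are illustrative) exhibits the perfect-resonance peaks:

```python
import numpy as np

def transfer_delta(beta):
    """Transfer matrix of a single delta potential of dimensionless
    strength beta = g/k (relating plane-wave amplitudes left to right)."""
    return np.array([[1 + 1j * beta, 1j * beta],
                     [-1j * beta, 1 - 1j * beta]])

def transmission(k, g, L):
    """Transmission probability through two identical deltas g spaced by L.
    For a unimodular transfer matrix M, T = 1/|M[1,1]|^2."""
    beta = g / k
    P = np.diag([np.exp(1j * k * L), np.exp(-1j * k * L)])  # free propagation
    M = transfer_delta(beta) @ P @ transfer_delta(beta)
    return float(1.0 / abs(M[1, 1]) ** 2)

ks = np.linspace(0.05, 10.0, 4000)
T = np.array([transmission(k, g=2.0, L=1.0) for k in ks])
```

Perfect transmission occurs where the off-diagonal element of the composite transfer matrix vanishes, here at cot(kL) = g/k; for L = 0 the two deltas merge into a single one of strength 2g, a convenient consistency check on the matrices.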
Mapping an operator's perception of a parameter space
NASA Technical Reports Server (NTRS)
Pew, R. W.; Jagacinski, R. J.
1972-01-01
Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.
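The task can be caricatured in simulation (an illustrative greedy stand-in for the operator's discrete adjustments; the step sizes, the mean-squared-error criterion, and the discrete-time delayed-integrator realization of K e^{-tau*s}/s are assumptions, not details of the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover_output(u, k, tau, dt=0.05):
    """Discrete-time output of the crossover model k*e^(-tau*s)/s:
    a gain-scaled, time-delayed integrator driven by input u."""
    d = int(round(tau / dt))
    y = np.zeros(len(u))
    for n in range(1, len(u)):
        y[n] = y[n - 1] + dt * k * (u[n - d] if n >= d else 0.0)
    return y

u = rng.standard_normal(2000)                    # common random input
target = crossover_output(u, k=2.0, tau=0.30)    # the fixed reference model

def mse(k, tau):
    return float(np.mean((crossover_output(u, k, tau) - target) ** 2))

# greedy discrete adjustment of (k, tau) toward output convergence
k, tau = 1.0, 0.10
initial_mse = mse(k, tau)
moves = [(0.05, 0.0), (-0.05, 0.0), (0.0, 0.05), (0.0, -0.05), (0.0, 0.0)]
for _ in range(100):
    errs = [mse(k + dk, max(tau + dtau, 0.0)) for dk, dtau in moves]
    dk, dtau = moves[int(np.argmin(errs))]
    k, tau = k + dk, max(tau + dtau, 0.0)
final_mse = mse(k, tau)
```

Logging the chosen (dk, dtau) move at each grid point of (tau, k) is exactly the kind of direction-of-adjustment plot the study used to map the operator's perception of the parameter space.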
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although on the one hand, digital pathology and new bioimaging technologies find their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often result in a local maximum.
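The pitfall named above, hill climbing stalling on a local performance maximum, can be shown on a one-dimensional toy performance curve (all functions illustrative; random restarts stand in here for strategies, such as genetic algorithms, that can escape local maxima):

```python
import math
import random

def performance(x):
    """Toy segmentation-quality curve with a local maximum near x = -1
    (height ~0.6) and the global maximum near x = 2 (height ~1.0)."""
    return 0.6 * math.exp(-(x + 1) ** 2) + math.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.05, iters=500):
    """Greedy local search: move to the better neighbour, stop when stuck."""
    for _ in range(iters):
        best = max([x - step, x, x + step], key=performance)
        if best == x:
            break
        x = best
    return x

local = hill_climb(-1.5)          # trapped on the local maximum

rng = random.Random(0)
starts = [rng.uniform(-4.0, 4.0) for _ in range(20)]
best_restart = max((hill_climb(s) for s in starts), key=performance)
```

Visualizing `performance` over the whole range, as the study does for real parameter spaces, immediately reveals which basins a purely local strategy would get caught in.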
14 CFR 1230.111 - Criteria for IRB approval of research.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Criteria for IRB approval of research. 1230.111 Section 1230.111 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION PROTECTION OF... knowledge that may reasonably be expected to result. In evaluating risks and benefits, the IRB should...
14 CFR 1212.401 - Filing statements of dispute.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Filing statements of dispute. 1212.401 Section 1212.401 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION PRIVACY ACT-NASA... shall: (1) Be in writing; (2) Set forth reasons for the individual's disagreement with NASA's refusal to...
14 CFR 1212.401 - Filing statements of dispute.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Filing statements of dispute. 1212.401 Section 1212.401 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION PRIVACY ACT-NASA... shall: (1) Be in writing; (2) Set forth reasons for the individual's disagreement with NASA's refusal to...
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
NASA Astrophysics Data System (ADS)
Jia, Bing
2014-03-01
A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces using the Hindmarsh-Rose (HR) neuron model in many recent studies, which can interpret almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced, which presented different processes of period-adding bifurcations with chaos when one parameter was varied with the other fixed at different levels. In the biological experiments, different period-adding bifurcation scenarios with chaos induced by decreasing the extra-cellular calcium concentration were observed in some neural pacemakers at different levels of extra-cellular 4-aminopyridine concentration, and in other pacemakers at different levels of extra-cellular caesium concentration. By using nonlinear time series analysis, the deterministic dynamics of the experimental chaotic firings were investigated. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence of the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces in the HR neuron model. The results also present relationships between different firing patterns in two-dimensional parameter spaces.
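As a minimal sketch of the model behind such simulations, the Hindmarsh-Rose equations can be integrated directly. The parameter values below (a=1, b=3, c=1, d=5, s=4, x_r=-1.6, r=0.006, I=3) are the commonly used ones, not necessarily those of this paper:

```python
def hindmarsh_rose(i_ext=3.0, dt=0.01, steps=50000):
    """Forward-Euler integration of the Hindmarsh-Rose neuron model with
    the standard parameters a=1, b=3, c=1, d=5, s=4, x_r=-1.6, r=0.006."""
    x, y, z = -1.6, 0.0, 0.0
    xs = []
    for _ in range(steps):
        dx = y - x ** 3 + 3.0 * x ** 2 - z + i_ext   # membrane potential
        dy = 1.0 - 5.0 * x ** 2 - y                  # fast recovery variable
        dz = 0.006 * (4.0 * (x + 1.6) - z)           # slow adaptation current
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return xs
```

Sweeping two parameters (e.g. i_ext against r) and classifying the resulting spike patterns is how two-dimensional bifurcation diagrams such as the comb-shaped chaotic region are assembled.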
Parameter-space metric of semicoherent searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Pletsch, Holger J.
2010-08-01
Continuous gravitational-wave (CW) signals such as those emitted by spinning neutron stars are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for previously unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical “semicoherent” search strategies divide the data into segments much shorter than one year, which are analyzed coherently; then detection statistics from different segments are combined incoherently. To perform the incoherent combination optimally, an understanding of the underlying parameter-space structure is required. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. Of the search parameters (sky position, frequency, and frequency derivatives), only the metric resolution in the frequency derivatives is found to increase significantly with the number of segments.
Evidence for inflation in an axion landscape
NASA Astrophysics Data System (ADS)
Nath, Pran; Piskunov, Maksim
2018-03-01
We discuss inflation models within supersymmetry and supergravity frameworks with a landscape of chiral superfields and one U(1) shift symmetry which is broken by non-perturbative symmetry breaking terms in the superpotential. We label the pseudo-scalar components of the chiral fields axions and their real parts saxions. Thus in the models only one combination of axions will be a pseudo-Nambu-Goldstone boson, which will act as the inflaton. The proposed models constitute consistent inflation models for the following reasons: the inflation potential arises dynamically with stabilized saxions, the axion decay constant can lie in the sub-Planckian region, and consistency with the Planck data is achieved. The axion landscape consisting of m axion pairs is assumed, with the axions in each pair having opposite charges. A fast roll-slow roll splitting mechanism for the axion potential is proposed, which is realized with a special choice of the axion basis. In this basis the 2m coupled equations split into 2m - 1 equations which enter in the fast roll, and there is one unique linear combination of the 2m fields which controls the slow roll and thus the power spectrum of curvature and tensor perturbations. It is shown that a significant part of the parameter space exists where inflation is successful, i.e., N_pivot = [50, 60], and the spectral index n_s of curvature perturbations and the ratio r of the power spectrum of tensor perturbations to curvature perturbations lie in the experimentally allowed regions given by the Planck experiment. Further, it is shown that the model allows for a significant region of the parameter space where the effective axion decay constant can lie in the sub-Planckian domain. An analysis of the tensor spectral index n_t is also given, and future experimental data which constrain n_t will further narrow down the parameter space of the proposed inflationary models.
Topics of further interest include implications of the model for gravitational waves and non-Gaussianities in the curvature perturbations. Also of interest is embedding of the model in strings which are expected to possess a large axionic landscape.
NASA Astrophysics Data System (ADS)
Tuller, Markus; Or, Dani
2001-05-01
Many models for hydraulic conductivity of partially saturated porous media rely on an oversimplified representation of the pore space as a bundle of cylindrical capillaries and disregard flow in liquid films. Recent progress in modeling liquid behavior in angular pores of partially saturated porous media offers an alternative framework. We assume that equilibrium liquid-vapor interfaces provide well-defined and stable boundaries for slow laminar film and corner flow regimes in pore space comprised of angular pores connected to slit-shaped spaces. Knowledge of the liquid configuration in the assumed geometry facilitates calculation of average liquid velocities in films and corners and enables derivation of pore-scale hydraulic conductivity as a function of matric potential. The pore-scale model is statistically upscaled to represent hydraulic conductivity for a sample of porous medium. Model parameters for the analytical sample-scale expressions are estimated from measured liquid retention data and other measurable medium properties. Model calculations illustrate the important role of film flow, whose contribution dominates capillary flow (in full pores and corners) at relatively high matric potentials (approximately -100 to -300 J kg-1, or about -1 to -3 bars). The crossover region between film and capillary flow is marked by a significant change in the slope of the hydraulic conductivity function, as often observed in measurements. Model predictions are compared with the widely applied van Genuchten-Mualem model and yield reasonable agreement with measured retention and hydraulic conductivity data over a wide range of soil textural classes.
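For comparison, the van Genuchten-Mualem conductivity model mentioned in the abstract has a closed form; a minimal sketch with generic parameter values, not fitted to any soil in the paper:

```python
def vgm_conductivity(psi, Ks=1.0, alpha=1.0, n=2.0):
    """Unsaturated hydraulic conductivity K(psi) from the van
    Genuchten-Mualem model; psi is matric potential (psi <= 0)."""
    if psi >= 0:
        return Ks                                    # saturated
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * abs(psi)) ** n) ** (-m)     # effective saturation
    return Ks * Se ** 0.5 * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2
```

Because this formulation carries no film-flow term, it cannot reproduce the change of slope at the film/capillary crossover that the authors' angular-pore model captures.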
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
NASA Astrophysics Data System (ADS)
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimal prior information, since there is no need to know the time-varying RF signal amplitudes or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
Preliminary results from a four-working space, double-acting piston, Stirling engine controls model
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1980-01-01
A four working space, double acting piston, Stirling engine simulation is being developed for controls studies. The development method is to construct two simulations, one for detailed fluid behavior and a second with simple fluid behavior but containing the four-working-space aspects and engine inertias; validate these models separately; and then upgrade the four-working-space model by incorporating the detailed fluid behavior model for all four working spaces. The single working space (SWS) model contains the detailed fluid dynamics. It has seven control volumes in which continuity, energy, and pressure loss effects are simulated. Comparison of the SWS model with experimental data shows reasonable agreement in net power versus speed characteristics for various mean pressure levels in the working space. The four working space (FWS) model was built to observe the behavior of the whole engine. The drive dynamics and vehicle inertia effects are simulated. To reduce calculation time, only three volumes are used in each working space and the gas temperatures are fixed (no energy equation). Comparison of the FWS model predicted power with experimental data shows reasonable agreement. Since all four working spaces are simulated, the unique capabilities of the model are exercised to look at working fluid supply transients, short circuit transients, and piston ring leakage effects.
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature and, where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration.
We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts of NEP, C storage, ET, and runoff, thereby identifying a highly important source of DGVM uncertainty.
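The global search strategy named here, simulated annealing, can be sketched in a few lines. The toy objective below stands in for the map-accuracy score, and all settings (cooling schedule, step size, bounds) are illustrative assumptions, not the study's configuration:

```python
import math
import random

def simulated_annealing(objective, bounds, steps=5000, t0=1.0, seed=1):
    """Maximize `objective` over the box `bounds` by simulated annealing
    with a linear cooling schedule and Gaussian proposal steps."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    val = objective(x)
    best, best_val = x[:], val
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9            # temperature
        cand = [min(hi, max(lo, xi + rng.gauss(0.0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        cand_val = objective(cand)
        # Accept improvements always; accept worse moves with probability
        # exp(delta / t), which lets the search jump out of local maxima.
        if cand_val > val or rng.random() < math.exp((cand_val - val) / t):
            x, val = cand, cand_val
            if val > best_val:
                best, best_val = x[:], val
    return best, best_val

# Hypothetical stand-in for "accuracy of the simulated vegetation map".
score = lambda p: -((p[0] - 0.3) ** 2 + (p[1] + 0.5) ** 2)
best, best_val = simulated_annealing(score, [(-1.0, 1.0), (-1.0, 1.0)])
```

Running the search from several seeds and keeping every run whose score matches the trial-and-error calibration is one way to collect the multiple acceptable parameter sets the abstract anticipates.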
Application of Calspan pitch rate control system to the Space Shuttle for approach and landing
NASA Technical Reports Server (NTRS)
Weingarten, N. C.; Chalk, C. R.
1983-01-01
A pitch rate control system designed for use in the shuttle during approach and landing was analyzed and compared with a revised control system developed by NASA and the existing OFT control system. The design concept control system uses filtered pitch rate feedback with proportional plus integral paths in the forward loop. Control system parameters were designed as a function of flight configuration. Analysis included time and frequency domain techniques. Results indicate that both the Calspan and NASA systems significantly improve the flying qualities of the shuttle over the OFT. Better attitude and flight path control and less time delay are the primary reasons. The Calspan system is preferred because of reduced time delay and simpler mechanization. Further testing of the improved flight control systems in an in-flight simulator is recommended.
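The design concept, filtered pitch-rate feedback with proportional plus integral paths in the forward loop, can be sketched on a toy first-order pitch-rate plant. The plant model, gains, and time constants below are illustrative assumptions, not the Calspan design values:

```python
def track_pitch_rate(q_cmd=1.0, kp=2.0, ki=1.5, tau_f=0.1,
                     tau_q=0.5, dt=0.01, t_end=10.0):
    """Pitch-rate tracking with filtered feedback and a PI forward path.
    Toy plant: first-order pitch-rate response q' = (-q + u) / tau_q."""
    q = qf = integ = 0.0
    for _ in range(int(t_end / dt)):
        qf += dt * (q - qf) / tau_f     # first-order filter on the feedback
        err = q_cmd - qf
        integ += ki * err * dt          # integral path (removes offset)
        u = kp * err + integ            # proportional plus integral
        q += dt * (-q + u) / tau_q      # simplified pitch dynamics
    return q
```

The integral path drives the steady-state pitch-rate error to zero, which is part of why such a scheme improves attitude and flight-path control over a proportional-only law.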
Comparison of conditional sampling and averaging techniques in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Subramanian, C. S.; Rajagopalan, S.; Antonia, R. A.; Chambers, A. J.
1982-10-01
A rake of cold wires was used in a slightly heated boundary layer to identify coherent temperature fronts. An X-wire/cold-wire arrangement was used simultaneously with the rake to provide measurements of the longitudinal and normal velocity fluctuations and temperature fluctuations. Conditional averages of these parameters and their products were obtained by application of conditional techniques (VITA, HOLE, BT, RA1, and RA3) based on the detection of temperature fronts using information obtained at only one point in space. It is found that none of the one-point detection techniques is in good quantitative agreement with the rake detection technique, the largest correspondence being 51%. Despite the relatively poor correspondence between the conditional techniques, these techniques, with the exception of HOLE, produce conditional averages that are in reasonable qualitative agreement with those deduced using the rake.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko
2015-11-10
This theoretical treatment of low-energy compound nucleus reactions begins with the Bohr hypothesis, with corrections, and various statistical theories. The author investigates the statistical properties of the scattering matrix containing a Gaussian Orthogonal Ensemble (GOE) Hamiltonian in the propagator. The following conclusions are reached: For all parameter values studied, the numerical average of MC-generated cross sections coincides with the result of the Verbaarschot, Weidenmueller, Zirnbauer triple-integral formula. Energy average and ensemble average agree reasonably well when the width Γ is one or two orders of magnitude larger than the average resonance spacing d. In the strong-absorption limit, the channel degree-of-freedom ν_a is 2. The direct reaction increases the inelastic cross sections while the elastic cross section is reduced.
Origin of orbital debris impacts on LDEF's trailing surfaces
NASA Technical Reports Server (NTRS)
Kessler, Donald J.
1993-01-01
A model was developed to determine the origin of orbital debris impacts measured on the trailing surfaces of LDEF. The model calculates the expected debris impact crater distribution around LDEF as a function of debris orbital parameters. The results show that only highly elliptical, low inclination orbits could be responsible for these impacts. The most common objects left in this type of orbit are orbital transfer stages used by the U.S. and ESA to place payloads into geosynchronous orbit. Objects in this type of orbit are difficult to catalog by the U.S. Space Command; consequently there are independent reasons to believe that the catalog does not adequately represent this population. This analysis concludes that the relative number of cataloged objects with highly elliptical, low inclination orbits must be increased by a factor of 20 to be consistent with the LDEF data.
NASA Technical Reports Server (NTRS)
Emrich, Bill
2006-01-01
A simple method of estimating vehicle parameters appropriate for interplanetary travel can provide a useful tool for evaluating the suitability of particular propulsion systems to various space missions. Although detailed mission analyses for interplanetary travel can be quite complex, it is possible to derive fairly simple correlations which will provide reasonable trip time estimates to the planets. In the present work, it is assumed that a constant thrust propulsion system propels a spacecraft on a round trip mission having equidistant outbound and inbound legs, in which the spacecraft accelerates during the first portion of each leg of the journey and decelerates during the last portion of each leg. Comparisons are made with numerical calculations from low thrust trajectory codes to estimate the range of applicability of the simplified correlations.
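Under the stated assumptions (constant thrust, equidistant legs, acceleration over the first half of each leg and deceleration over the second), the trip-time correlation reduces to elementary kinematics. A sketch that additionally neglects gravity and propellant mass change:

```python
import math

def round_trip_time(distance_m, thrust_N, mass_kg):
    """Round-trip time for a constant-thrust vehicle that accelerates over
    the first half of each leg and decelerates over the second half
    (field-free, constant-mass approximation)."""
    a = thrust_N / mass_kg
    # Each half-leg: d/2 = (1/2) * a * (t/2)^2, so t = 2 * sqrt(d / a).
    one_leg = 2.0 * math.sqrt(distance_m / a)
    return 2.0 * one_leg
```

Note the quarter-power-like scaling: quadrupling the leg distance only doubles the leg time, which is why modest accelerations still yield tolerable trip times to the nearer planets.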
Ensemble Kalman filter inference of spatially-varying Manning's n coefficients in the coastal ocean
NASA Astrophysics Data System (ADS)
Siripatana, Adil; Mayo, Talea; Knio, Omar; Dawson, Clint; Maître, Olivier Le; Hoteit, Ibrahim
2018-07-01
Ensemble Kalman (EnKF) filtering is an established framework for large scale state estimation problems. EnKFs can also be used for state-parameter estimation, using the so-called "Joint-EnKF" approach. The idea is simply to augment the state vector with the parameters to be estimated and assign invariant dynamics for the time evolution of the parameters. In this contribution, we investigate the efficiency of the Joint-EnKF for estimating spatially-varying Manning's n coefficients used to define the bottom roughness in the Shallow Water Equations (SWEs) of a coastal ocean model. Observation System Simulation Experiments (OSSEs) are conducted using the ADvanced CIRCulation (ADCIRC) model, which solves a modified form of the Shallow Water Equations. A deterministic EnKF, the Singular Evolutive Interpolated Kalman (SEIK) filter, is used to estimate a vector of Manning's n coefficients defined at the model nodal points by assimilating synthetic water elevation data. It is found that with a reasonable ensemble size (O(10)), the filter's estimate converges to the reference Manning's field. To enhance performance, we have further reduced the dimension of the parameter search space through a Karhunen-Loève (KL) expansion. We have also iterated on the filter update step to better account for the nonlinearity of the parameter estimation problem. We study the sensitivity of the system to the ensemble size, localization scale, dimension of retained KL modes, and number of iterations. The performance of the proposed framework in terms of estimation accuracy suggests that a well-tuned Joint-EnKF provides a promising robust approach to infer spatially varying seabed roughness parameters in the context of coastal ocean modeling.
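The Joint-EnKF idea, augmenting the state with the parameters and giving them invariant dynamics, can be sketched on a scalar toy problem: a one-line recursion stands in for ADCIRC, and a perturbed-observation EnKF stands in for SEIK. Everything below is illustrative, not the paper's setup:

```python
import random

def joint_enkf_parameter(true_a=0.8, n_ens=50, n_steps=300, obs_std=0.1, seed=0):
    """Joint-EnKF sketch: augment the state x with a parameter a (invariant
    dynamics a_{k+1} = a_k) and estimate both from noisy observations of x.
    Toy model: x_{k+1} = a * x_k + 1."""
    rng = random.Random(seed)
    x_true = 0.0
    ens = [[0.0, rng.uniform(0.2, 1.0)] for _ in range(n_ens)]  # [x, a] members
    for _ in range(n_steps):
        x_true = true_a * x_true + 1.0
        y = x_true + rng.gauss(0.0, obs_std)
        # Forecast: propagate x with each member's own parameter; a is invariant.
        for m in ens:
            m[0] = m[1] * m[0] + 1.0
        # Ensemble mean, variance of x, and cross-covariance of (a, x).
        xm = sum(m[0] for m in ens) / n_ens
        am = sum(m[1] for m in ens) / n_ens
        pxx = sum((m[0] - xm) ** 2 for m in ens) / (n_ens - 1)
        pax = sum((m[1] - am) * (m[0] - xm) for m in ens) / (n_ens - 1)
        kx = pxx / (pxx + obs_std ** 2)        # Kalman gain for the state
        ka = pax / (pxx + obs_std ** 2)        # gain for the parameter
        for m in ens:                           # perturbed-observation update
            innov = y + rng.gauss(0.0, obs_std) - m[0]
            m[0] += kx * innov
            m[1] += ka * innov
    return sum(m[1] for m in ens) / n_ens
```

The cross-covariance between parameter and predicted observation is what carries information from the water-level data into the roughness estimate; the same mechanism operates, node by node, in the spatially varying Manning's n case.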
14 CFR § 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2014 CFR
2014-01-01
... flights: (1) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission... 14 Aeronautics and Space 5 2014-01-01 2014-01-01 false Launch and orbit parameters for a standard launch. § 1214.117 Section § 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE...
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
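The null-space decomposition at the heart of NSMC can be shown with a tiny linearised model y = J p: parameter directions with zero singular value leave the simulated observations unchanged, so Monte Carlo sampling along them yields calibration-constrained parameter sets from a single calibration. The matrix, calibrated values, and null vector below are contrived for illustration:

```python
import random

def matvec(J, p):
    # y = J p for a list-of-lists matrix and a list vector.
    return [sum(Jij * pj for Jij, pj in zip(row, p)) for row in J]

# Linearised model with 2 observations and 3 parameters: the parameter
# space splits into a 2-D solution space and a 1-D null space.
J = [[1.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
p_cal = [2.0, 3.0, 1.0]          # single calibrated parameter set
null_vec = [1.0, -1.0, 0.0]      # J @ null_vec = 0

rng = random.Random(0)
y_cal = matvec(J, p_cal)
# Null-space Monte Carlo: random steps along the null space give an
# ensemble of parameter sets that all reproduce the calibration data.
ensemble = [[pi + a * vi for pi, vi in zip(p_cal, null_vec)]
            for a in (rng.gauss(0.0, 1.0) for _ in range(5))]
```

The bias discussed in the abstract arises because every such member shares the one calibrated solution-space component; M-NSMC varies that component too by starting from multiple calibrations.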
Performance Analysis of Sensor Systems for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Choi, Eun-Jung; Cho, Sungki; Jo, Jung Hyun; Park, Jang-Hyun; Chung, Taejin; Park, Jaewoo; Jeon, Hocheol; Yun, Ami; Lee, Yonghui
2017-12-01
With increased human activity in space, the risk of re-entry and collision between space objects is constantly increasing. Hence, the need for space situational awareness (SSA) programs has been acknowledged by many experienced space agencies. Optical and radar sensors, which enable the surveillance and tracking of space objects, are the most important technical components of SSA systems. In particular, combinations of radar systems and optical sensor networks play an outstanding role in SSA programs. At present, Korea operates the optical wide field patrol network (OWL-Net), the only optical system for tracking space objects. However, due to their dependence on weather conditions and observation time, it is not reasonable to use optical systems alone for SSA initiatives, as they have limited operational availability. Therefore, strategies for developing radar systems should be considered for an efficient SSA system using currently available technology. The purpose of this paper is to analyze the performance of a radar system in detecting and tracking space objects. With the radar system investigated, the minimum sensitivity is defined as detection of a 1-m² radar cross section (RCS) at an altitude of 2,000 km, with operating frequencies in the L, S, C, X, or Ku-band. The results of power budget analysis showed that the maximum detection range of 2,000 km, which includes the low earth orbit (LEO) environment, can be achieved with a transmission power of 900 kW, transmit and receive antenna gains of 40 dB and 43 dB, respectively, a pulse width of 2 ms, and a signal processing gain of 13.3 dB, at a frequency of 1.3 GHz. We defined the key parameters of the radar following a performance analysis of the system. This research can thus provide guidelines for the conceptual design of radar systems for national SSA initiatives.
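Power-budget analyses of this kind rest on the monostatic radar range equation. A minimal sketch with illustrative noise assumptions (matched-filter bandwidth equal to 1/pulse width, 290 K system temperature, no losses, and an assumed minimum SNR), so it will not reproduce the paper's exact budget:

```python
import math

def radar_max_range(pt_W, gt_dB, gr_dB, freq_Hz, rcs_m2,
                    pulse_s, proc_gain_dB, snr_min_dB, t_sys_K=290.0):
    """Maximum detection range from the monostatic radar equation:
    SNR = Pt Gt Gr lambda^2 sigma / ((4 pi)^3 R^4 k T B), solved for R."""
    k = 1.380649e-23                     # Boltzmann constant [J/K]
    lam = 299792458.0 / freq_Hz          # wavelength [m]
    gt = 10 ** (gt_dB / 10)
    gr = 10 ** (gr_dB / 10)
    # Required pre-processing SNR after crediting the processing gain.
    snr = 10 ** ((snr_min_dB - proc_gain_dB) / 10)
    bw = 1.0 / pulse_s                   # matched-filter bandwidth [Hz]
    num = pt_W * gt * gr * lam ** 2 * rcs_m2
    den = (4 * math.pi) ** 3 * k * t_sys_K * bw * snr
    return (num / den) ** 0.25
```

Because range enters the equation as R^4, detection range scales only as the fourth root of transmit power: doubling the 900 kW transmitter buys just a ~19% range increase.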
The importance of exploring the asteroid belt.
Papagiannis, M D
1983-01-01
Following life's innate tendency to expand into every available space, technological civilizations will inevitably colonize the entire galaxy establishing space habitats around all its well-behaved stars. The most reasonable place in our solar system to test this possibility is the asteroid belt, which is an ideal source of raw materials for space colonies.
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
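The core operation, Gaussianizing a skewed distribution before applying the Fisher formalism, is the one-parameter Box-Cox transformation. A minimal sketch on log-normal samples, for which lambda = 0 (the logarithm) is exactly Gaussianizing; the sample sizes and parameters are illustrative:

```python
import math
import random

def box_cox(x, lam):
    """One-parameter Box-Cox transformation (requires x > 0)."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def skewness(xs):
    # Sample skewness: third standardized central moment.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((v - mean) ** 2 for v in xs) / n
    return sum((v - mean) ** 3 for v in xs) / (n * var ** 1.5)

rng = random.Random(42)
samples = [math.exp(rng.gauss(0.0, 0.8)) for _ in range(5000)]   # strongly skewed
transformed = [box_cox(v, 0.0) for v in samples]                  # Gaussianized
```

In the forecasting method, lambda is fitted per parameter from an initial likelihood evaluation; the Fisher matrix is then computed in the transformed, approximately Gaussian coordinates and the posterior mapped back.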
Proceedings of the NASA Conference on Space Telerobotics, volume 1
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Editor); Seraji, Homayoun (Editor)
1989-01-01
The theme of the Conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty.
Student Moon Observations and Spatial-Scientific Reasoning
ERIC Educational Resources Information Center
Cole, Merryn; Wilhelm, Jennifer; Yang, Hongwei
2015-01-01
Relationships between sixth grade students' moon journaling and students' spatial-scientific reasoning after implementation of an Earth/Space unit were examined. Teachers used the project-based Realistic Explorations in Astronomical Learning curriculum. We used a regression model to analyze the relationship between the students' Lunar Phases…
40 CFR 60.58c - Reporting and recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
....57c(d), the owner or operator shall maintain all operating parameter data collected; (xvii) For...) Identification of calendar days for which data on emission rates or operating parameters specified under... operating parameters not measured, reasons for not obtaining the data, and a description of corrective...
Simulating Scenes In Outer Space
NASA Technical Reports Server (NTRS)
Callahan, John D.
1989-01-01
Multimission Interactive Picture Planner, MIP, computer program for scientifically accurate and fast three-dimensional animation of scenes in deep space. Versatile, reasonably comprehensive, and portable; runs on microcomputers. New techniques developed to rapidly perform the calculations and transformations necessary to animate scenes in scientifically accurate three-dimensional space. Written in FORTRAN 77 code. Primarily designed to handle Voyager, Galileo, and Space Telescope. Adapted to handle other missions.
Sensitivity of Dynamical Systems to Banach Space Parameters
2005-02-13
We consider general nonlinear dynamical systems in a Banach space with dependence on parameters in a second Banach space. An abstract theoretical ... framework for sensitivity equations is developed. An application to measure dependent delay differential systems arising in a class of HIV models is presented.
Tethered Satellites as Enabling Platforms for an Operational Space Weather Monitoring System
NASA Technical Reports Server (NTRS)
Krause, L. Habash; Gilchrist, B. E.; Bilen, S.; Owens, J.; Voronka, N.; Furhop, K.
2013-01-01
Space weather nowcasting and forecasting models require assimilation of near-real time (NRT) space environment data to improve the precision and accuracy of operational products. Typically, these models begin with a climatological model to provide "most probable distributions" of environmental parameters as a function of time and space. The process of NRT data assimilation gently pulls the climate model closer toward the observed state (e.g. via Kalman smoothing) for nowcasting, and forecasting is achieved through a set of iterative physics-based forward-prediction calculations. The issue of required space weather observatories to meet the spatial and temporal requirements of these models is a complex one, and we do not address that with this poster. Instead, we present some examples of how tethered satellites can be used to address the shortfalls in our ability to measure critical environmental parameters necessary to drive these space weather models. Examples include very long baseline electric field measurements, magnetized ionospheric conductivity measurements, and the ability to separate temporal from spatial irregularities in environmental parameters. Tethered satellite functional requirements will be presented for each space weather parameter considered in this study.
NASA Astrophysics Data System (ADS)
Kang, Jai Young
2005-12-01
The objectives of this study are to perform extensive analysis of internal mass motion over a wider parameter space and to provide suitable design criteria with broader applicability for this class of spinning space vehicles. To examine the stability criterion determined by a perturbation method, numerical simulations are performed and compared at various parameter points. In this paper, the Ince-Strutt diagram for determining the stable and unstable regions of the internal mass motion of a spinning, thrusting space vehicle in terms of design parameters is obtained by an analytical method. Phase trajectories of the motion are also obtained for various parameter values, and their characteristics are compared.
Space Industrialization. Volume 2: Opportunities, Markets and Programs
NASA Technical Reports Server (NTRS)
1978-01-01
The nature of space industrialization and the reasons for its promotion are examined. Increases in space industry activities to be anticipated from 1980 to 2010 are assessed. A variety of future scenarios against which space industrialization could evolve were developed, and the various industrial opportunities that might constitute that evolution were defined. The needs and markets of industry activities were quantitatively and qualitatively assessed. The various hardware requirements vs. time (space industry programs) as space industrialization evolves are derived and analyzed.
Irvine, Katherine N.; Warber, Sara L.; Devine-Wright, Patrick; Gaston, Kevin J.
2013-01-01
With increasing interest in the use of urban green space to promote human health, there is a need to understand the extent to which park users conceptualize these places as a resource for health and well-being. This study sought to examine park users’ own reasons for and benefits from green space usage and compare these with concepts and constructs in existing person-environment-health theories and models of health. Conducted in 13 public green spaces in Sheffield, UK, we undertook a qualitative content analysis of 312 park users’ responses to open-ended interview questions and identified a breadth, depth and salience of visit motivators and derived effects. Findings highlight a discrepancy between reasons for visiting and derived effects from the use of urban green space. Motivations emphasized walking, green space qualities, and children. Derived effects highlighted relaxation, positive emotions within the self and towards the place, and spiritual well-being. We generate a taxonomy of motivations and derived effects that could facilitate operationalization within empirical research and articulate a conceptual framework linking motivators to outcomes for investigating green space as a resource for human health and well-being. PMID:23340602
Mechanical Characteristics Analysis of Surrounding Rock on Anchor Bar Reinforcement
NASA Astrophysics Data System (ADS)
Gu, Shuan-cheng; Zhou, Pan; Huang, Rong-bin
2018-03-01
Through the homogenization method, the composite of rock and anchor bar is considered as an equivalent material that is continuous, homogeneous, isotropic, and enhanced in strength parameters, defined here as the reinforcement body. On the basis of elasticity theory, the composite and the reinforcement body are analyzed, and, based on the strengthening theory of surrounding rock and displacement-equivalence conditions, expressions for the strength and mechanical parameters of the reinforcement body are deduced. An example calculation shows that the theoretical results are close to those of Jia-mei Gao [9], and closer still to the results of FLAC3D numerical simulation, which indicates that the model and the surrounding-rock reinforcement-body theory are reasonable. The model is easy to analyze and calculate, provides a new way to determine reasonable bolt-support parameters, and can also serve as a reference for the stability analysis of bolting support in underground caverns.
Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E
2017-12-12
A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space, as opposed to local values or gradients, leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane, as well as the density and heat of vaporization of the liquid at atmospheric pressure, for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
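The parameter space mapping idea can be sketched as evaluating a combined error surface over a grid of parameter pairs and locating its minimum; the function names, grids, and combination rule below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def error_surface(model_error, sigma_grid, eps_grid, targets):
    """Map a combined error surface over a 2-D grid of 6-12 Lennard-Jones
    parameters (sigma, epsilon) for several target properties, then locate
    the optimal region.  `model_error(sigma, eps, target)` is a placeholder
    for the deviation of calculated from target data."""
    surface = np.zeros((len(sigma_grid), len(eps_grid)))
    for i, s in enumerate(sigma_grid):
        for j, e in enumerate(eps_grid):
            # summing surfaces from multiple targets keeps the optimum robust
            surface[i, j] = sum(abs(model_error(s, e, t)) for t in targets)
    i, j = np.unravel_index(np.argmin(surface), surface.shape)
    return surface, (sigma_grid[i], eps_grid[j])
```

Inspecting the whole surface, rather than a single gradient step, is what lets target data from several molecules be overlaid and their common low-error region identified.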
On Markov parameters in system identification
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
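The Markov-parameter relations discussed above follow directly from the state-space model: for x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k, the pulse-response samples are Y_0 = D and Y_k = C A^(k-1) B for k >= 1. A minimal sketch (function name and matrix shapes are illustrative, not from the paper):

```python
import numpy as np

def markov_parameters(A, B, C, D, n):
    """Return the first n+1 Markov parameters of a discrete-time
    state-space model: Y_0 = D, Y_k = C A^(k-1) B for k >= 1.
    These equal the system's unit pulse response samples."""
    params = [D]
    Ak = np.eye(A.shape[0])   # accumulates A^(k-1)
    for _ in range(n):
        params.append(C @ Ak @ B)
        Ak = Ak @ A
    return params
```

For a scalar system with A = 0.5, B = 1, C = 2, D = 0, the sequence is 0, 2, 1, 0.5, ..., i.e. the sampled impulse response.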
14 CFR 302.609 - Completion of proceedings.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 302.609 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION... dismissed, the Secretary will issue a determination as to whether the fee is reasonable within 120 days... issue a determination as to whether the fee is reasonable within 120 days after the complaint is filed. ...
14 CFR 302.609 - Completion of proceedings.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 302.609 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION... dismissed, the Secretary will issue a determination as to whether the fee is reasonable within 120 days... issue a determination as to whether the fee is reasonable within 120 days after the complaint is filed. ...
14 CFR 302.609 - Completion of proceedings.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 302.609 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION... dismissed, the Secretary will issue a determination as to whether the fee is reasonable within 120 days... issue a determination as to whether the fee is reasonable within 120 days after the complaint is filed. ...
14 CFR 302.609 - Completion of proceedings.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 302.609 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION... dismissed, the Secretary will issue a determination as to whether the fee is reasonable within 120 days... issue a determination as to whether the fee is reasonable within 120 days after the complaint is filed. ...
Influence of Constraint in Parameter Space on Quantum Games
NASA Astrophysics Data System (ADS)
Zhao, Hai-Jun; Fang, Xi-Ming
2004-04-01
We study the influence of the constraint in the parameter space on quantum games. Decomposing SU(2) operator into product of three rotation operators and controlling one kind of them, we impose a constraint on the parameter space of the players' operator. We find that the constraint can provide a tuner to make the bilateral payoffs equal, so that the mismatch of the players' action at multi-equilibrium could be avoided. We also find that the game exhibits an intriguing structure as a function of the parameter of the controlled operators, which is useful for making game models.
A radiological assessment of nuclear power and propulsion operations near Space Station Freedom
NASA Technical Reports Server (NTRS)
Bolch, Wesley E.; Thomas, J. Kelly; Peddicord, K. Lee; Nelson, Paul; Marshall, David T.; Busche, Donna M.
1990-01-01
Scenarios were identified which involve the use of nuclear power systems in the vicinity of Space Station Freedom (SSF) and their radiological impact on the SSF crew was quantified. Several of the developed scenarios relate to the use of SSF as an evolutionary transportation node for lunar and Mars missions. In particular, radiation doses delivered to SSF crew were calculated for both the launch and subsequent return of a Nuclear Electric Propulsion (NEP) cargo vehicle and a Nuclear Thermal Rocket (NTR) personnel vehicle to low earth orbit. The use of nuclear power on co-orbiting platforms and the storage and handling issues associated with radioisotope power systems were also explored as they relate to SSF. A central philosophy in these analyses was the utilization of a radiation dose budget, defined as the difference between recommended dose limits from all radiation sources and estimated doses received by crew members from natural space radiations. Consequently, for each scenario examined, the dose budget concept was used to identify and quantify constraints on operational parameters such as launch separation distances, returned vehicle parking distances, and reactor shutdown times prior to vehicle approach. The results indicate that realistic scenarios do not exist which would preclude the use of nuclear power sources in the vicinity of SSF. The radiation dose to the SSF crew can be maintained at safe levels solely by implementing proper and reasonable operating procedures.
NASA Astrophysics Data System (ADS)
Boivin, David; Bigot-Astruc, Marianne; De Montmorillon, Louis-Anne; Provost, Lionel; Sillard, Pierre; Bergonzo, Aurélien
2009-02-01
After many years of expectations, Fiber To The Home (FTTH) has finally become a reality, with a wide number of projects already running worldwide and growing. Optical fiber is inevitably taking more and more importance in our environment, but for many good reasons the space we are truly willing or able to allocate to it remains limited. These installation constraints have turned into additional requirements that need to be addressed for both active and passive components. While exceptional bending performance obtained without degrading backward compatibility is a prerequisite to deployment success,1 other parameters also need to be carefully taken into account when designing the ideal candidate for use in confined environments. Among them, one can cite the bend-loss homogeneity over length and bending directions, the resistance to high optical power under bending, and the tolerance to modal noise. In this paper, we present the design and performance of a bend-insensitive fiber optimized towards more space savings and miniaturization of components. In addition to exceptional bending performance - lower than 0.1 dB/turn at a 5 mm bending radius - its design guarantees impressive homogeneity levels and enhanced safety margins for high-power applications while still being resistant to modal noise. Successful cleave- and splice-ability results are finally presented, making this fiber ideally suited for use in components, pigtails and patchcords.
10 CFR 63.304 - Reasonable expectation.
Code of Federal Regulations, 2013 CFR
2013-01-01
... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Postclosure Public Health and Environmental Standards § 63.304... uncertainties in making long-term projections of the performance of the Yucca Mountain disposal system; (3) Does... the full range of defensible and reasonable parameter distributions rather than only upon extreme...
10 CFR 63.304 - Reasonable expectation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Postclosure Public Health and Environmental Standards § 63.304... uncertainties in making long-term projections of the performance of the Yucca Mountain disposal system; (3) Does... the full range of defensible and reasonable parameter distributions rather than only upon extreme...
10 CFR 63.304 - Reasonable expectation.
Code of Federal Regulations, 2012 CFR
2012-01-01
... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Postclosure Public Health and Environmental Standards § 63.304... uncertainties in making long-term projections of the performance of the Yucca Mountain disposal system; (3) Does... the full range of defensible and reasonable parameter distributions rather than only upon extreme...
10 CFR 63.304 - Reasonable expectation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Postclosure Public Health and Environmental Standards § 63.304... uncertainties in making long-term projections of the performance of the Yucca Mountain disposal system; (3) Does... the full range of defensible and reasonable parameter distributions rather than only upon extreme...
10 CFR 63.304 - Reasonable expectation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Postclosure Public Health and Environmental Standards § 63.304... uncertainties in making long-term projections of the performance of the Yucca Mountain disposal system; (3) Does... the full range of defensible and reasonable parameter distributions rather than only upon extreme...
High Frontier, The Journal for Space & Missile Professionals. Volume 3, Number 4, August 2007
2007-08-01
effective. Space power theories based on satellites and historical space activity may be reasonably successful at adding structure to current...provider of combat effects for all mediums and levels of warfare should be proud of their accomplishments. As we engage in global combat operations...we see every day how our space and missile forces play a significant role in support of land, sea, and air combat operations. Space effects are
Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects
2017-02-22
AFRL-AFOSR-UK-TR-2017-0023: Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects. Marco Martorella. Grant number FA9550-14-1-0183.
Physicochemical analog for modeling superimposed and coded memories
NASA Astrophysics Data System (ADS)
Ensanian, Minas
1992-07-01
The mammalian brain is distinguished by a life-time of memories being stored within the same general region of physicochemical space, and having two extraordinary features. First, memories to varying degrees are superimposed, as well as coded. Second, instantaneous recall of past events can often be affected by relatively simple, and seemingly unrelated sensory clues. For the purposes of attempting to mathematically model such complex behavior, and for gaining additional insights, it would be highly advantageous to be able to simulate or mimic similar behavior in a nonbiological entity where some analogical parameters of interest can reasonably be controlled. It has recently been discovered that in nonlinear accumulative metal fatigue memories (related to mechanical deformation) can be superimposed and coded in the crystal lattice, and that memory, that is, the total number of stress cycles can be recalled (determined) by scanning not the surfaces but the `edges' of the objects. The new scanning technique known as electrotopography (ETG) now makes the state space modeling of metallic networks possible. The author provides an overview of the new field and outlines the areas that are of immediate interest to the science of artificial neural networks.
International Space Station Solar Array Wing On-Orbit Electrical Performance Degradation Measured
NASA Technical Reports Server (NTRS)
Gustafson, Eric D.; Kerslake, Thomas W.
2004-01-01
The port-side photovoltaic power module (P6) was activated on the International Space Station in December 2000. P6 provides electrical power to channels 2B and 4B to operate ISS power loads. A P6 is shown in the preceding photograph. This article highlights the work done at the NASA Glenn Research Center to calculate the on-orbit degradation of the P6 solar array wings (SAWs) using on-orbit data from December 2000 to February 2003. During early ISS operations, the 82 strings of photovoltaic cells that make up a SAW can provide much more power than is necessary to meet the demand. To deal with excess power, a sequential shunt unit successively shunts the current from the strings. This shunt current was the parameter chosen for the SAW performance degradation study for the following reasons: (1) it is based on a direct shunt current measurement in the sequential shunt unit, (2) the shunt current has a low temperature dependence that reduces the data correction error from using a computationally derived array temperature, and (3) the SSU shunt current is essentially the same as the SAW short-circuit current on a per-string basis.
Toward understanding the mechanics of hovering in insects, hummingbirds and bats
NASA Astrophysics Data System (ADS)
Vejdani, Hamid; Boerma, David; Swartz, Sharon; Breuer, Kenneth
2016-11-01
We present results on the dynamical characteristics of two different mechanisms of hovering, corresponding to the behavior of hummingbirds and bats. Using a Lagrangian formulation, we have developed a dynamical model of a body (trunk) and two rectangular wings. The trunk has 3 degrees of freedom (x, z and pitch angle) and each wing has 3 modes of actuation: flapping, pronation/supination, and wingspan extension/flexion (only present for bats). Wings can be effectively massless (hummingbird and insect wings) or relatively massive (important in the case of bats). The aerodynamic drag and lift forces are calculated using a quasi-steady blade-element model. The regions of state space in which hovering is possible are computed over an exhaustive range of parameters. The effect of wing mass is to shrink the phase space available for viable hovering and, in general, to require higher wingbeat frequency. Moreover, by exploring hovering energy requirements, we find that the pronation angle of the wings also plays a critical role. For bats, who have relatively heavy wings, we show that wing extension and flexion are critical in order to maintain a plausible hovering posture with reasonable power requirements. Comparisons with biological data show good agreement with our model predictions.
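The quasi-steady blade-element force calculation mentioned above can be sketched as follows: each spanwise strip sees a local speed set by the flapping rate, and strip lifts are summed along the wing. The lift-coefficient model and wing dimensions are illustrative assumptions, not values from this study.

```python
import numpy as np

RHO = 1.225  # air density, kg/m^3

def quasi_steady_lift(omega, alpha, span=0.05, chord=0.015, n_elems=50):
    """Quasi-steady blade-element lift for one flapping wing.

    omega: angular flapping velocity (rad/s); alpha: angle of attack (rad).
    A strip at radius r sees speed U = omega * r, so its lift is
    0.5 * rho * U^2 * CL(alpha) * chord * dr, summed over the span.
    The CL(alpha) model below is an assumed illustrative form."""
    cl = 1.8 * np.sin(2 * alpha)              # assumed lift coefficient curve
    r = np.linspace(0.0, span, n_elems + 1)   # strip boundaries along span
    r_mid = 0.5 * (r[:-1] + r[1:])
    dr = np.diff(r)
    u = omega * r_mid                          # local translational speed
    return float(np.sum(0.5 * RHO * u**2 * cl * chord * dr))
```

Because the lift scales with the square of the local speed, doubling the flapping rate quadruples the quasi-steady lift, which is why wing mass (and the power cost of high wingbeat frequency) constrains the viable hovering region.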
Purposeful Development of the Intelligence, Surveillance, and Reconnaissance for Space Cadre
2016-07-13
available. For these reasons, an attack on US space assets seems an unlikely scenario. However, the threat to space has changed since 1967. Enhanced and...get them up to speed on issues that affect ISR support to ACC weapon systems. Context: The above scenarios would seem unthinkable to ACC but are very
NASA Space Human Factors Program
NASA Technical Reports Server (NTRS)
1992-01-01
This booklet briefly and succinctly treats 23 topics of particular interest to the NASA Space Human Factors Program. Most articles are by different authors who are mainly NASA Johnson or NASA Ames personnel. Representative topics covered include mental workload and performance in space, light effects on Circadian rhythms, human sleep, human reasoning, microgravity effects and automation and crew performance.
A Space Crisis. Alaska State Museum.
ERIC Educational Resources Information Center
Alaska State Museum, Juneau.
The 24,000 square foot Alaska State Museum is experiencing a space crisis which hinders its ability to effectively meet present demands. The museum's collection has more than tripled from 5,600 objects 17 years ago to 23,000 objects today. Available storage and exhibition space is filled and only 10% of the collection is on exhibit. The reason for…
Pakistani Children in Sheffield and Their Perception and Use of Public Open Spaces.
ERIC Educational Resources Information Center
Woolley, Helen; ul Amin, Noor
1995-01-01
Examines urban public open spaces used by Pakistani children in Sheffield, United Kingdom. Results reveal that most children visit open spaces on a daily or weekly basis and that parks and playgrounds are preferred. Lists activities engaged in by the majority of children as well as reasons for not undertaking activities. (AIM)
An Evaluation of Articulatory Working Space Area in Vowel Production of Adults with Down Syndrome
ERIC Educational Resources Information Center
Bunton, Kate; Leddy, Mark
2011-01-01
Many adolescents and adults with Down syndrome have reduced speech intelligibility. Reasons for this reduction may relate to differences in anatomy and physiology, both of which are important for creating an intelligible speech signal. The purpose of this study was to document acoustic vowel space and articulatory working space for two adult…
NASA Technical Reports Server (NTRS)
Siegel, Peter H.; Ward, John; Maiwald, Frank; Mehdi, Imran
2007-01-01
Terahertz is the primary frequency range for line and continuum radiation from cool (5-100 K) gas (atoms and molecules) and dust. This viewgraph presentation reviews the reasons for interest in terahertz space applications; terahertz space missions past, present, and planned; terahertz source requirements and examples of some JPL instruments; and a case study of a flight deliverable: THz local oscillators for ESA's Herschel Space Telescope.
Reducing the Knowledge Tracing Space
ERIC Educational Resources Information Center
Ritter, Steven; Harris, Thomas K.; Nixon, Tristan; Dickison, Daniel; Murray, R. Charles; Towle, Brendon
2009-01-01
In Cognitive Tutors, student skill is represented by estimates of student knowledge on various knowledge components. The estimate for each knowledge component is based on a four-parameter model developed by Corbett and Anderson. In this paper, we investigate the nature of the parameter space defined by these four parameters by modeling data…
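The Corbett and Anderson four-parameter model referenced above is commonly known as Bayesian Knowledge Tracing, with parameters p(L0) (initial knowledge), p(T) (learning), p(G) (guess), and p(S) (slip). A minimal sketch of one update step, assuming the standard formulation (the default parameter values are placeholders, not fitted estimates):

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One Bayesian Knowledge Tracing step.

    First condition the knowledge estimate on the observed response
    (Bayes' rule with guess/slip noise), then apply the learning
    transition p(T) for the practice opportunity."""
    if correct:
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn
```

Each correct response pushes the estimate up and each error pulls it down, with the guess and slip parameters bounding how strongly a single observation can move it.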
Evaluating a common semi-mechanistic mathematical model of gene-regulatory networks
2015-01-01
Modeling and simulation of gene-regulatory networks (GRNs) has become an important aspect of modern systems biology investigations into mechanisms underlying gene regulation. A key challenge in this area is the automated inference (reverse-engineering) of dynamic, mechanistic GRN models from gene expression time-course data. Common mathematical formalisms for representing such models capture two aspects simultaneously within a single parameter: (1) Whether or not a gene is regulated, and if so, the type of regulator (activator or repressor), and (2) the strength of influence of the regulator (if any) on the target or effector gene. To accommodate both roles, "generous" boundaries or limits for possible values of this parameter are commonly allowed in the reverse-engineering process. This approach has several important drawbacks. First, in the absence of good guidelines, there is no consensus on what limits are reasonable. Second, because the limits may vary greatly among different reverse-engineering experiments, the concrete values obtained for the models may differ considerably, and thus it is difficult to compare models. Third, if high values are chosen as limits, the search space of the model inference process becomes very large, adding unnecessary computational load to the already complex reverse-engineering process. In this study, we demonstrate that restricting the limits to the [−1, +1] interval is sufficient to represent the essential features of GRN systems and offers a reduction of the search space without loss of quality in the resulting models. To show this, we have carried out reverse-engineering studies on data generated from artificial GRN systems and on data experimentally determined from real GRN systems. PMID:26356485
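A minimal sketch of simulating a semi-mechanistic GRN model with regulatory weights restricted to the [−1, +1] interval discussed above; the sigmoidal production term, rate constants, and integration scheme are illustrative assumptions, not necessarily the formalism used in the paper.

```python
import numpy as np

def simulate_grn(W, x0, max_rate=1.0, decay=0.5, dt=0.01, steps=1000):
    """Euler integration of a semi-mechanistic GRN model:

        dx_i/dt = max_rate * sigmoid(sum_j W[i, j] * x_j) - decay * x_i

    W[i, j] encodes both the sign (activator/repressor) and strength of
    regulation of gene i by gene j, clipped to the restricted [-1, 1]
    interval.  Returns expression levels after `steps` Euler steps."""
    W = np.clip(W, -1.0, 1.0)       # enforce the restricted parameter range
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        drive = 1.0 / (1.0 + np.exp(-W @ x))   # bounded production term
        x = x + dt * (max_rate * drive - decay * x)
    return x
```

Because the sigmoid saturates, the search space shrinks without losing the qualitative behaviors (activation, repression, bounded steady states) that the inference process must recover.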
Constructing Sample Space with Combinatorial Reasoning: A Mixed Methods Study
ERIC Educational Resources Information Center
McGalliard, William A., III.
2012-01-01
Recent curricular developments suggest that students at all levels need to be statistically literate and able to efficiently and accurately make probabilistic decisions. Furthermore, statistical literacy is a requirement to being a well-informed citizen of society. Research also recognizes that the ability to reason probabilistically is supported…
Quantitative Literacy Courses as a Space for Fusing Literacies
ERIC Educational Resources Information Center
Tunstall, Samuel Luke; Matz, Rebecca L.; Craig, Jeffrey C.
2016-01-01
In this article, we examine how students in a general education quantitative literacy course reason with public issues when unprompted to use quantitative reasoning. Michigan State University, like many institutions, not only has a quantitative literacy requirement for all undergraduates but also offers two courses specifically for meeting the…
Determination of the Parameter Sets for the Best Performance of IPS-driven ENLIL Model
NASA Astrophysics Data System (ADS)
Yun, Jongyeon; Choi, Kyu-Cheol; Yi, Jonghyuk; Kim, Jaehun; Odstrcil, Dusan
2016-12-01
The Interplanetary Scintillation-driven (IPS-driven) ENLIL model was jointly developed by the University of California, San Diego (UCSD) and the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The model has been in operation at the Korean Space Weather Center (KSWC) since 2014. The IPS-driven ENLIL model takes a variety of ambient solar wind parameters, and its results depend on the combination of these parameters. We have conducted research to determine the best combination of parameters to improve the performance of the IPS-driven ENLIL model. Model results for 1,440 combinations of input parameters are compared with Advanced Composition Explorer (ACE) observation data. In this way, the top 10 parameter sets showing the best performance were determined. Finally, the characteristics of these parameter sets were analyzed, and application of the results to the IPS-driven ENLIL model is discussed.
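The parameter-sweep comparison described above can be sketched as a grid search over candidate parameter values, scoring each model run against observations. The `run_model` callable stands in for an IPS-driven ENLIL run, and RMSE is used as one reasonable skill metric; both are illustrative assumptions, not the study's actual evaluation pipeline.

```python
import itertools
import numpy as np

def rank_parameter_sets(run_model, observed, grids, top_n=10):
    """Score every combination of ambient-solar-wind parameter values
    against observed data and return the `top_n` best-performing sets.

    `grids` maps parameter names to lists of candidate values;
    `run_model(**params)` must return a prediction aligned with
    `observed`.  Lower RMSE ranks higher."""
    names = list(grids)
    scored = []
    for combo in itertools.product(*(grids[n] for n in names)):
        params = dict(zip(names, combo))
        pred = np.asarray(run_model(**params))
        rmse = float(np.sqrt(np.mean((pred - observed) ** 2)))
        scored.append((rmse, params))
    scored.sort(key=lambda t: t[0])
    return scored[:top_n]
```

With three or four candidate values per parameter across several parameters, the product of the grids easily reaches combination counts like the 1,440 runs evaluated in the study.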
Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package
NASA Astrophysics Data System (ADS)
Cheng, L.; AghaKouchak, A.; Gilleland, E.
2013-12-01
Numerous studies show that climatic extremes have increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimation of return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for analysis of climate extremes under both stationary and non-stationary assumptions. The Nonstationary Extreme Value Analysis (hereafter, NEVA) provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters using a Differential Evolution Markov Chain (DE-MC) which utilizes the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach and has the advantage of simplicity, speed of calculation and convergence over conventional MCMC. NEVA also offers the confidence interval and uncertainty bounds of estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization and visualization, explicitly designed to facilitate analysis of extremes in geosciences. The generalized input and output files of this software package make it attractive for users from across different fields. Both stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
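As a point of reference for the return-level concept above: the T-year return level is the (1 − 1/T) quantile of the fitted extreme value distribution. The sketch below is a minimal stationary, frequentist GEV fit via SciPy, a much simpler counterpart to NEVA's Bayesian DE-MC machinery, for illustration only.

```python
import numpy as np
from scipy import stats

def gev_return_levels(annual_maxima, return_periods):
    """Fit a stationary GEV to block maxima and return the T-year
    return levels, i.e. the (1 - 1/T) quantiles of the fitted
    distribution, for each T in `return_periods`."""
    shape, loc, scale = stats.genextreme.fit(annual_maxima)
    return {T: float(stats.genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale))
            for T in return_periods}
```

Under nonstationarity the parameters (typically the location) become functions of time, so a single return level is replaced by a time-dependent curve with uncertainty bounds, which is what NEVA's posterior sampling provides.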
14 CFR 1203.410 - Limitations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Guides for... requires protection in the interest of national security; and (2) the information may reasonably be... agency; or to restrain competition. (b) Basic scientific research information not clearly related to the...
ERIC Educational Resources Information Center
Dittoe, William; Porter, Nat
2007-01-01
For more than a decade, educators and designers have been moving tentatively into uncharted waters. This article reports that administrators, faculty, and planners now recognize that learning spaces should be developed for reasons beyond utilization numbers. With declining retention and graduation rates, education institutions are acknowledging…
Spectral domain optical coherence tomography with extended depth-of-focus by aperture synthesis
NASA Astrophysics Data System (ADS)
Bo, En; Liu, Linbo
2016-10-01
We developed a spectral domain optical coherence tomography (SD-OCT) system with an extended depth of focus (DOF) achieved by aperture synthesis. For a designated Gaussian-shaped light source, the lateral resolution is determined by the numerical aperture (NA) of the objective lens and can be approximately maintained over the confocal parameter, defined as twice the Rayleigh range. The DOF, however, is proportional to the square of the lateral resolution. Consequently, a trade-off exists between DOF and lateral resolution, and researchers must judge which matters more for their application. In this study, three distinct optical apertures were obtained by embedding a circular phase spacer in the sample arm. Because of the optical path difference (OPD) between the three apertures introduced by the phase spacer, the three images were aligned with equal spacing along the z-axis. After correcting the OPD and the defocus-induced wavefront curvature, the three images at distinct depths were coherently summed. The system digitally refocuses the sample tissue and, when imaging polystyrene calibration beads, produced an image with higher lateral resolution maintained over the confocal parameter.
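The DOF/resolution trade-off follows from Gaussian beam optics: the confocal parameter scales with the square of the beam waist (and hence of the lateral spot size). A small numeric sketch, assuming a typical 840 nm SD-OCT center wavelength (an illustrative value, not this system's source):

```python
import math

wavelength = 840e-9  # assumed SD-OCT center wavelength, m

def confocal_parameter(spot_radius_w0):
    """Confocal parameter (DOF) for a Gaussian beam of waist w0.

    DOF = 2 * z_R = 2 * pi * w0**2 / lambda, so halving the spot size
    (better lateral resolution) cuts the usable depth range by 4x.
    """
    rayleigh_range = math.pi * spot_radius_w0 ** 2 / wavelength
    return 2.0 * rayleigh_range

for w0_um in (2.0, 4.0, 8.0):
    dof_um = confocal_parameter(w0_um * 1e-6) * 1e6
    print(f"w0 = {w0_um:4.1f} um -> DOF = {dof_um:8.1f} um")
```

The quadratic scaling is exactly why synthesizing several apertures, rather than simply stopping down the objective, is attractive.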
NASA Technical Reports Server (NTRS)
Jones, L. D.
1979-01-01
The Space Environment Test Division Post-Test Data Reduction Program processes data from test history tapes generated on the Flexible Data System in the Space Environment Simulation Laboratory at the National Aeronautics and Space Administration/Lyndon B. Johnson Space Center. The program reads the tape's data base records to retrieve the item directory conversion file, the item capture file and the process link file to determine the active parameters. The desired parameter names are read in by lead cards after which the periodic data records are read to determine parameter data level changes. The data is considered to be compressed rather than full sample rate. Tabulations and/or a tape for generating plots may be output.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, because the system uses overlapping, spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter.
However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as an MR compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall, the PET subsystem exhibits excellent resolution characteristics, mainly because the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and due to basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.
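The spherically symmetric basis functions discussed above are the standard generalized Kaiser-Bessel "blobs", parameterized by a support radius and a shape parameter. A minimal sketch of evaluating one radially (the radius and shape values here are illustrative, not the scanner defaults):

```python
import numpy as np
from scipy.special import i0

def kaiser_bessel_blob(r, radius=2.5, alpha=10.4):
    """Radially symmetric Kaiser-Bessel basis function (order m = 0).

    b(r) = I0(alpha * sqrt(1 - (r/radius)^2)) / I0(alpha) for r <= radius,
    and 0 outside the support. `radius` and `alpha` are the two parameters
    varied in the study; the values here are only illustrative.
    """
    r = np.asarray(r, dtype=float)
    inside = r <= radius
    out = np.zeros_like(r)
    arg = np.sqrt(1.0 - (r[inside] / radius) ** 2)
    out[inside] = i0(alpha * arg) / i0(alpha)
    return out

r = np.linspace(0.0, 4.0, 9)
print(kaiser_bessel_blob(r))  # smooth bell inside the support, zero outside
```

A larger radius with a small alpha produces a broad, flat blob, which is consistent with the resolution degradation reported for that corner of the parameter space.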
Reduction of lighting energy consumption in office buildings through improved daylight design
NASA Astrophysics Data System (ADS)
Papadouri, Maria Violeta Prado
This study investigates lighting energy consumption in office buildings and the options for its reduction. One way to reduce lighting energy consumption is to improve the daylight design. Better use of daylight in buildings can come from efforts in several directions: improvements to a building's fabric and layout, its materials, and even the furniture in a space all influence daylight quality considerably. The development of more efficient lighting technology, such as electric lighting control systems with photosensors and occupancy sensors, also plays a very important role, since both kinds of sensor ensure that electric light is not used without reason. Since the focus of this study is to find ways to improve daylight use in buildings, a consequent question is which methods are available to achieve this. The accuracy of the methodology used is also important for obtaining reliable results. The methodology applied in this study combines the analysis of a case study through field measurements with computer simulations. The first stage involved gathering information about the lighting design of the building and monitoring light levels from both natural and electric lighting. The second stage involved testing, through computer simulations, different parameters that were expected to improve the daylight exploitation of the specific area. The field measurements showed that the main problems of the space were low natural light levels and poor daylight distribution. The annual electric lighting energy consumption, calculated with computer simulations, was representative of a typical air-conditioned prestige office building (Energy Consumption Guide 19, Energy Use in Offices, 2000).
After several computer simulations, the results showed that the initial design parameters of a building can significantly affect the lighting energy consumption of a space. On the other hand, relatively small changes, such as altering surface reflectances and the lighting control systems, can make an even greater difference to the light quality of the space and the reduction of lighting energy consumption.
Space Shuttle Pad Exposure Period Meteorological Parameters STS-1 Through STS-107
NASA Technical Reports Server (NTRS)
Overbey, B. G.; Roberts, B. C.
2005-01-01
During the 113 missions of the Space Transportation System (STS) to date, the Space Shuttle fleet has been exposed to the elements on the launch pad for approx. 4,195 days. The Natural Environments Branch at Marshall Space Flight Center archives atmospheric environments to which the Space Shuttle vehicles are exposed. This Technical Memorandum (TM) provides a summary of the historical record of the meteorological conditions encountered by the Space Shuttle fleet during the pad exposure period. Parameters included in this TM are temperature, relative humidity, wind speed, wind direction, sea level pressure, and precipitation. Extremes for each of these parameters for each mission are also summarized. Sources for the data include meteorological towers and hourly surface observations. Data are provided from the first launch of the STS in 1981 through the launch of STS-107 in 2003.
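The per-mission extreme summaries described in the TM amount to grouped min/max reductions over the archived observations. A sketch with pandas, assuming a tidy observation table (the column names and values here are hypothetical, not the TM's actual archive format):

```python
import pandas as pd

# Hypothetical pad-exposure observations; column names are illustrative.
obs = pd.DataFrame({
    "mission": ["STS-1", "STS-1", "STS-2", "STS-2"],
    "temp_F":  [64.0, 88.5, 55.2, 79.1],
    "wind_kt": [4.0, 18.0, 7.5, 22.0],
})

# Per-mission extremes for each parameter, as in the TM's summaries.
extremes = obs.groupby("mission").agg(
    temp_min=("temp_F", "min"),
    temp_max=("temp_F", "max"),
    wind_max=("wind_kt", "max"),
)
print(extremes)
```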
Optimal Constellation Design for Maximum Continuous Coverage of Targets Against a Space Background
2012-05-31
constellation is considered with the properties shown in Table 13. The parameter hres refers to the number of equally spaced offset planes. [Table excerpt: mean anomaly, 180°; M0i, mean anomaly of lead satellite at epoch, 0°; R, omni-directional sensor range, 5000 km; m, initial polygon resolution, 50 PPC.] The Iridium constellation is a Walker Star; its idealized parameters are shown in Table 14, where hres again refers to the number of equally spaced offset planes.
Guo, Chaohua; Wei, Mingzhen; Liu, Hong
2018-01-01
Development of unconventional shale gas reservoirs (SGRs) has been boosted by advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process, including desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most studies use well-testing methods, which rely on many unrealistic simplifications and assumptions. No systematic work has considered all relevant transport mechanisms, and very few sensitivity studies of uncertain parameters use realistic parameter ranges. Hence, a detailed and systematic reservoir simulation study with MsFHW is still necessary. In this paper, a dual porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified with available field data from the Barnett Shale. The following mechanisms were considered in the model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Sensitivity analysis of SGR production performance with MsFHW was conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of matrix parameters were reviewed, and sensitivity analyses were conducted to analyze the effect of the above factors on the production performance of SGRs.
The comparison shows that hydraulic fracture parameters are more sensitive than reservoir parameters: reservoir parameters mainly affect the later production period, whereas hydraulic fracture parameters have a significant effect on gas production from the early period onward. The results of this study can improve the efficiency of the history-matching process and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs.
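The desorption mechanism named above follows the Langmuir isotherm, V(p) = V_L · p / (p_L + p). A minimal sketch (the Langmuir volume and pressure values are illustrative, not the paper's Barnett Shale fit):

```python
def langmuir_adsorbed_volume(p, v_l=100.0, p_l=4.5):
    """Langmuir isotherm: adsorbed gas volume V(p) = V_L * p / (p_L + p).

    v_l (Langmuir volume) and p_l (Langmuir pressure, same units as p)
    are illustrative values, not the paper's fitted parameters.
    """
    return v_l * p / (p_l + p)

# As reservoir pressure drops during production, gas desorbs from the matrix:
for p in (20.0, 10.0, 5.0, 1.0):
    print(f"p = {p:5.1f} -> adsorbed V = {langmuir_adsorbed_volume(p):6.1f}")
```

At p = p_L the adsorbed volume is exactly half of V_L, which is the usual interpretation of the Langmuir pressure.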
Estimability of geodetic parameters from space VLBI observables
NASA Technical Reports Server (NTRS)
Adam, Jozsef
1990-01-01
The feasibility of space very long baseline interferometry (VLBI) observables for geodesy and geodynamics is investigated. A brief review of space VLBI systems from the point of view of potential geodetic applications is given. A single notational convention is used to jointly treat the VLBI observables of the different baseline types within a combined ground/space VLBI network. The basic equations of the space VLBI observables appropriate for covariance analysis are derived, and the corresponding equations for ground-to-ground baseline VLBI observables are given for comparison. The simplified expressions of the mathematical models for both space VLBI observables (time delay and delay rate) include the ground station coordinates, the satellite orbital elements, the Earth rotation parameters, the radio source coordinates, and clock parameters. The observation equations with these parameters were examined to determine which of them are separable or nonseparable. Singularity problems arising from the coordinate system definition and from critical configurations are studied, and linear dependencies between partials are derived analytically. The mathematical models for ground-space baseline VLBI observables were tested with simulated data in a series of numerical experiments. Singularity due to datum defect is confirmed.
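Nonseparable parameters and datum defects of the kind studied here show up numerically as rank deficiency of the linearized design matrix: a linear dependence between partials produces a (near-)zero singular value. A small sketch with a toy matrix (not the actual space VLBI partials):

```python
import numpy as np

# Toy design matrix for a linearized observation model: columns are partials
# of the observables with respect to parameters (values are illustrative).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 4))
A = np.column_stack([A, A[:, 0] + A[:, 1]])  # 5th column linearly dependent

# A datum defect / critical configuration shows up as near-zero singular
# values: those parameter combinations are not estimable from the data.
s = np.linalg.svd(A, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))
print(f"columns = {A.shape[1]}, numerical rank = {rank}")
```

The null-space vectors of such a matrix identify exactly which parameter combinations must be fixed (e.g., by the datum definition) before the rest become estimable.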
Dynamics in the Parameter Space of a Neuron Model
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2012-06-01
Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that depending on the combination of parameters, a typical scenario can be preserved: for some choice of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.
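Parameter-plane diagrams of this kind color each parameter pair by the largest Lyapunov exponent, computed from the growth rate of a tangent vector along the orbit. A minimal sketch of that computation, using the two-dimensional Hénon map as a stand-in for the paper's four-dimensional Hindmarsh-Rose flow:

```python
import numpy as np

def largest_lyapunov(a, b, n=5000, burn=500):
    """Largest Lyapunov exponent of the Henon map via tangent-vector growth.

    The Henon map is a low-dimensional stand-in for the 4D flow in the
    paper; scanning (a, b) and coloring by the sign of this exponent
    produces the same kind of parameter-plane diagram.
    """
    x, y = 0.1, 0.1
    v = np.array([1.0, 0.0])
    acc = 0.0
    for i in range(n):
        J = np.array([[-2.0 * a * x, 1.0], [b, 0.0]])  # Jacobian at (x, y)
        x, y = 1.0 - a * x * x + y, b * x              # iterate the map
        v = J @ v
        norm = np.linalg.norm(v)
        v /= norm                                      # renormalize each step
        if i >= burn:                                  # skip the transient
            acc += np.log(norm)
    return acc / (n - burn)

# A chaotic point vs. a periodic point of the (a, b) parameter plane:
print(largest_lyapunov(1.4, 0.3), largest_lyapunov(0.9, 0.3))
```

A positive exponent marks the chaotic (comb-shaped) regions; negative values mark the periodic background.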
Supplemental optical specifications for imaging systems: parameters of phase gradient
NASA Astrophysics Data System (ADS)
Xuan, Bin; Li, Jun-Feng; Wang, Peng; Chen, Xiao-Ping; Song, Shu-Mei; Xie, Jing-Jiang
2009-12-01
Specifications of phase error, peak-to-valley (PV), and root mean square (rms) cannot adequately represent the properties of a wavefront because they carry no information about spatial frequency. Power spectral density is a parameter that is especially effective at indicating the frequency regime; however, it is not convenient for opticians to use. The phase gradient parameters, PV gradient and rms gradient, correlate most strongly with the point-spread function of an imaging system, and they provide clear guidance for manufacture. The algorithms for the gradient parameters have been modified to represent image quality better. To demonstrate the analyses, an experimental spherical mirror was worked out. The results show that imaging performance can be maintained while manufacturing difficulty is decreased when a reasonable trade-off is made between specifications of phase error and phase gradient.
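A minimal sketch of computing gradient-based figures of merit for a wavefront map (this is the straightforward finite-difference version, not the modified algorithms of the paper; the toy wavefront is invented for illustration):

```python
import numpy as np

# Toy wavefront map (in waves) on a uniform grid: a low-order bump plus a
# high-frequency ripple that PV/rms alone would barely distinguish.
n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
w = 0.05 * (x**2 + y**2) + 0.002 * np.sin(20 * np.pi * x)

# Slope magnitude from finite differences (grid spacing in aperture units).
gy, gx = np.gradient(w, 2.0 / (n - 1))
slope = np.hypot(gx, gy)

# Gradient analogues of the usual PV and rms wavefront-error figures:
pv_gradient = slope.max() - slope.min()
rms_gradient = np.sqrt(np.mean(slope**2))
print(f"PV gradient = {pv_gradient:.4f}, rms gradient = {rms_gradient:.4f}")
```

Because the ripple contributes little to the phase PV but strongly to the slope, gradient figures penalize exactly the mid/high-frequency errors that degrade the point-spread function.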
NASA Astrophysics Data System (ADS)
Roth, Wolff-Michael; Milkent, Marlene M.
This study was designed as a test of two neo-Piagetian theories. More specifically, this research examined the relationships between the development of proportional reasoning strategies and three cognitive variables from Pascual-Leone's and Case's neo-Piagetian theories. A priori hypotheses linked the number of problems students worked until they induced a proportional reasoning strategy to the variables of M-space, degree of field dependence, and short-term storage space. The subjects were students enrolled in Physical Science I, a science course for nonscience majors at the University of Southern Mississippi. Of the 34 subjects in the study, 23 were classified as concrete operational on the basis of eight ratio tasks. Problems corresponding to five developmental levels of proportional reasoning (according to Piagetian and neo-Piagetian theory) were presented by a microcomputer to the 23 subjects who had been classified as concrete operational. After a maximum of 6 hours of treatment, 17 of the 23 subjects had induced ratio schemata at the upper formal level (IIIB), while the remaining subjects used lower formal level (IIIA) schemata. The data analyses showed that neither M-space nor degree of field dependence, alone or in combination, nor short-term storage space predicted the number of problems students needed to do until they induced an appropriate problem-solving strategy. However, there were significant differences in the short-term storage space of those subjects who mastered ratio problems at the highest level and those who did not. Also, the subjects' degree of field dependence was not a predictor of either the ability to transfer problem-solving strategies to a new setting or the reuse of inappropriate strategies. The results of this study also suggest that short-term storage space is a variable with high correlations to a number of aspects of learning, such as transfer and choice of strategy after feedback.
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
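The bound-narrowing idea at the heart of LOBARE can be sketched as follows. This is a schematic of the "parent bound" updating step only, with a toy least-squares objective; it omits the Bayesian BARE machinery that the actual method runs inside each box:

```python
import numpy as np

def narrow_bounds(loss, lower, upper, n_samples=200, keep=20, iters=5, seed=0):
    """Iteratively shrink a parameter box toward low-loss regions.

    Sample the current box, rank candidates by fitness, and tighten the
    'parent' bounds to the envelope of the best candidates. A schematic of
    LOBARE's bound updating, not its Bayesian recursive estimation.
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.array(lower, float), np.array(upper, float)
    for _ in range(iters):
        pts = rng.uniform(lower, upper, size=(n_samples, lower.size))
        best = pts[np.argsort([loss(p) for p in pts])[:keep]]
        lower, upper = best.min(axis=0), best.max(axis=0)
    return lower, upper

# Toy objective with its optimum at (2, -1); the box collapses around it.
lo, hi = narrow_bounds(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2,
                       lower=[-10, -10], upper=[10, 10])
print(lo, hi)
```

Each pass needs only modest sampling, which mirrors the paper's claim of faster convergence with fewer parameter sets and minimal calibration data.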
Sun, Xiaodian; Jin, Li; Xiong, Momiao
2008-01-01
It is system dynamics that determines the function of cells, tissues and organisms. Developing mathematical models and estimating their parameters is essential for studying the dynamic behavior of biological systems, including metabolic networks, genetic regulatory networks and signal transduction pathways, under perturbation by external stimuli. In general, biological dynamic systems are only partially observed. Therefore, a natural way to model dynamic biological systems is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models in biological dynamic systems have been developed intensively in recent years, the estimation of both states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate its performance for parameter estimation, we apply the EKF to a simulation dataset and two real datasets: the JAK-STAT and Ras/Raf/MEK/ERK signal transduction pathway datasets. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks. PMID:19018286
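A minimal sketch of the joint state/parameter EKF idea: augment the state with the unknown parameter and filter the augmented system. The toy model here is a discrete logistic-growth map with unknown rate r, invented for illustration (not the paper's JAK-STAT or Ras/Raf/MEK/ERK models):

```python
import numpy as np

# Simulate noisy observations of x' = x + r*x*(1-x) with r unknown to the filter.
rng = np.random.default_rng(3)
r_true, x = 0.3, 0.1
zs = []
for _ in range(100):
    x = x + r_true * x * (1 - x)
    zs.append(x + rng.normal(0, 0.02))

m = np.array([0.1, 0.8])          # initial guess for [x, r]; r is badly wrong
P = np.diag([0.1, 1.0])           # initial covariance
Q = np.diag([1e-6, 1e-6])         # small process noise keeps the filter alert
R = 0.02 ** 2                     # measurement noise variance
H = np.array([[1.0, 0.0]])        # we observe x only
for z in zs:
    xk, rk = m
    # Predict: propagate the mean through f, the covariance through its Jacobian.
    m = np.array([xk + rk * xk * (1 - xk), rk])
    F = np.array([[1 + rk * (1 - 2 * xk), xk * (1 - xk)], [0.0, 1.0]])
    P = F @ P @ F.T + Q
    # Update with the scalar observation z = x + noise.
    S = (H @ P @ H.T + R).item()
    K = (P @ H.T / S).ravel()
    m = m + K * (z - m[0])
    P = P - np.outer(K, (H @ P).ravel())
print(f"estimated r = {m[1]:.3f} (true {r_true})")
```

The cross-covariance built up by the Jacobian's off-diagonal term is what lets observations of x alone correct the parameter estimate.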
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
NASA Astrophysics Data System (ADS)
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper aims at estimating the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations these parameters may change with the observation conditions and with the difficulty of predicting human positions. In this paper we therefore formulate adaptive parameter estimation using a general state-space model. First we explain how to formulate human tracking in a general state-space model and describe its components. Then, following previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state-space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. The results show that sequential parameter estimation succeeded and was consistent with observation conditions such as occlusions.
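The Bhattacharyya coefficient at the core of the observation model can be sketched directly. It scores the overlap of two normalized histograms; the histograms below are invented for illustration:

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """BC(p, q) = sum_i sqrt(p_i * q_i) for two normalized histograms.

    In appearance-based tracking this scores how well a candidate region's
    histogram matches the tracked person's reference histogram (1 = identical).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

ref = [10, 30, 40, 20]     # reference appearance histogram (illustrative)
same = [12, 28, 41, 19]    # candidate region over the same person
other = [40, 10, 5, 45]    # candidate region over a different person
print(bhattacharyya_coefficient(ref, same), bhattacharyya_coefficient(ref, other))
```

Turning this score into an observation likelihood with one unknown sharpness parameter, and estimating that parameter online, is the step the paper formalizes.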
NASA Technical Reports Server (NTRS)
1972-01-01
The space shuttle fact sheet is presented. Four important reasons for the program are considered to be: (1) It is the only meaningful new manned space program which can be accomplished on a modest budget. (2) It is needed to make space operations less complex and costly. (3) It is required for scientific applications in civilian and military activities. (4) It will encourage greater international participation in space flight. The space shuttle and orbiter configurations are discussed along with the missions. The scope of the study and the costs of each contract for the major contractor are listed.
Reflexive reasoning for distributed real-time systems
NASA Technical Reports Server (NTRS)
Goldstein, David
1994-01-01
This paper discusses the implementation and use of reflexive reasoning in real-time, distributed knowledge-based applications. Recently there has been a great deal of interest in agent-oriented systems. Implementing such systems implies a mechanism for sharing knowledge, goals and other state information among the agents. Our techniques facilitate an agent examining both state information about other agents and the parameters of the knowledge-based system shell implementing its reasoning algorithms. The shell implementing the reasoning is the Distributed Artificial Intelligence Toolkit, which is a derivative of CLIPS.
Proposed Cavity for Reduced Slip-Stacking Loss
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, J.; Zwaska, R.
This paper employs a novel dynamical mechanism to improve the performance of slip-stacking. Slip-stacking is an accumulation technique used at Fermilab since 2004 which nearly doubles the proton intensity. During slip-stacking, the Recycler or the Main Injector stores two particle beams that spatially overlap but have different momenta. The two particle beams are longitudinally focused by two 53 MHz, 100 kV RF cavities with a small frequency difference between them. We propose an additional 106 MHz, 20 kV RF cavity with a frequency at double the average of the upper and lower main RF frequencies. In simulation, we find the proposed RF cavity significantly enhances the stable bucket area and reduces slip-stacking losses under reasonable injection scenarios. We quantify and map the stability of the parameter space for any accelerator implementing slip-stacking with the addition of a harmonic RF cavity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Ke; Linden, Tim, E-mail: kefang@umd.edu, E-mail: linden.70@osu.edu
Radio observations at multiple frequencies have detected a significant isotropic emission component between 22 MHz and 10 GHz, commonly termed the ARCADE-2 Excess. The origin of this radio emission is unknown, as the intensity, spectrum and isotropy of the signal are difficult to model with either traditional astrophysical mechanisms or novel physics such as dark matter annihilation. We posit a new model capable of explaining the key components of the excess radio emission. Specifically, we show that the re-acceleration of non-thermal electrons by turbulence in merging galaxy clusters can explain the intensity, spectrum, and isotropy of the ARCADE-2 data. We examine the parameter spaces of cluster re-acceleration, magnetic field, and merger rate, finding that the radio excess can be reproduced under reasonable assumptions for each. Finally, we point out that future observations will definitively confirm or rule out the contribution of cluster mergers to the isotropic radio background.
Analysis of the Cape Cod tracer data
Ezzedine, Souheil; Rubin, Yoram
1997-01-01
An analysis of the Cape Cod test was performed using several first- and higher-order theoretical models. We compare conditional and unconditional solutions of the transport equation and employ them for analysis of the experimental data. We consider spatial moments, mass breakthrough curves, and the distribution of the solute mass in space. The concentration measurements were also analyzed using theoretical models for the expected value and variance of concentration. The theoretical models we employed are based on the spatial correlation structure of the conductivity field, without any fitting of parameters to the tracer data; hence we can test the predictive power of the theories. The effects of recharge on macrodispersion are investigated, and it is shown that recharge provides a reasonable explanation for the enhanced lateral spread of the Cape Cod plume. The compendium of the experimental results presented here is useful for testing of theoretical and numerical models.
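The spatial-moment analysis mentioned above reduces, in one dimension, to integrals of the concentration field: the zeroth moment gives the mass, the first the centroid, and the second central moment the spread whose growth rate yields macrodispersion. A synthetic sketch (the field test used 3D measurements; this 1D Gaussian plume is invented for illustration):

```python
import numpy as np

x = np.linspace(0.0, 100.0, 201)                 # sampling locations, m
c = np.exp(-0.5 * ((x - 40.0) / 6.0) ** 2)       # synthetic concentration

dx = x[1] - x[0]
mass = (c * dx).sum()                            # zeroth spatial moment
centroid = (x * c * dx).sum() / mass             # first moment: plume center
variance = ((x - centroid) ** 2 * c * dx).sum() / mass  # second central moment
print(f"centroid = {centroid:.1f} m, spread (std) = {np.sqrt(variance):.1f} m")
```

Tracking how the centroid and variance evolve between sampling rounds gives the effective velocity and the longitudinal/lateral macrodispersivities compared against theory in the paper.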
Difficult Decisions Made Easier
NASA Technical Reports Server (NTRS)
2006-01-01
NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or if an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goal of this research was multifold. It included performance and risk analyses of complex, multiphase missions, like the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petiteau, Antoine; Shang Yu; Babak, Stanislav
Coalescing massive black hole binaries are the strongest and probably the most important gravitational wave sources in the LISA band. The spin and orbital precessions bring complexity to the waveform and make the likelihood surface richer in structure than in the nonspinning case. We introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the mock LISA data challenge (MLDC3.2). The performance of this method is comparable to, if not better than, existing algorithms. We found all five sources present in MLDC3.2 and recovered the coalescence time, chirp mass, mass ratio, and sky location with reasonable accuracy. As for the orbital angular momentum and the two spins of the black holes, we found a large number of widely separated modes in the parameter space with similar maximum likelihood values.
Evolutionary fuzzy modeling human diagnostic decisions.
Peña-Reyes, Carlos Andrés
2004-05-01
Fuzzy CoCo is a methodology, combining fuzzy logic and evolutionary computation, for constructing systems able to accurately predict the outcome of a human decision-making process while providing an understandable explanation of the underlying reasoning. Fuzzy logic provides a formal framework for constructing systems exhibiting both good numeric performance (accuracy) and linguistic representation (interpretability). However, fuzzy modeling, meaning the construction of fuzzy systems, is an arduous task demanding the identification of many parameters. To solve it, we use evolutionary computation techniques (specifically cooperative coevolution), which are widely used to search for adequate solutions in complex spaces. We have successfully applied the algorithm to model the decision processes involved in two breast cancer diagnostic problems, the WBCD problem and the Catalonia mammography interpretation problem, obtaining systems of both high performance and high interpretability. For the Catalonia problem, an evolved system was embedded within a Web-based tool, called COBRA, for aiding radiologists in mammography interpretation.
Probing lepton flavor violation signal via γ γ →l¯ilj in the left-right twin Higgs model at the ILC
NASA Astrophysics Data System (ADS)
Liu, Guo-Li; Wang, Fei; Xie, Kuan; Guo, Xiao-Fei
2017-08-01
To explain the small neutrino masses, heavy Majorana neutrinos are introduced in the left-right twin Higgs model. The heavy neutrinos—together with the charged scalars and the heavy gauge bosons—may contribute large mixings between the neutrinos and the charged leptons, which may induce some distinct lepton-flavor-violating processes. We check ℓ¯iℓj (i ,j =e ,μ ,τ ,i ≠j ) production in γ γ collisions in the left-right twin Higgs model, and find that the production rates may be large in some specific parameter space. In optimal cases, it is even possible to detect them with reasonable kinematical cuts. We also show that these collisions can effectively constrain the model parameters—such as the Higgs vacuum expectation value, the right-handed neutrino mass, etc.—and may serve as a sensitive probe of this new physics model.
Medical Rapid Response in Psychiatry: Reasons for Activation and Immediate Outcome.
Manu, Peter; Loewenstein, Kristy; Girshman, Yankel J; Bhatia, Padam; Barnes, Maira; Whelan, Joseph; Solderitch, Victoria A; Rogozea, Liliana; McManus, Marybeth
2015-12-01
Rapid response teams are used to improve the recognition of acute deteriorations in medical and surgical settings. They are activated by abnormal physiological parameters, symptoms, or clinical concern, and are believed to decrease hospital mortality rates. We evaluated the reasons for activation and the outcome of rapid response interventions in a 222-bed psychiatric hospital in New York City using data obtained at the time of all activations from January through November, 2012. The primary outcome was the admission rate to a medical or surgical unit for each of the main reasons for activation. The 169 activations were initiated by nursing staff (78.7 %) and psychiatrists (13 %) for acute changes in condition (64.5 %), abnormal physiological parameters (27.2 %) and non-specified concern (8.3 %). The most common reasons for activation were chest pain (14.2 %), fluctuating level of consciousness (9.5 %), hypertension (9.5 %), syncope or fall (8.9 %), hypotension (8.3 %), dyspnea (7.7 %) and seizures (5.9 %). The rapid response team transferred 127 (75.2 %) patients to the Emergency Department and 46 (27.2 %) were admitted to a medical or surgical unit. The admission rates were statistically similar for acute changes in condition, abnormal physiological parameters, and clinicians' concern. In conclusion, a majority of rapid response activations in a free-standing psychiatric hospital were initiated by nursing staff for changes in condition, rather than for policy-specified abnormal physiological parameters. The findings suggest that a rapid response system may empower psychiatric nurses to use their clinical skills to identify patients requiring urgent transfer to a general hospital.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-11
... Suppliers: Boeing Space and Intelligence Systems, Space Exploration Technologies Corp., Aon plc. Obligor... for final commitment for a long-term loan or financial guarantee in excess of $100 million. Reason for...
Application of Optimization Techniques to Design of Unconventional Rocket Nozzle Configurations
NASA Technical Reports Server (NTRS)
Follett, W.; Ketchum, A.; Darian, A.; Hsu, Y.
1996-01-01
Several current rocket engine concepts, such as the bell-annular tri-propellant engine and the linear aerospike being proposed for the X-33, require unconventional three-dimensional rocket nozzles which must conform to rectangular or sector-shaped envelopes to meet integration constraints. These types of nozzles exist outside the current experience database; therefore, the application of efficient design methods for these propulsion concepts is critical to the success of launch vehicle programs. The objective of this work is to optimize several different nozzle configurations, including two- and three-dimensional geometries. Methodology includes coupling computational fluid dynamic (CFD) analysis to genetic algorithms and Taguchi methods, as well as implementation of a streamline tracing technique. Results of applications are shown for several geometries, including: three-dimensional thruster nozzles with round or super-elliptic throats and rectangular exits, two- and three-dimensional thrusters installed within a bell nozzle, and three-dimensional thrusters with round throats and sector-shaped exits. Due to the novel designs considered for this study, there is little experience which can be used to guide the effort and limit the design space. With a nearly infinite parameter space to explore, simple parametric design studies cannot possibly search the entire design space within the time frame required to impact the design cycle. For this reason, robust and efficient optimization methods are required to explore and exploit the design space to achieve high performance engine designs. Five case studies which examine the application of various techniques in the engineering environment are presented in this paper.
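The coupling of a genetic algorithm to an expensive analysis code can be sketched as follows, with the CFD evaluation replaced by a cheap analytic surrogate (the objective function, parameter names, and ranges are hypothetical, not the nozzle study's):

```python
import random

def surrogate_loss(throat, exit_width):
    """Stand-in for a CFD figure of merit; hypothetical optimum at (1.0, 2.0)."""
    return (throat - 1.0) ** 2 + (exit_width - 2.0) ** 2

def evolve(pop_size=40, generations=60, seed=0):
    """Elitist real-coded GA: truncation selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 3.0), rng.uniform(0.0, 4.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: surrogate_loss(*p))
        parents = pop[: pop_size // 2]                  # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()                            # blend crossover
            child = tuple(w * x + (1.0 - w) * y for x, y in zip(a, b))
            child = tuple(g + rng.gauss(0.0, 0.05) for g in child)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda p: surrogate_loss(*p))

best_design = evolve()
```

In the actual workflow each `surrogate_loss` call would be a full CFD run, which is why population size and generation count become the dominant cost drivers.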
NASA Astrophysics Data System (ADS)
Meda, Adimurthy; Katti, Vadiraj V.
2017-08-01
The present work experimentally investigates the local distribution of wall static pressure and the heat transfer coefficient on a rough flat plate impinged by a slot air jet. The experimental parameters include nozzle-to-plate spacing (Z/Dh = 0.5-10.0), axial distance from the stagnation point (x/Dh), size of the detached rib (b = 4-12 mm), and Reynolds number (Re = 2500-20,000). The wall static pressure on the surface is recorded using a Pitot tube and a differential pressure transmitter. Infrared thermal imaging is used to capture the temperature distribution on the target surface. It is observed that the maximum wall static pressure occurs at the stagnation point (x/Dh = 0) for all nozzle-to-plate spacings (Z/Dh) and rib dimensions studied. The coefficient of wall static pressure (Cp) decreases monotonically with x/Dh. Sub-atmospheric pressure is evident in the detached rib configurations for jet-to-plate spacings up to 6.0 for all ribs studied. The sub-atmospheric region is stronger at Z/Dh = 0.5 due to the fluid accelerating under the rib. As nozzle-to-plate spacing (Z/Dh) increases, the sub-atmospheric region weakens and gradually vanishes. Reasonable enhancement in both Cp and Nu is observed for the detached rib configurations, and the enhancement decreases with increasing rib width. The results of the study can be used in optimizing cooling system design.
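The wall static pressure coefficient is presumably defined in the usual way for impinging jets, normalizing the gauge pressure by the jet dynamic pressure; a small sketch under that assumption (the numbers are illustrative, not measured values):

```python
def pressure_coefficient(p_wall, p_ambient, rho, jet_velocity):
    """C_p = (p_w - p_amb) / (0.5 * rho * U_j**2); negative values are sub-atmospheric."""
    return (p_wall - p_ambient) / (0.5 * rho * jet_velocity ** 2)

# ideal stagnation point: wall pressure exceeds ambient by one dynamic pressure
cp_stagnation = pressure_coefficient(101385.0, 101325.0, 1.2, 10.0)  # -> 1.0
```

Under this definition, the sub-atmospheric region reported under the detached rib simply corresponds to Cp < 0.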
Gong, Xingchu; Chen, Huali; Chen, Teng; Qu, Haibin
2014-01-01
The quality by design (QbD) concept is a paradigm for the improvement of botanical injection quality control. In this work, the water precipitation process for the manufacturing of Xueshuantong injection, a botanical injection made from Notoginseng Radix et Rhizoma, was optimized using a design space approach as an example. Saponin recovery and total saponin purity (TSP) in the supernatant were identified as the critical quality attributes (CQAs) of water precipitation using a risk assessment covering all the processes of Xueshuantong injection. An Ishikawa diagram and fractional factorial design experiments were applied to determine the critical process parameters (CPPs). Dry matter content of concentrated extract (DMCC), amount of water added (AWA), and stirring speed (SS) were identified as CPPs. Box-Behnken designed experiments were carried out to develop models between CPPs and process CQAs. Determination coefficients were higher than 0.86 for all the models. High TSP in the supernatant can be obtained when DMCC is low and SS is high. Saponin recoveries decreased as DMCC increased. Incomplete collection of supernatant was the main reason for the loss of saponins. The design space was calculated using a Monte-Carlo simulation method with an acceptable probability of 0.90. The recommended normal operation region is located at DMCC of 0.38-0.41 g/g, AWA of 3.7-4.9 g/g, and SS of 280-350 rpm, with a probability of more than 0.919 of attaining the CQA criteria. Verification experiment results showed that operating DMCC, SS, and AWA within the design space can attain the CQA criteria with high probability.
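The probability-based design space idea can be sketched with a Monte-Carlo simulation: an operating point belongs to the design space when the probability of meeting the CQA specification, under the fitted model's residual uncertainty, exceeds the acceptable level (0.90 here). The response surface, specification, and noise level below are illustrative placeholders, not the paper's fitted values:

```python
import random

def predicted_tsp(dmcc, stirring_speed):
    # hypothetical response surface: purity falls with DMCC, rises with speed
    return 72.0 - 40.0 * dmcc + 0.02 * stirring_speed

def prob_meets_spec(dmcc, ss, spec=60.0, resid_sd=0.8, n=20000, seed=1):
    """Monte-Carlo estimate of P(TSP >= spec) at one operating point."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if predicted_tsp(dmcc, ss) + rng.gauss(0.0, resid_sd) >= spec)
    return hits / n

inside = prob_meets_spec(0.40, 320.0)   # high probability: inside the design space
outside = prob_meets_spec(0.55, 280.0)  # low probability: outside it
```

Sweeping `prob_meets_spec` over a grid of (DMCC, AWA, SS) and thresholding at 0.90 is the essence of how such probability-based design spaces are mapped.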
Plasma measurement by optical visualization and triple probe method under high-speed impact
NASA Astrophysics Data System (ADS)
Sakai, T.; Umeda, K.; Kinoshita, S.; Watanabe, K.
2017-02-01
High-speed impact on spacecraft by space debris poses a threat. When a high-speed projectile collides with a target, it is conceivable that the heat created by the impact causes severe damage at the impact point. Investigation of the temperature is necessary for elucidation of high-speed impact phenomena. However, it is very difficult to measure the temperature with standard methods for two main reasons. One reason is that a thermometer placed on the target is instantaneously destroyed upon impact. The other is that standard methods lack the time resolution to measure the transient temperature changes. In this study, the plasma induced by high-speed impact was measured to estimate temperature changes near the impact point. High-speed impact experiments were performed with a vertical gas gun. The projectile speed was approximately 700 m/s, and the target material was A5052. The data needed to calculate the plasma parameters of electron temperature and electron density were measured by the triple probe method. In addition, the diffusion behavior of the plasma was observed by an optical visualization technique using a high-speed camera. The frame rate and the exposure time were 260 kfps and 1.0 μs, respectively. These images are considered to be evidence of the validity of the plasma measurement. The experimental results showed that plasma signals were detected for around 70 μs, and the rising phase of the waveform was in good agreement with the timing of the optical visualization image when the plasma arrived at the tip of the triple probe.
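For an ideal symmetric triple probe (equal collection areas, floating middle probe), the electron temperature follows from the two measured voltage differences through a standard transcendental relation; a sketch solving it by bisection (the voltage values are illustrative, not the experiment's data):

```python
import math

def electron_temperature(v_d2, v_d3, lo=0.01, hi=100.0, tol=1e-9):
    """Solve (1 - exp(-v_d2/Te)) / (1 - exp(-v_d3/Te)) = 1/2 for Te in eV
    (voltages in volts), the ideal equal-area triple-probe relation."""
    def f(te):
        return (1.0 - math.exp(-v_d2 / te)) / (1.0 - math.exp(-v_d3 / te)) - 0.5

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# illustrative voltage differences in volts
te_ev = electron_temperature(2.0, 30.0)
```

With Te in hand, the electron density follows from the measured probe current and the ion saturation current expression, which is the other half of the triple probe analysis.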
Space Vehicle Reliability Modeling in DIORAMA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tornga, Shawn Robert
When modeling system performance of space based detection systems it is important to consider spacecraft reliability. As space vehicles age the components become prone to failure for a variety of reasons such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust fuel supplies. Typically failure is divided into two categories: engineering mistakes and technology surprise. This document will report on a method of simulating space vehicle reliability in the DIORAMA framework.
SLS-SPEC-159 Cross-Program Design Specification for Natural Environments (DSNE) Revision E
NASA Technical Reports Server (NTRS)
Roberts, Barry C.
2017-01-01
The DSNE completes environment-related specifications for architecture, system-level, and lower-tier documents by specifying the ranges of environmental conditions that must be accounted for by NASA ESD Programs. To assure clarity and consistency, and to prevent requirements documents from becoming cluttered with extensive amounts of technical material, natural environment specifications have been compiled into this document. The intent is to keep a unified specification for natural environments that each Program calls out for appropriate application. This document defines the natural environments parameter limits (maximum and minimum values, energy spectra, or precise model inputs, assumptions, model options, etc.), for all ESD Programs. These environments are developed by the NASA Marshall Space Flight Center (MSFC) Natural Environments Branch (MSFC organization code: EV44). Many of the parameter limits are based on experience with previous programs, such as the Space Shuttle Program. The parameter limits contain no margin and are meant to be evaluated individually to ensure they are reasonable (i.e., do not apply unrealistic extreme-on-extreme conditions). The natural environments specifications in this document should be accounted for by robust design of the flight vehicle and support systems. However, it is understood that in some cases the Programs will find it more effective to account for portions of the environment ranges by operational mitigation or acceptance of risk in accordance with an appropriate program risk management plan and/or hazard analysis process. The DSNE is not intended as a definition of operational models or operational constraints, nor is it adequate, alone, for ground facilities which may have additional requirements (for example, building codes and local environmental constraints). "Natural environments," as the term is used here, refers to the environments that are not the result of intended human activity or intervention. 
It consists of a variety of external environmental factors (most of natural origin and a few of human origin) which impose restrictions or otherwise impact the development or operation of flight vehicles and destination surface systems.
Optimizing the fine lock performance of the Hubble Space Telescope fine guidance sensors
NASA Technical Reports Server (NTRS)
Eaton, David J.; Whittlesey, Richard; Abramowicz-Reed, Linda; Zarba, Robert
1993-01-01
This paper summarizes the on-orbit performance to date of the three Hubble Space Telescope Fine Guidance Sensors (FGS's) in Fine Lock mode, with respect to acquisition success rate, ability to maintain lock, and star brightness range. The process of optimizing Fine Lock performance, including the reasoning underlying the adjustment of uplink parameters, and the effects of optimization are described. The Fine Lock optimization process has combined theoretical and experimental approaches. Computer models of the FGS have improved understanding of the effects of uplink parameters and fine error averaging on the ability of the FGS to acquire stars and maintain lock. Empirical data have determined the variation of the interferometric error characteristics (so-called 's-curves') between FGS's and over each FGS field of view, identified binary stars, and quantified the systematic error in Coarse Track (the mode preceding Fine Lock). On the basis of these empirical data, the values of the uplink parameters can be selected more precisely. Since launch, optimization efforts have improved FGS Fine Lock performance, particularly acquisition, which now enjoys a nearly 100 percent success rate. More recent work has been directed towards improving FGS tolerance of two conditions that exceed its original design requirements. First, large amplitude spacecraft jitter is induced by solar panel vibrations following day/night transitions. This jitter is generally much greater than the FGS's were designed to track, and while the tracking ability of the FGS's has been shown to exceed design requirements, losses of Fine Lock after day/night transitions are frequent. Computer simulations have demonstrated a potential improvement in Fine Lock tracking of vehicle jitter near terminator crossings. 
Second, telescope spherical aberration degrades the interferometric error signal in Fine Lock, but use of the FGS two-thirds aperture stop restores the transfer function with a corresponding loss of throughput. This loss requires the minimum brightness of acquired stars to be about one magnitude brighter than originally planned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lenardic, A.; Crowley, J. W., E-mail: ajns@rice.edu, E-mail: jwgcrowley@gmail.com
2012-08-20
A model of coupled mantle convection and planetary tectonics is used to demonstrate that history dependence can outweigh the effects of a planet's energy content and material parameters in determining its tectonic state. The mantle convection-surface tectonics system allows multiple tectonic modes to exist for equivalent planetary parameter values. The tectonic mode of the system is then determined by its specific geologic and climatic history. This implies that models of tectonics and mantle convection will not be able to uniquely determine the tectonic mode of a terrestrial planet without the addition of historical data. Historical data exist, to variable degrees, for all four terrestrial planets within our solar system. For the Earth, the planet with the largest amount of observational data, debate does still remain regarding the geologic and climatic history of Earth's deep past, but constraints are available. For planets in other solar systems, no such constraints exist at present. The existence of multiple tectonic modes, for equivalent parameter values, points to a reason why different groups have reached different conclusions regarding the tectonic state of extrasolar terrestrial planets larger than Earth ("super-Earths"). The region of multiple stable solutions is predicted to widen in parameter space for more energetic mantle convection (as would be expected for larger planets). This means that different groups can find different solutions, all potentially viable and stable, using identical models and identical system parameter values. At a more practical level, the results argue that the question of whether extrasolar terrestrial planets will have plate tectonics is unanswerable and will remain so until the temporal evolution of extrasolar planets can be constrained.
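The history dependence argued for here is the familiar behavior of a bistable system: identical parameter values, different stable end states depending only on where the system started. A schematic toy (a double-well gradient flow, not the convection model):

```python
def settle(x0, steps=1000, dt=0.01):
    """Explicit-Euler descent on the double-well potential V(x) = (x**2 - 1)**2 / 4.
    Both attractors (x = -1 and x = +1) exist for the same parameters; which one
    is reached depends only on the history (initial condition)."""
    x = x0
    for _ in range(steps):
        x -= dt * x * (x * x - 1.0)   # dV/dx = x * (x**2 - 1)
    return x

mode_a = settle(-0.5)   # one history -> settles near -1
mode_b = settle(+0.5)   # another history, same parameters -> settles near +1
```

An observer given only the parameter values cannot predict which state the toy system occupies; that is the sense in which the paper argues tectonic mode is unrecoverable without historical constraints.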
Qualitative models for space system engineering
NASA Technical Reports Server (NTRS)
Forbus, Kenneth D.
1990-01-01
The objectives of this project were: (1) to investigate the implications of qualitative modeling techniques for problems arising in the monitoring, diagnosis, and design of Space Station subsystems and procedures; (2) to identify the issues involved in using qualitative models to enhance and automate engineering functions. These issues include representing operational criteria, fault models, alternate ontologies, and modeling continuous signals at a functional level of description; and (3) to develop a prototype collection of qualitative models for fluid and thermal systems commonly found in Space Station subsystems. Potential applications of qualitative modeling to space-systems engineering, including the notion of intelligent computer-aided engineering are summarized. Emphasis is given to determining which systems of the proposed Space Station provide the most leverage for study, given the current state of the art. Progress on using qualitative models, including development of the molecular collection ontology for reasoning about fluids, the interaction of qualitative and quantitative knowledge in analyzing thermodynamic cycles, and an experiment on building a natural language interface to qualitative reasoning is reported. Finally, some recommendations are made for future research.
Vehicle Integrated Prognostic Reasoner (VIPR) Final Report
NASA Technical Reports Server (NTRS)
Bharadwaj, Raj; Mylaraswamy, Dinkar; Cornhill, Dennis; Biswas, Gautam; Koutsoukos, Xenofon; Mack, Daniel
2013-01-01
A systems view is necessary to detect, diagnose, predict, and mitigate adverse events during the flight of an aircraft. While most aircraft subsystems look for simple threshold exceedances and report them to a central maintenance computer, the vehicle integrated prognostic reasoner (VIPR) proactively generates evidence and takes an active role in aircraft-level health assessment. Establishing the technical feasibility and a design trade-space for this next-generation vehicle-level reasoning system (VLRS) is the focus of our work.
3D Reasoning from Blocks to Stability.
Zhaoyin Jia; Gallagher, Andrew C; Saxena, Ashutosh; Chen, Tsuhan
2015-05-01
Objects occupy physical space and obey physical laws. To truly understand a scene, we must reason about the space that objects in it occupy, and how each object is stably supported by the others. In other words, we seek to understand which objects would, if moved, cause other objects to fall. This 3D volumetric reasoning is important for many scene understanding tasks, ranging from segmentation of objects to perception of rich 3D, physically well-founded interpretations of the scene. In this paper, we propose a new algorithm to parse a single RGB-D image with 3D block units while jointly reasoning about the segments, volumes, supporting relationships, and object stability. Our algorithm is based on the intuition that a good 3D representation of the scene is one that fits the depth data well, and is a stable, self-supporting arrangement of objects (i.e., one that does not topple). We design an energy function for representing the quality of the block representation based on these properties. Our algorithm fits 3D blocks to the depth values corresponding to image segments, and iteratively optimizes the energy function. Our proposed algorithm is the first to consider stability of objects in complex arrangements for reasoning about the underlying structure of the scene. Experimental results show that our stability-reasoning framework improves RGB-D segmentation and scene volumetric representation.
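The stability part of this intuition can be sketched with a toy energy term that penalizes any block whose center of mass is not over its support (a drastic 2D simplification of the paper's energy function, which operates on RGB-D segments):

```python
def is_supported(block, support):
    """block, support: (x_left, x_right) extents; a block is stable here if its
    horizontal center of mass lies within the supporting block's extent."""
    center = 0.5 * (block[0] + block[1])
    return support[0] <= center <= support[1]

def stability_energy(stack):
    """stack: list of (x_left, x_right) from bottom up; one unit of energy per
    block that would topple off the block beneath it."""
    return sum(1.0 for lower, upper in zip(stack, stack[1:])
               if not is_supported(upper, lower))

stable_cost = stability_energy([(0.0, 2.0), (0.5, 2.5)])    # 0.0: would not topple
toppling_cost = stability_energy([(0.0, 2.0), (1.5, 4.0)])  # 1.0: overhangs too far
```

In the full method this penalty is traded off against a depth-fit term, so the optimizer prefers interpretations that both match the sensor data and would physically stand.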
NASA Technical Reports Server (NTRS)
Feinberg, Lee D.; Hagopian, John; Budinoff, Jason; Dean, Bruce; Howard, Joe
2004-01-01
This paper summarizes efforts underway at the Goddard Space Flight Center to demonstrate a new type of space telescope architecture that builds on the rigid segmented telescope heritage of the James Webb Space Telescope but that solves several key challenges for future space telescopes. The architecture is based on a cost-effective segmented spherical primary mirror combined with a unique wavefront sensing and control system that allows for continuous phasing of the primary mirror. The segmented spherical primary allows for cost-effective 3-meter class (e.g., Midex and Discovery) missions as well as enables 30-meter telescope solutions that can be manufactured in a reasonable amount of time and for a reasonable amount of money. The continuous wavefront sensing and control architecture enables missions in low-earth-orbit and missions that do not require expensive stable structures and thermal control systems. For the 30-meter class applications, the paper discusses considerations for assembling and testing the telescopes in space. The paper also summarizes the scientific and technological roadmap for the architecture and also gives an overview of technology development, design studies, and testbed activities underway to demonstrate its feasibility.
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Hagopian, John; Budinoff, Jason; Dean, Bruce; Howard, Joe
2005-01-01
This paper summarizes efforts underway at the Goddard Space Flight Center to demonstrate a new type of space telescope architecture that builds on the rigid, segmented telescope heritage of the James Webb Space Telescope but that solves several key challenges for future space telescopes. The architecture is based on a cost-effective segmented spherical primary mirror combined with a unique wavefront sensing and control system that allows for continuous phasing of the primary mirror. The segmented spherical primary allows for cost-effective 3-meter class (e.g., Midex and Discovery) missions as well as enables 30-meter telescope solutions that can be manufactured in a reasonable amount of time and for a reasonable amount of money. The continuous wavefront sensing and control architecture enables missions in low-earth-orbit and missions that do not require expensive stable structures and thermal control systems. For the 30-meter class applications, the paper discusses considerations for assembling and testing the telescopes in space. The paper also summarizes the scientific and technological roadmap for the architecture and also gives an overview of technology development, design studies, and testbed activities underway to demonstrate its feasibility.
NASA Technical Reports Server (NTRS)
Kumar, A. A.; Pandey, R. K.; Fogarty, T. N.; Wilkins, R.
1994-01-01
This paper addresses the subject of dual-use space technology transfer of a novel, non-traditional material termed ilmenite, found in a large percentage in the moon rocks brought back by NASA's APOLLO missions. The paper is somewhat premature in the sense that though the material as a mineral has been known for a long time, very little is known about pure single crystal ilmenite and hence few applications have been demonstrated. Yet, in another sense, it is very timely due to the fact that ilmenite promises to be a very interesting competition to silicon, silicon carbide and other compound semiconductors, especially those that are employed in high power, high temperature and large data storage/retrieval applications. It seems to be an excellent example of a small investment-high return situation. While some of the applications of this material - for production of oxygen, for instance - have been well-known, electronic applications have received relatively little attention. One reason for this was the fact that growth of single crystal ilmenite requires precise process conditions and parameters. We believe for the first time these have been determined in the Center for Electronic Materials, Texas A&M University. The work being done at Texas A&M University and Prairie View A&M University (supported by Battelle Pacific Northwest Laboratories and the Center for Space Power) indicates the excellent potential this material has in space as well as in terrestrial applications. 
To mention a few: as a wide band gap semiconductor it has applications in high temperature, high power situations, especially when heat dissipation is a problem such as may occur in the Space Station; the possibility of this material radiating in the blue region, it has immense applications in optoelectronics; as a material with a high density of highly directional d-bands, it lends itself to novel processing conditions and perhaps even to 'tunability' of physical parameters; as a potential scintillating material, it has possible applications as a sensor in waste management; as an oxygen sensor it has possible applications in automotive electronics; and as a radiation resistant material, it has obvious applications in the space environment. Results - experimental and theoretical - obtained so far in our laboratories will be reported with particular emphasis on the transfer of technology involving this fascinating material.
Concept for an International Standard related to Space Weather Effects on Space Systems
NASA Astrophysics Data System (ADS)
Tobiska, W. Kent; Tomky, Alyssa
There is great interest in developing an international standard related to space weather in order to specify the tools and parameters needed for space systems operations. In particular, a standard is important for satellite operators who may not be familiar with space weather. In addition, there are others who participate in space systems operations that would also benefit from such a document. For example, the developers of software systems that provide LEO satellite orbit determination, radio communication availability for scintillation events (GEO-to-ground L and UHF bands), GPS uncertainties, and the radiation environment from ground-to-space for commercial space tourism. These groups require recent historical data, current epoch specification, and forecast of space weather events into their automated or manual systems. Other examples are national government agencies that rely on space weather data provided by their organizations such as those represented in the International Space Environment Service (ISES) group of 14 national agencies. Designers, manufacturers, and launchers of space systems require real-time, operational space weather parameters that can be measured, monitored, or built into automated systems. Thus, a broad scope for the document will provide a useful international standard product to a variety of engineering and science domains. The structure of the document should contain a well-defined scope, consensus space weather terms and definitions, and internationally accepted descriptions of the main elements of space weather, its sources, and its effects upon space systems. 
Appendices will be useful for describing expanded material such as guidelines on how to use the standard, how to obtain specific space weather parameters, and short but detailed descriptions such as when best to use some parameters and not others; appendices provide a path for easily updating the standard since the domain of space weather is rapidly changing with new advances in scientific and engineering understanding. We present a draft outline that can be used as the basis for such a standard.
NASA Technical Reports Server (NTRS)
1981-01-01
Reasonable space systems concepts were systematically identified and defined, and a total system was evaluated for the space disposal of nuclear wastes. Areas studied include space destinations, space transportation options, launch site options, payload protection approaches, and payload rescue techniques. Systems-level cost and performance trades defined four alternative space systems which deliver payloads to the selected 0.85 AU heliocentric orbit destination at least as economically as the reference system without requiring removal of the protective radiation shield container. No concepts significantly less costly than the reference concept were identified.
[Optimize dropping process of Ginkgo biloba dropping pills by using design space approach].
Shen, Ji-Chen; Wang, Qing-Qing; Chen, An; Pan, Fang-Lai; Gong, Xing-Chu; Qu, Hai-Bin
2017-07-01
In this paper, a design space approach was applied to optimize the dropping process of Ginkgo biloba dropping pills. Firstly, potential critical process parameters and potential process critical quality attributes were determined through literature research and pre-experiments. Secondly, experiments were carried out according to Box-Behnken design. Then the critical process parameters and critical quality attributes were determined based on the experimental results. Thirdly, second-order polynomial models were used to describe the quantitative relationships between critical process parameters and critical quality attributes. Finally, a probability-based design space was calculated and verified. The verification results showed that efficient production of Ginkgo biloba dropping pills can be guaranteed by operating within the design space parameters. The recommended operation ranges for the critical dropping process parameters of Ginkgo biloba dropping pills were as follows: dropping distance of 5.5-6.7 cm, and dropping speed of 59-60 drops per minute, providing a reference for industrial production of Ginkgo biloba dropping pills. Copyright© by the Chinese Pharmaceutical Association.
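The second-order polynomial models used in such design-space studies have a standard form relating the critical process parameters to each quality attribute; a sketch with placeholder coefficients (not the fitted values for the dropping process):

```python
def quadratic_response(x1, x2, b):
    """y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1**2 + b22*x2**2,
    the full second-order model fitted from Box-Behnken data."""
    b0, b1, b2, b12, b11, b22 = b
    return (b0 + b1 * x1 + b2 * x2
            + b12 * x1 * x2 + b11 * x1 ** 2 + b22 * x2 ** 2)

# hypothetical model: a quality attribute as a function of dropping distance (cm)
# and dropping speed (drops/min); coefficients are illustrative placeholders
coeffs = (5.0, -0.8, 0.1, 0.02, 0.05, -0.001)
y_at_setpoint = quadratic_response(6.0, 60.0, coeffs)
```

Once such models are fitted, the probability-based design space is obtained by propagating the models' residual uncertainty over the parameter grid and keeping points whose predicted success probability exceeds the acceptance threshold.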
Parameter space of experimental chaotic circuits with high-precision control parameters.
de Sousa, Francisco F G; Rubinger, Rero M; Sartorelli, José C; Albuquerque, Holokx A; Baptista, Murilo S
2016-08-01
We report high-resolution measurements that experimentally confirm a spiral cascade structure and a scaling relationship of shrimps in Chua's circuit. Circuits constructed using this component allow for a comprehensive characterization of the circuit behaviors through high-resolution parameter spaces. To illustrate the power of our technological development for the creation and the study of chaotic circuits, we constructed a Chua circuit and studied its high-resolution parameter space. The reliability and stability of the designed component allowed us to obtain data for long periods of time (∼21 weeks), a data set from which an accurate estimation of Lyapunov exponents for the circuit characterization was possible. Moreover, these data, rigorously characterized by the Lyapunov exponents, allow us to reassure experimentally that the shrimps, stable islands embedded in a domain of chaos in the parameter spaces, can be observed in the laboratory. Finally, we confirm that their sizes decay exponentially with the period of the attractor, a result expected to be found in maps of the quadratic family.
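The Lyapunov-exponent characterization used to separate the chaotic domain from the stable shrimp islands can be illustrated on the logistic map, where the exponent follows directly from the map's derivative along an orbit (for the circuit, it must instead be estimated from measured time series):

```python
import math

def lyapunov_logistic(r, x0=0.4, n=5000, burn=500):
    """Average of log |f'(x)| along an orbit of f(x) = r*x*(1-x); positive values
    indicate chaos, negative values a stable periodic attractor."""
    x, total = x0, 0.0
    for i in range(n + burn):
        deriv = abs(r * (1.0 - 2.0 * x))
        if i >= burn:
            total += math.log(max(deriv, 1e-300))  # guard against deriv == 0
        x = r * x * (1.0 - x)
    return total / n

chaotic = lyapunov_logistic(4.0)    # fully chaotic regime: exponent near log 2 > 0
periodic = lyapunov_logistic(3.2)   # stable period-2 orbit: exponent < 0
```

Scanning such an exponent over a two-parameter grid is exactly how the periodic "shrimp" islands are made visible against the chaotic background in experimental parameter-space plots.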
ESTIMATION OF PHYSICAL PROPERTIES AND CHEMICAL REACTIVITY PARAMETERS OF ORGANIC COMPOUNDS
The computer program SPARC (Sparc Performs Automated Reasoning in Chemistry) has been under development for several years to estimate physical properties and chemical reactivity parameters of organic compounds strictly from molecular structure. SPARC uses computational algorithms ...
ERIC Educational Resources Information Center
Bodzin, Alec M.; Fu, Qiong; Bressler, Denise; Vallera, Farah L.
2015-01-01
Geospatially enabled learning technologies may enhance Earth science learning by placing emphasis on geographic space, visualization, scale, representation, and geospatial thinking and reasoning (GTR) skills. This study examined if and how a series of Web geographic information system investigations that the researchers developed improved urban…
The quantum measurement of time
NASA Technical Reports Server (NTRS)
Shepard, Scott R.
1994-01-01
Traditionally, in non-relativistic Quantum Mechanics, time is considered to be a parameter, rather than an observable quantity like space. In relativistic Quantum Field Theory, space and time are treated equally by reducing space to also be a parameter. Herein, after a brief review of other measurements, we describe a third possibility, which is to treat time as a directly observable quantity.
14 CFR 169.1 - Applicability.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Applicability. 169.1 Section 169.1 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRPORTS... that a proposed project is reasonably necessary for use in air commerce or in the interests of national...
14 CFR § 1230.102 - Definitions.
Code of Federal Regulations, 2014 CFR
2014-01-01
....102 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION PROTECTION OF HUMAN SUBJECTS... by the Department of Labor). (f) Human subject means a living individual about whom an investigator... behavior that occurs in a context in which an individual can reasonably expect that no observation or...
NASA Astrophysics Data System (ADS)
Lev, Dorman
2016-07-01
Satellite anomalies (or malfunctions), including total failure of electronics and loss of entire satellites, cost insurance companies billions of dollars per year. During especially active periods, the probability of major satellite anomalies and losses increases greatly. Now that a great number of civil and military satellites operate continuously in support of everyday life, the problem of satellite anomalies has become very important. Many years ago about half of satellite anomalies were caused by technical faults (for example, on the Russian Kosmos satellites), but over time, as production quality improved, this fraction became smaller and smaller. The remaining part, which now dominates, is caused by various space weather effects (energetic particles of cosmic rays, particles generated or trapped in the magnetosphere, and so on). We consider only satellite anomalies not caused by technical faults: about 6000 such events in total, considered separately for high- and low-altitude-orbit satellites (about 5000 and about 800 events, respectively). No relation was found between low- and high-altitude satellite anomalies. Daily numbers of satellite anomalies, averaged by a superposed-epoch method around sudden storm commencements and solar proton event onsets for high-altitude (>1500 km) and low-altitude (<1500 km) orbits, revealed a large difference in behavior. Satellites were divided into several groups according to orbital characteristics (altitude and inclination). The relation of satellite anomalies to environmental parameters was found to differ among orbits, which should be taken into account when developing anomaly-frequency models and forecasts.
We also consider the influence of cosmic rays (CR) on the frequency of gene mutations and the evolution of the biosphere (we show that without CR, Earth's civilization would have begun only billions of years later, which would have been too late), the role of CR in thunderstorm phenomena and discharges, space weather effects on space technologies, radiation effects from solar and galactic CR as functions of cutoff rigidity and altitude, the influence of magnetic storms accompanied by CR Forbush decreases on human health (increased frequency of myocardial infarctions and brain strokes), increased frequency of car accidents (possibly through the human factor), increased frequency of malfunctions in railway operations (possibly through induced currents), and catastrophes in long-distance electric power lines and transformers and in other ground-based technologies.
Brown, Guy C
2010-10-01
Control analysis can be used to try to understand why (quantitatively) systems are the way that they are, from rate constants within proteins to the relative amount of different tissues in organisms. Many biological parameters appear to be optimized to maximize rates under the constraint of minimizing space utilization. For any biological process with multiple steps that compete for control in series, evolution by natural selection will tend to even out the control exerted by each step. This is for two reasons: (i) shared control maximizes the flux for minimum protein concentration, and (ii) the selection pressure on any step is proportional to its control, and selection will, by increasing the rate of a step (relative to other steps), decrease its control over a pathway. The control coefficient of a parameter P over fitness can be defined as (∂N/N)/(∂P/P), where N is the number of individuals in the population, and ∂N is the change in that number as a result of the change in P. This control coefficient is equal to the selection pressure on P. I argue that biological systems optimized by natural selection will conform to a principle of sufficiency, such that the control coefficient of all parameters over fitness is 0. Thus in an optimized system small changes in parameters will have a negligible effect on fitness. This principle naturally leads to (and is supported by) the dominance of wild-type alleles over null mutants.
A prototype case-based reasoning human assistant for space crew assessment and mission management
NASA Technical Reports Server (NTRS)
Owen, Robert B.; Holland, Albert W.; Wood, Joanna
1993-01-01
We present a prototype human assistant system for space crew assessment and mission management. Our system is based on case episodes from American and Russian space missions and analog environments such as polar stations and undersea habitats. The general domain of small groups in isolated and confined environments represents a near ideal application area for case-based reasoning (CBR) - there are few reliable rules to follow, and most domain knowledge is in the form of cases. We define the problem domain and outline a unique knowledge representation system driven by conflict and communication triggers. The prototype system is able to represent, index, and retrieve case studies of human performance. We index by social, behavioral, and environmental factors. We present the problem domain, our current implementation, our research approach for an operational system, and prototype performance and results.
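The retrieval step of such a case library can be sketched as nearest-neighbor matching over the indexing factors. The feature names, values, and case episodes below are invented for illustration and are not from the authors' case base.

```python
from dataclasses import dataclass

# Minimal case-based-reasoning retrieval sketch: cases indexed by social,
# behavioral, and environmental features; retrieval is nearest-neighbor
# over those features. All names and numbers are hypothetical.
@dataclass
class Case:
    name: str
    features: dict  # feature name -> normalized value in [0, 1]
    outcome: str

def similarity(a: dict, b: dict) -> float:
    """Mean per-feature closeness over the shared feature keys."""
    keys = set(a) & set(b)
    return sum(1.0 - abs(a[k] - b[k]) for k in keys) / max(len(keys), 1)

def retrieve(library, query, k=1):
    return sorted(library, key=lambda c: similarity(c.features, query),
                  reverse=True)[:k]

library = [
    Case("polar-station-7", {"crowding": 0.8, "comm_delay": 0.2, "conflict": 0.7}, "mediated debrief"),
    Case("undersea-hab-3",  {"crowding": 0.4, "comm_delay": 0.1, "conflict": 0.2}, "no intervention"),
    Case("mir-increment",   {"crowding": 0.6, "comm_delay": 0.9, "conflict": 0.6}, "schedule relief"),
]
best = retrieve(library, {"crowding": 0.7, "comm_delay": 0.8, "conflict": 0.5})[0]
print(best.name, "->", best.outcome)
```

A production system would add the conflict and communication triggers described above as indexing predicates, rather than relying on distance alone.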
Approximate reasoning-based learning and control for proximity operations and docking in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Jani, Yashvant; Lea, Robert N.
1991-01-01
A recently proposed hybrid neural-network and fuzzy-logic-control architecture is applied to a fuzzy logic controller developed for attitude control of the Space Shuttle. A model using reinforcement learning and learning from past experience for fine-tuning its knowledge base is proposed. The two main components of this approximate reasoning-based intelligent control (ARIC) model, an action-state evaluation network and an action selection network, are described, as well as the Space Shuttle attitude controller. An ARIC model for the controller is presented, and it is noted that the input layer in each network includes three nodes representing the angle error, the angle error rate, and a bias node. Preliminary results indicate that the controller can hold the pitch rate within its desired deadband and starts to use the jets at about 500 sec into the run.
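The fuzzy-inference part of such a controller can be sketched with triangular memberships over angle error and error rate and a small rule table. This is a generic fuzzy controller for illustration only, not ARIC's learning architecture or the Shuttle's actual rule base.

```python
# Minimal fuzzy attitude-control sketch: inputs are angle error and error
# rate on a normalized [-1, 1] universe; output is a jet command in [-1, 1].
# Membership shapes, rule table, and scaling are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def memberships(x):
    # negative / zero / positive fuzzy sets
    return {"N": tri(x, -2.0, -1.0, 0.0), "Z": tri(x, -1.0, 0.0, 1.0),
            "P": tri(x, 0.0, 1.0, 2.0)}

# Rule table: (error label, rate label) -> crisp rule output
RULES = {("N", "N"): 1.0, ("N", "Z"): 1.0, ("N", "P"): 0.0,
         ("Z", "N"): 1.0, ("Z", "Z"): 0.0, ("Z", "P"): -1.0,
         ("P", "N"): 0.0, ("P", "Z"): -1.0, ("P", "P"): -1.0}

def jet_command(error, rate):
    me, mr = memberships(error), memberships(rate)
    num = den = 0.0
    for (e_lbl, r_lbl), out in RULES.items():
        w = min(me[e_lbl], mr[r_lbl])  # rule firing strength (min-AND)
        num += w * out
        den += w
    return num / den if den else 0.0   # weighted-average defuzzification

print(jet_command(0.0, 0.0))  # inside the deadband: no jet firing
print(jet_command(0.8, 0.1))  # positive error: corrective (negative) command
```

In ARIC, the memberships and rule strengths would additionally be tuned online by the evaluation and action networks via reinforcement learning.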
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture and the moment, stress, and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length), and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function and then explore the joint a-posteriori probability density function associated with the cost function around this minimum, to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum-likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. Numerical integration of the pdf finally provides the mean, variance, and correlation matrix associated with the set of best-fit parameters describing the model.
Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data-space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
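The inversion scheme described above can be sketched end to end on synthetic data: a generalized Brune-type spectrum with three source parameters, an L2 misfit, and scipy's basin-hopping for the global exploration. The "true" parameter values and noise level below are illustrative, not from the Central Italy data set.

```python
import numpy as np
from scipy.optimize import basinhopping

rng = np.random.default_rng(1)
f = np.logspace(-1, 1.5, 80)  # frequencies, 0.1 to ~31.6 Hz

def log_spectrum(params, f):
    """Generalized Brune-type spectrum, log amplitude of
    omega0 / (1 + (f/fc)**gamma), parameterized by (log omega0, log fc, gamma)."""
    log_omega0, log_fc, gamma = params
    return log_omega0 - np.log(1.0 + (f / np.exp(log_fc)) ** gamma)

true = (np.log(5e-6), np.log(2.0), 2.0)  # illustrative: fc = 2 Hz, gamma = 2
data = log_spectrum(true, f) + rng.normal(0.0, 0.05, f.size)  # ~5% noise

def misfit(params):
    return np.sum((log_spectrum(params, f) - data) ** 2)  # L2 cost function

# Basin-hopping: deterministic local minimization plus random jumps,
# as in the global-exploration stage described above.
best = basinhopping(misfit, x0=[np.log(1e-6), np.log(1.0), 1.5], niter=50).x
print(f"fc = {np.exp(best[1]):.2f} Hz, gamma = {best[2]:.2f}")
```

The probabilistic step would then evaluate the joint a-posteriori pdf on a grid around `best` and integrate it numerically to obtain means, variances, and the correlation matrix.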
An open-source job management framework for parameter-space exploration: OACIS
NASA Astrophysics Data System (ADS)
Murase, Y.; Uchitane, T.; Ito, N.
2017-11-01
We present an open-source software framework for parameter-space exploration, named OACIS, which is useful for managing vast numbers of simulation jobs and results in a systematic way. Recent development of high-performance computers has enabled us to explore parameter spaces comprehensively; however, in such cases, manual management of the workflow is practically impossible. OACIS was developed to reduce the cost of these repetitive tasks by automating job submission and data management. In this article, an overview of OACIS as well as a getting-started guide are presented.
Finite-element reentry heat-transfer analysis of space shuttle Orbiter
NASA Technical Reports Server (NTRS)
Ko, William L.; Quinn, Robert D.; Gong, Leslie
1986-01-01
A structural performance and resizing (SPAR) finite-element thermal analysis computer program was used in the heat-transfer analysis of the space shuttle orbiter subjected to reentry aerodynamic heating. Three wing cross sections and one midfuselage cross section were selected for the thermal analysis. The predicted thermal protection system temperatures were found to agree well with flight-measured temperatures. The calculated aluminum structural temperatures also agreed reasonably well with the flight data from reentry to touchdown. The effects of internal radiation and of internal convection were found to be significant. The SPAR finite-element solutions agreed reasonably well with those obtained from the conventional finite-difference method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komolova, G.S.; Makeeva, V.F.; Egorov, I.A.
1978-10-26
A prime role in the mechanism of radiation lesions in living cells is attributed to disturbances in the DNA structure of nuclear chromatin. For this reason, it may be assumed that a change in the biological effectiveness of radiation delivered to animals under conditions that are extreme for vital functions, including space flight, may occur via modification of radiation lesions in chromatin DNA. Accordingly, the DNA of rats exposed to radiation from an onboard gamma source in the course of an actual space flight on the Kosmos-690 biosatellite was examined.
Method of measuring the dc electric field and other tokamak parameters
Fisch, Nathaniel J.; Kirtz, Arnold H.
1992-01-01
A method comprising externally imposing an impulsive momentum-space flux to perturb hot tokamak electrons, thereby producing a transient synchrotron radiation signal in frequency-time space, and inferring, using very fast algorithms, plasma parameters including the effective ion charge state Z_eff, the direction of the magnetic field, the position and width in velocity space of the impulsive momentum-space flux, and, in particular, the dc toroidal electric field.
14 CFR 169.5 - FAA determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false FAA determination. 169.5 Section 169.5 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRPORTS... facility is reasonably necessary for use in air commerce or in the interests of national defense; that it...
14 CFR 271.6 - Profit element.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Profit element. 271.6 Section 271.6 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC... element. The reasonable return for a carrier for providing essential air service at an eligible place...
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.; Tubbs, J. D.
1982-01-01
A five-parameter bivariate gamma distribution (BGD) having two shape parameters, two location parameters, and a correlation parameter is investigated. This general BGD is expressed as a double series and as a single series of the modified Bessel function. It reduces to the known special case for equal shape parameters. Practical functions for computer evaluation of the general BGD and of special cases are presented. Applications to wind gust modeling for the ascent flight of the space shuttle are illustrated.
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost of estimating thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, evaluated either from modified simulation code or as the difference in energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. The existence of regions of poor configuration space overlap is detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water.
We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
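The linear-basis-function trick can be shown in miniature on a harmonic model, where the reweighting result is checkable analytically. For brevity the sketch uses single-state exponential (Zwanzig) reweighting rather than MBAR; the basis-energy idea, evaluating the potential at any parameter value from stored basis energies without new sampling, is the same.

```python
import numpy as np

# Miniature of the linear-basis idea: U(x; k) = k * u_basis(x) with
# u_basis(x) = x**2 / 2, so the energy at ANY spring constant k comes from
# one stored array of basis energies. Free energies at unsampled k are then
# estimated by reweighting (Zwanzig here; the paper uses MBAR).
rng = np.random.default_rng(0)
BETA = 1.0
K_REF = 1.0

# Sample the reference state U = K_REF * x^2/2 exactly (it is Gaussian).
x = rng.normal(0.0, 1.0 / np.sqrt(BETA * K_REF), size=200_000)
u_basis = 0.5 * x**2  # basis energies, computed once

def delta_f(k):
    """Free-energy difference F(k) - F(K_REF) by exponential reweighting."""
    du = (k - K_REF) * u_basis  # energy change is a linear combination
    return -np.log(np.mean(np.exp(-BETA * du))) / BETA

# Analytic check for this harmonic model: F(k) - F(1) = 0.5 * ln(k).
for k in (0.8, 1.0, 2.0):
    print(f"k={k}: est={delta_f(k):+.3f}  exact={0.5 * np.log(k):+.3f}")
```

MBAR replaces the single reference state with all sampled states at once, which is what makes the adaptive overlap monitoring described above necessary.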
NASA Technical Reports Server (NTRS)
Kosmo, Joseph J.
2006-01-01
This viewgraph presentation describes the basic functions of space suits for EVA astronauts. Space suits are also described from the past, present and future space missions. The contents include: 1) Why Do You Need A Space Suit?; 2) Generic EVA System Requirements; 3) Apollo Lunar Surface Cycling Certification; 4) EVA Operating Cycles for Mars Surface Missions; 5) Mars Surface EVA Mission Cycle Requirements; 6) Robustness Durability Requirements Comparison; 7) Carry-Weight Capabilities; 8) EVA System Challenges (Mars); 9) Human Planetary Surface Exploration Experience; 10) NASA Johnson Space Center Planetary Analog Activities; 11) Why Perform Remote Field Tests; and 12) Other Reasons Why We Perform Remote Field Tests.
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with the high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
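A one-dimensional miniature of the interpolation idea: replace Monte-Carlo sampling of a parameter with polynomial interpolation of the parameter-to-solution map at Chebyshev nodes. The real scheme is adaptive Smolyak sparse-grid interpolation in hundreds of dimensions; the first-order-decay "kinetic model" below is only an illustration.

```python
import numpy as np

def solution(k, t=1.0):
    """Parametric model output: first-order decay x(t) = exp(-k t) at time t."""
    return np.exp(-k * t)

K_LO, K_HI, N = 0.1, 5.0, 12
# Chebyshev (first-kind) nodes mapped onto the parameter interval
theta = np.pi * (2 * np.arange(N) + 1) / (2 * N)
nodes = 0.5 * (K_LO + K_HI) + 0.5 * (K_HI - K_LO) * np.cos(theta)

# Polynomial surrogate of the parameter-to-solution map from N model solves
surrogate = np.polynomial.Chebyshev.fit(nodes, solution(nodes), deg=N - 1)

k_test = np.linspace(K_LO, K_HI, 200)
err = np.max(np.abs(surrogate(k_test) - solution(k_test)))
print(f"max surrogate error from {N} model solves: {err:.1e}")
```

For a smooth parameter dependence the error decays geometrically with the number of solves, which is the convergence advantage over Monte-Carlo sampling that the abstract refers to.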
Oliveira, G M; de Oliveira, P P; Omar, N
2001-01-01
Cellular automata (CA) are important as prototypical, spatially extended, discrete dynamical systems. Because the problem of forecasting dynamic behavior of CA is undecidable, various parameter-based approximations have been developed to address the problem. Out of the analysis of the most important parameters available to this end we proposed some guidelines that should be followed when defining a parameter of that kind. Based upon the guidelines, new parameters were proposed and a set of five parameters was selected; two of them were drawn from the literature and three are new ones, defined here. This article presents all of them and makes their qualities evident. Then, two results are described, related to the use of the parameter set in the Elementary Rule Space: a phase transition diagram, and some general heuristics for forecasting the dynamics of one-dimensional CA. Finally, as an example of the application of the selected parameters in high cardinality spaces, results are presented from experiments involving the evolution of radius-3 CA in the Density Classification Task, and radius-2 CA in the Synchronization Task.
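A concrete example of a forecasting parameter of this kind is Langton's lambda, the fraction of non-quiescent entries in a rule table, one of the classic parameters in this literature (the paper's own five-parameter set is not reproduced here). The sketch computes lambda for elementary rules and runs one of them:

```python
import numpy as np

def rule_table(rule_number):
    """Wolfram rule number -> output bits indexed by neighborhood value 0..7."""
    return [(rule_number >> i) & 1 for i in range(8)]

def langton_lambda(rule_number):
    """Fraction of non-quiescent (non-zero) outputs in the rule table."""
    return sum(rule_table(rule_number)) / 8.0

def step(cells, table):
    """One synchronous update of a cyclic elementary (radius-1, binary) CA."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    return np.asarray(table)[4 * left + 2 * cells + right]

# Rule 110 (lambda = 5/8) shows complex dynamics; rule 0 (lambda = 0) is fixed.
print(langton_lambda(110), langton_lambda(0))

rng = np.random.default_rng(0)
cells = rng.integers(0, 2, size=64)
table = rule_table(110)
for _ in range(100):
    cells = step(cells, table)
print("live cells after 100 steps of rule 110:", int(cells.sum()))
```

Parameters like lambda are cheap static functions of the rule table, which is why they are attractive, if imperfect, proxies for the undecidable dynamical classification problem.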
From Active Citizenship to World Citizenship: A Proposal for a World University
ERIC Educational Resources Information Center
Masschelein, Jan; Simons, Maarten
2009-01-01
This article explores how universities can function as spaces where a world citizenship takes shape. First, Kant's distinction between the "private use of reason" and "domestic gathering", on the one hand, and the "public use of reason" and "public gathering", on the other, is elucidated. This distinction is used, secondly, to argue that the…
Cultural Commonalities and Differences in Spatial Problem-Solving: A Computational Analysis
ERIC Educational Resources Information Center
Lovett, Andrew; Forbus, Kenneth
2011-01-01
A fundamental question in human cognition is how people reason about space. We use a computational model to explore cross-cultural commonalities and differences in spatial cognition. Our model is based upon two hypotheses: (1) the structure-mapping model of analogy can explain the visual comparisons used in spatial reasoning; and (2) qualitative,…
An Optimized Trajectory Planning for Welding Robot
NASA Astrophysics Data System (ADS)
Chen, Zhilong; Wang, Jun; Li, Shuting; Ren, Jun; Wang, Quan; Cheng, Qunchao; Li, Wentao
2018-03-01
In order to improve welding efficiency and quality, this paper studies the combined planning of welding process parameters and the space trajectory of a welding robot and proposes a trajectory-planning method with high real-time performance, strong controllability, and small welding error. By adding a virtual joint at the end-effector, an appropriate virtual-joint model is established in which the welding process parameters are represented by the virtual joint variables. Trajectory planning is carried out in the robot joint space, which makes control of the welding process parameters more intuitive and convenient. Using the virtual-joint model together with the affine invariance of B-spline curves, the welding process parameters are controlled indirectly by controlling the motion curves of the real joints. The welding process parameters and the joint-space trajectory are then planned jointly, optimizing for minimum time.
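The virtual-joint idea can be sketched with scipy: append a hypothetical process parameter (a wire-feed rate here, purely illustrative) to the real joint vector and plan the augmented vector with a single cubic B-spline in joint space.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Waypoints for two real joints plus one "virtual joint" carrying a welding
# process parameter. All joint angles, feed rates, and timings are invented
# for illustration; they are not the paper's planning problem.
t_way = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # seconds
waypoints = np.array([
    # q1 (rad), q2 (rad), virtual joint = wire feed (m/min)
    [0.00, 0.50, 2.0],
    [0.30, 0.60, 2.5],
    [0.55, 0.75, 3.0],
    [0.70, 0.65, 2.5],
    [0.80, 0.50, 2.0],
])
# One cubic B-spline plans all real and virtual joints simultaneously.
spline = make_interp_spline(t_way, waypoints, k=3)

t = np.linspace(0.0, 4.0, 81)
traj = spline(t)         # (81, 3): smooth joint motion + process parameter
rates = spline(t, nu=1)  # derivative: joint velocities + feed-rate slew
print("feed rate at t = 2.0 s:", round(float(spline(2.0)[2]), 3))
```

Because the process parameter rides along as an ordinary joint coordinate, any joint-space optimization (e.g. time-optimal scaling) constrains it with the same machinery as the real joints.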
Pan, Xiao-Jie; Ou, De-Bin; Lin, Xing; Ye, Ming-Fang
2017-06-01
Residual air space problems after pulmonary lobectomy are an important concern in thoracic surgical practice, and various procedures have been applied to manage them. This study describes a novel technique using controllable paralysis of the diaphragm by localized freezing of the phrenic nerve, and assesses the effectiveness of this procedure to reduce air space after pulmonary lobectomy. In this prospective randomized study, 207 patients who underwent lobectomy or bilobectomy and systematic mediastinal node dissection in our department between January 2009 and November 2013 were randomly allocated to a cryoneuroablation group or a conventional group. Patients in the cryoneuroablation group (n = 104) received phrenic nerve cryoneuroablation after lung procedures, and patients in the conventional group (n = 103) did not receive cryoneuroablation after the procedure. Data regarding preoperative clinical and surgical characteristics in both groups were collected. Both groups were compared with regard to postoperative parameters such as total amount of pleural drainage, duration of chest tube placement, length of hospital stay, requirement for repeat chest drain insertion, prolonged air leak, and residual space. Perioperative lung function was also compared in both groups. Recovery of diaphragmatic movement in the cryoneuroablation group was checked by fluoroscopy on the 15th, 30th, and 60th day after surgery. There was no statistically significant difference in patient characteristics between the 2 groups; nor was there a difference in terms of hospital stay, new drain requirement, and incidence of empyema. In comparison with the conventional group, the cryoneuroablation group had less total drainage (1024 ± 562 vs 1520 ± 631 mL, P < .05), fewer cases of residual space (9 vs 2, P < .05), fewer cases of prolonged air leak (9 vs 1, P < .01), and shorter duration of drainage (3.2 ± 0.2 vs 4.3 ± 0.3 days, P < .01).
Diaphragmatic paralyses caused by cryoneuroablation reversed within 30 to 60 days. Cryoneuroablation of the phrenic nerve offers a reasonable option for prevention of residual air space following major pulmonary resection.
The Mice Drawer System (MDS) experiment and the space endurance record-breaking mice.
Cancedda, Ranieri; Liu, Yi; Ruggiu, Alessandra; Tavella, Sara; Biticchi, Roberta; Santucci, Daniela; Schwartz, Silvia; Ciparelli, Paolo; Falcetti, Giancarlo; Tenconi, Chiara; Cotronei, Vittorio; Pignataro, Salvatore
2012-01-01
The Italian Space Agency, in line with its scientific strategies and the National Utilization Plan for the International Space Station (ISS), contracted Thales Alenia Space Italia to design and build a spaceflight payload for rodent research on the ISS: the Mice Drawer System (MDS). The payload, to be integrated inside the Space Shuttle middeck during transportation and inside the Express Rack on the ISS during experiment execution, was designed to function autonomously for more than 3 months and to involve the crew only for maintenance activities. In its first mission, three wild type (Wt) and three transgenic male mice over-expressing pleiotrophin under the control of a bone-specific promoter (PTN-Tg) were housed in the MDS. At the time of launch, the animals were 2 months old. MDS reached the ISS on board Shuttle Discovery Flight 17A/STS-128 on August 28th, 2009. MDS returned to Earth on November 27th, 2009 with Shuttle Atlantis Flight ULF3/STS-129 after 91 days, achieving the longest stay of mice in space. Unfortunately, during the MDS mission, one PTN-Tg and two Wt mice died due to health status or payload-related reasons. The remaining mice showed normal behavior throughout the experiment and appeared in excellent health at landing. During the experiment, the mice's health conditions and their water and food consumption were checked daily. Upon landing, the mice were sacrificed, blood parameters were measured, and tissues were dissected for subsequent analysis. To obtain as much information as possible on microgravity-induced tissue modifications, we organized a Tissue Sharing Program in which 20 research groups from 6 countries participated. In order to distinguish between possible effects of the MDS housing conditions and effects due to the near-zero-gravity environment, a ground replica of the flight experiment was performed at the University of Genova. Control tissues were also collected from mice maintained on Earth in standard vivarium cages.
PMID:22666312
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan
2014-11-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDFs), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data.
Thus the primary reason for the poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to structural errors in RANS.
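The calibration loop described above (a cheap surrogate standing in for the expensive simulator, MCMC sampling the parameter posterior) can be sketched in miniature. Everything below is invented for illustration and is not the authors' code or data: a cubic polynomial surrogate replaces the "expensive" model, and a single parameter is calibrated by random-walk Metropolis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expensive model": observable y as a function of parameter c.
def expensive_model(c):
    return c**2 + 0.5 * c

# Build a cheap polynomial surrogate from a handful of training runs.
c_train = np.linspace(0.0, 2.0, 9)
y_train = np.array([expensive_model(c) for c in c_train])
coeffs = np.polyfit(c_train, y_train, 3)
surrogate = lambda c: np.polyval(coeffs, c)

# Synthetic calibration data generated at c_true with Gaussian noise.
c_true, sigma = 1.2, 0.05
y_obs = expensive_model(c_true) + rng.normal(0, sigma, size=20)

def log_post(c):
    if not (0.0 <= c <= 2.0):          # uniform prior on [0, 2]
        return -np.inf
    r = y_obs - surrogate(c)           # surrogate replaces the expensive run
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis sampling of the posterior PDF of c.
samples, c = [], 1.0
lp = log_post(c)
for _ in range(20000):
    prop = c + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        c, lp = prop, lp_prop
    samples.append(c)

posterior = np.array(samples[5000:])
print(posterior.mean())  # close to c_true = 1.2
```

Each MCMC step evaluates only the surrogate, which is the point of the construction: the expensive model is run just nine times, to build the training set.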
NASA Technical Reports Server (NTRS)
Park, A.; Dominek, A. K.
1990-01-01
Constitutive parameter extraction from S-parameter data using a rectangular waveguide whose cross section is partially filled with a material sample, as opposed to being completely filled, was examined. One reason for studying a partially filled geometry is to analyze the effect of air gaps between the sample and fixture on the extraction of constitutive parameters. Air gaps can occur in high-temperature parameter measurements when the sample was prepared at room temperature. Single-port and two-port measurement approaches to parameter extraction are also discussed.
14 CFR 77.67 - Final decision of the Administrator.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Final decision of the Administrator. 77.67 Section 77.67 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION... reasons for it. He then issues an appropriate order to be served on each of the parties. ...
14 CFR 61.27 - Voluntary surrender or exchange of certificate.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Voluntary surrender or exchange of certificate. 61.27 Section 61.27 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... my own reasons, with full knowledge that my (insert name of certificate or rating, as appropriate...
14 CFR 61.27 - Voluntary surrender or exchange of certificate.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 2 2011-01-01 2011-01-01 false Voluntary surrender or exchange of certificate. 61.27 Section 61.27 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... my own reasons, with full knowledge that my (insert name of certificate or rating, as appropriate...
14 CFR 1275.105 - Conduct of the OIG investigation of research misconduct.
Code of Federal Regulations, 2010 CFR
2010-01-01
... research misconduct. 1275.105 Section 1275.105 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION RESEARCH MISCONDUCT § 1275.105 Conduct of the OIG investigation of research misconduct. (a) The OIG shall make every reasonable effort to complete a NASA research misconduct investigation and issue...
14 CFR 1275.105 - Conduct of the OIG investigation of research misconduct.
Code of Federal Regulations, 2011 CFR
2011-01-01
... misconduct. 1275.105 Section 1275.105 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION RESEARCH MISCONDUCT § 1275.105 Conduct of the OIG investigation of research misconduct. (a) The OIG shall make every reasonable effort to complete a NASA research misconduct investigation and issue a report...
14 CFR 1275.105 - Conduct of the OIG investigation of research misconduct.
Code of Federal Regulations, 2013 CFR
2013-01-01
... research misconduct. 1275.105 Section 1275.105 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION RESEARCH MISCONDUCT § 1275.105 Conduct of the OIG investigation of research misconduct. (a) The OIG shall make every reasonable effort to complete a NASA research misconduct investigation and issue...
14 CFR 1275.105 - Conduct of the OIG investigation of research misconduct.
Code of Federal Regulations, 2012 CFR
2012-01-01
... research misconduct. 1275.105 Section 1275.105 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION RESEARCH MISCONDUCT § 1275.105 Conduct of the OIG investigation of research misconduct. (a) The OIG shall make every reasonable effort to complete a NASA research misconduct investigation and issue...
Teaching Chemistry Using the Movie "Apollo 13."
ERIC Educational Resources Information Center
Goll, James G.; Woods, B. J.
1999-01-01
Offers suggestions for incorporating topics that relate to the Apollo 13 space mission into a chemistry course. Discusses connections between the study of chemistry and space exploration, including fuels and oxidants used, reasons for an oxygen tank rupture, and lithium hydroxide-containing carbon dioxide filters. Contains 11 references. (WRM)
14 CFR 1203.400 - Specific classifying guidance.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Information or material which is important to the national security of the United States in relation to other... Section 1203.400 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY... reasonably be expected to cause damage to the national security. In cases where it is believed that a...
78 FR 20318 - Appraisal Subcommittee; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-04
... government- issued photo ID and must agree to submit to reasonable security measures. The meeting space is intended to accommodate public attendees. However, if the space will not accommodate all requests, the ASC... at ASC meetings. Dated: March 29, 2013. James R. Park, Executive Director. [FR Doc. 2013-07754 Filed...
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting: both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
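The idea of a data-informed parameter subspace can be illustrated on a linear-Gaussian caricature (all sizes and numbers below are invented, not from the paper): the directions worth keeping are eigenvectors of the prior-preconditioned Gauss-Newton Hessian whose eigenvalues exceed the prior's unit scale.

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear-Gaussian caricature of a Bayesian inverse problem: y = A x + noise,
# prior x ~ N(0, I), noise ~ N(0, sigma^2 I).  The data-informed parameter
# directions are eigenvectors of the (prior-preconditioned) Gauss-Newton
# Hessian H = A^T A / sigma^2 with eigenvalues well above 1.
n, m, sigma = 50, 200, 0.1
# Forward operator with only 3 strongly informed directions (by construction).
sv = np.array([10.0, 5.0, 2.0] + [1e-3] * (n - 3))
A = rng.normal(size=(m, n)) @ np.diag(sv) / np.sqrt(m)
H = A.T @ A / sigma**2
eig = np.sort(np.linalg.eigvalsh(H))[::-1]
rank = int(np.sum(eig > 1.0))  # directions where the data beat the prior
print("informed subspace dimension:", rank)  # -> 3
```

The posterior can then be sampled in the 3-dimensional informed subspace while the 47 complementary directions are left at their prior, which is the product-form approximation the abstract describes.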
Fletcher, Patrick; Bertram, Richard; Tabak, Joel
2016-06-01
Models of electrical activity in excitable cells involve nonlinear interactions between many ionic currents. Changing parameters in these models can produce a variety of activity patterns with sometimes unexpected effects. Furthermore, introducing new currents will have different effects depending on the initial parameter set. In this study we combined global sampling of parameter space and local analysis of representative parameter sets in a pituitary cell model to understand the effects of adding K+ conductances, which mediate some effects of hormone action on these cells. Global sampling ensured that the effects of introducing K+ conductances were captured across a wide variety of contexts of model parameters. For each type of K+ conductance we determined the types of behavioral transition that it evoked. Some transitions were counterintuitive, and may have been missed without the use of global sampling. In general, the wide range of transitions that occurred when the same current was applied to the model cell at different locations in parameter space highlights the challenge of making accurate model predictions in light of cell-to-cell heterogeneity. Finally, we used bifurcation analysis and fast/slow analysis to investigate why specific transitions occur in representative individual models. This approach relies on the use of a graphics processing unit (GPU) to quickly map parameter space to model behavior and identify parameter sets for further analysis. Acceleration with modern low-cost GPUs is particularly well suited to exploring the moderate-sized (5-20 parameter) spaces of excitable cell and signaling models.
Aerobrake assembly with minimum Space Station accommodation
NASA Technical Reports Server (NTRS)
Katzberg, Steven J.; Butler, David H.; Doggett, William R.; Russell, James W.; Hurban, Theresa
1991-01-01
The minimum Space Station Freedom accommodations required for initial assembly, repair, and refurbishment of the Lunar aerobrake were investigated. Baseline Space Station Freedom support services were assumed, as well as reasonable earth-to-orbit possibilities. A set of three aerobrake configurations representative of the major themes in aerobraking were developed. Structural assembly concepts, along with on-orbit assembly and refurbishment scenarios were created. The scenarios were exercised to identify required Space Station Freedom accommodations. Finally, important areas for follow-on study were also identified.
NASA Technical Reports Server (NTRS)
Crouch, R. K.; Fripp, A. L.; Debnam, W. J.; Clark, I. O.
1981-01-01
Crystals of the intermetallic compound Pb1-xSnxTe will be grown in furnaces on the Space Shuttle. The reasons for conducting this growth in space, the program of investigation to develop the space experiment and the requirements that are placed on the Space Shuttle furnace are discussed. Also included are relevant thermophysical properties of Pb1-xSnxTe to the degree to which they are known.
Nijran, Kuldip S; Houston, Alex S; Fleming, John S; Jarritt, Peter H; Heikkinen, Jari O; Skrypniuk, John V
2014-07-01
In this second UK audit of quantitative parameters obtained from renography, phantom simulations were used in cases in which the 'true' values could be estimated, allowing the accuracy of the parameters measured to be assessed. A renal physical phantom was used to generate a set of three phantom simulations (six kidney functions) acquired on three different gamma camera systems. A total of nine phantom simulations and three real patient studies were distributed to UK hospitals participating in the audit. Centres were asked to provide results for the following parameters: relative function and time-to-peak (whole kidney and cortical region). As with previous audits, a questionnaire collated information on methodology. Errors were assessed as the root mean square deviation from the true value. Sixty-one centres responded to the audit, with some hospitals providing multiple sets of results. Twenty-one centres provided a complete set of parameter measurements. Relative function and time-to-peak showed a reasonable degree of accuracy and precision in most UK centres. The overall average root mean squared deviation of the results for (i) the time-to-peak measurement for the whole kidney and (ii) the relative function measurement from the true value was 7.7 and 4.5%, respectively. These results showed a measure of consistency in the relative function and time-to-peak that was similar to the results reported in a previous renogram audit by our group. Analysis of audit data suggests a reasonable degree of accuracy in the quantification of renography function using relative function and time-to-peak measurements. However, it is reasonable to conclude that the objectives of the audit could not be fully realized because of the limitations of the mechanical phantom in providing true values for renal parameters.
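The audit's error metric, the root mean square deviation of each centre's reported value from the phantom "true" value, is straightforward to compute. The numbers below are illustrative only, not audit data:

```python
import numpy as np

# Root-mean-square deviation from the phantom "truth", as used to score
# centres in the audit.  Values below are made up for illustration.
true_rf = 50.0                               # true relative function, percent
reported = np.array([48.2, 51.0, 46.5, 53.1, 50.4])  # hypothetical centres
rmsd = np.sqrt(np.mean((reported - true_rf) ** 2))
print(round(rmsd, 2))  # -> 2.29
```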
Monitoring and telemedicine support in remote environments and in human space flight.
Cermack, M
2006-07-01
The common features of remote environments are geographical separation, logistic problems with health care delivery and with patient retrieval, extreme natural conditions, an artificial environment, or a combination of all of these. The exposure can have adverse effects on patients' physiology, on care providers' performance and on hardware functionality. The time to definitive treatment may vary between hours, as in orbital space flight, days for a remote exploratory camp, weeks for polar bases and months to years for interplanetary exploration. The generic system architecture, used in any telematic support, consists of data acquisition, data processing and storage, telecommunications links, decision-making facilities and the means of command execution. At the present level of technology, simple data transfer and two-way voice communication can be established from any place on Earth, but the current use of mobile communication technologies for telemedicine applications is still low, whether for logistic, economic and political reasons, or because of limited knowledge about the available technology and procedures. Criteria for the selection of portable telemedicine terminals in remote terrestrial places, characteristics of currently available mobile telecommunication systems, and the concept of integrated monitoring of physiological and environmental parameters are covered in the first section of this paper. The second part describes some aspects of emergency medical support in human orbital spaceflight and the limits of telemedicine support in the near-Earth space environment, and mentions some open issues related to long-term exploratory missions beyond low Earth orbit.
Pinjari, Rahul V; Delcey, Mickaël G; Guo, Meiyuan; Odelius, Michael; Lundberg, Marcus
2016-02-15
The restricted active-space (RAS) approach can accurately simulate metal L-edge X-ray absorption spectra of first-row transition metal complexes without the use of any fitting parameters. These characteristics provide a unique capability to identify unknown chemical species and to analyze their electronic structure. To find the best balance between cost and accuracy, the sensitivity of the simulated spectra with respect to the method variables has been tested for two models, [FeCl6](3-) and [Fe(CN)6](3-). For these systems, the reference calculations give deviations, when compared with experiment, of ≤1 eV in peak positions, ≤30% for the relative intensity of major peaks, and ≤50% for minor peaks. Compared with these deviations, the simulated spectra are sensitive to the number of final states, the inclusion of dynamical correlation, and the ionization potential-electron affinity shift, in addition to the selection of the active space. The spectra are less sensitive to the quality of the basis set, and even a double-ζ basis gives reasonable results. The inclusion of dynamical correlation through second-order perturbation theory can be done efficiently using the state-specific formalism without correlating the core orbitals. Although these observations are not directly transferable to other systems, they can, together with a cost analysis, aid in the design of RAS models and help to extend the use of this powerful approach to a wider range of transition metal systems. © 2015 Wiley Periodicals, Inc.
Curtis L. Vanderschaaf
2008-01-01
Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...
Exploring Replica-Exchange Wang-Landau sampling in higher-dimensional parameter space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentim, Alexandra; Rocha, Julio C. S.; Tsai, Shan-Ho
We considered a higher-dimensional extension of the replica-exchange Wang-Landau algorithm to perform a random walk in the energy and magnetization space of the two-dimensional Ising model. This hybrid scheme combines the advantages of the Wang-Landau and replica-exchange algorithms, and the one-dimensional version of this approach has been shown to be very efficient and to scale well, up to several thousands of computing cores. This approach allows us to split the parameter space of the system to be simulated into several pieces and still perform a random walk over the entire parameter range, ensuring the ergodicity of the simulation. Previous work, in which a similar scheme of parallel simulation was implemented without using replica exchange and with a different way of combining the results from the pieces, led to discontinuities in the final density of states over the entire range of parameters. From our simulations, it appears that the replica-exchange Wang-Landau algorithm is able to overcome this difficulty, allowing exploration of a higher-dimensional parameter phase space by keeping track of the joint density of states.
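As a much-reduced sketch of the underlying method, here is a single-walker Wang-Landau random walk in energy space alone (no replica exchange and no magnetization dimension, unlike the scheme above) for a 4x4 periodic Ising lattice; the run lengths and flatness schedule are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 4                                   # small periodic Ising lattice
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # Sum each nearest-neighbor bond once (right and down neighbors).
    return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

# Candidate energies of the periodic 4x4 Ising model: -32, -28, ..., 32
# (a few of these bins, e.g. +/-28, are physically unreachable and stay empty).
E_vals = np.arange(-2 * L * L, 2 * L * L + 1, 4)
idx = {E: i for i, E in enumerate(E_vals)}
ln_g = np.zeros(len(E_vals))            # running estimate of ln g(E)
hist = np.zeros(len(E_vals))
ln_f = 1.0                              # modification factor, reduced in stages

E = energy(spins)
while ln_f > 1e-3:
    for _ in range(20000):
        i, j = rng.integers(L), rng.integers(L)
        dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        E_new = E + dE
        # Wang-Landau acceptance: min(1, g(E)/g(E_new)) favors rare energies.
        if np.log(rng.random()) < ln_g[idx[E]] - ln_g[idx[E_new]]:
            spins[i, j] *= -1
            E = E_new
        ln_g[idx[E]] += ln_f            # update estimate at current energy
        hist[idx[E]] += 1
    ln_f /= 2.0                         # refine the estimate stage by stage

# The estimated density of states should peak near E = 0, where g(E) is largest.
print("ln g(E) peaks at E =", E_vals[np.argmax(ln_g)])
```

The replica-exchange extension described in the abstract splits the (E, M) range into overlapping windows, runs one such walker per window, and swaps configurations between neighboring windows.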
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
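The detection model underlying such simulations is typically a half-normal function of the distance between an animal's activity center and a trap. A minimal sketch of generating spatial capture histories on a regular trap grid, with invented parameter values rather than those of the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical study area: activity centers of N animals on a 10 x 10 square.
N, sigma, p0 = 50, 0.8, 0.3             # sigma is the spatial scale parameter
centers = rng.uniform(0, 10, size=(N, 2))

# Regularly spaced trap grid; spacing relative to sigma is what the
# simulations above vary (ideally no more than about 2 * sigma).
xs = np.linspace(1, 9, 5)
traps = np.array([(x, y) for x in xs for y in xs])

def detect(centers, traps, occasions=5):
    """Half-normal detection: p(d) = p0 * exp(-d^2 / (2 sigma^2))."""
    d2 = ((centers[:, None, :] - traps[None, :, :]) ** 2).sum(-1)
    p = p0 * np.exp(-d2 / (2 * sigma**2))
    return rng.random((occasions, N, len(traps))) < p   # capture histories

y = detect(centers, traps)
n_detected = y.any(axis=(0, 2)).sum()
print(n_detected, "of", N, "animals detected at least once")
```

Fitting a spatial capture-recapture model to such simulated histories, and comparing the recovered sigma and population size to the truth, is how the trap-configuration comparisons above are scored.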
NASA Astrophysics Data System (ADS)
Susyanto, Nanang
2017-12-01
We propose a simple derivation of the Cramer-Rao Lower Bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint of the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter on the appropriate inner product spaces.
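The resulting bound can be written compactly in null-space form. The notation below is assumed rather than taken from the paper: g(θ) = 0 is the equality constraint and U spans the null space of its Jacobian.

```latex
% Sketch of the constrained CRLB in assumed notation.
% Equality constraint: g(\theta) = 0 with Jacobian
% G(\theta) = \partial g / \partial \theta^{T}.
% Let the columns of U span the null space of G: G U = 0, U^{T} U = I.
% With Fisher information I(\theta) of the unconstrained model, an
% unbiased estimator \hat{\theta} satisfying the constraint obeys
\operatorname{Cov}(\hat{\theta}) \succeq
    U \left( U^{T} I(\theta)\, U \right)^{-1} U^{T}
```

This is the standard null-space form of the constrained bound, consistent with the submodel-and-projection derivation summarized above: restricting to the constraint surface replaces I(θ) by its restriction to the tangent space spanned by U.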
Constraining neutron guide optimizations with phase-space considerations
NASA Astrophysics Data System (ADS)
Bertelsen, Mads; Lefmann, Kim
2016-09-01
We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to that of guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing abilities of the guide to be optimized, ranging from perfectly focusing to no correlation between position and velocity. The second parameter controls the neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints which demonstrate a higher signal-to-noise ratio than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, which shows that the simulated optimal neutron intake is close to the analytically predicted value, when assuming that the guide is dominated by multiple scattering events.
Turbulence in space plasmas and beyond
NASA Astrophysics Data System (ADS)
Galtier, S.
2018-07-01
Most of the visible matter in the Universe is in the form of highly turbulent plasmas. For a long time the turbulent character of astrophysical fluids was neglected and not well understood. One reason for this is the extremely complicated physics involved in astrophysical processes, ranging from the machinery of stars, solar and stellar winds, and accretion disks to interstellar clouds and galaxies. The other reason is that turbulence constitutes in itself a difficult subject, where most of the fundamental results belong to incompressible hydrodynamics. Nevertheless, significant theoretical progress has been made in recent years to incorporate ingredients such as compressibility and small-scale plasma physics, which are fundamental in astrophysics. This paper reviews some of these results with a strong focus on space plasmas (solar wind, solar corona). Turbulence in interstellar clouds (supersonic flows) and cosmology (space-time fluctuations) are also briefly mentioned.
Future Directions for Fusion Propulsion Research at NASA
NASA Technical Reports Server (NTRS)
Adams, Robert B.; Cassibry, Jason T.
2005-01-01
Fusion propulsion is inevitable if the human race remains dedicated to exploration of the solar system. There are fundamental reasons why fusion surpasses more traditional approaches for routine crewed missions to Mars, crewed missions to the outer planets, and deep space high-speed robotic missions, assuming that reduced trip times, increased payloads, and higher available power are desired. A recent series of informal discussions was held among members from government, academia, and industry concerning fusion propulsion. We compiled a sufficient set of arguments for utilizing fusion in space. If the U.S. is to lead the effort and produce a working system in a reasonable amount of time, NASA must take the initiative, relying on, but not waiting for, DOE guidance. Arguments for fusion propulsion are presented, along with fusion-enabled mission examples, the fusion technology trade space, and a proposed outline for future efforts.
The space exploration initiative
NASA Technical Reports Server (NTRS)
Priest, Pete
1991-01-01
A number of view graph charts are presented which outline the presentation. Outlined are reasons for going to Mars, why it is necessary to go to the Moon first, and the presidential decision on the space exploration initiative. Other representative charts are entitled: Lunar transportation system requirement drivers; Mars transportation system requirement drivers; National space policy goals; Exploration hardware needed; Mars mission profile; Science on the Moon and Mars; and Two independent reviews.
NASA Technical Reports Server (NTRS)
Grissom, D. S.; Schneider, W. C.
1971-01-01
The determination of a base line (minimum weight) design for the primary structure of the living quarters modules in an earth-orbiting space base was investigated. Although the design is preliminary in nature, the supporting analysis is sufficiently thorough to provide a reasonably accurate weight estimate of the major components that are considered to comprise the structural weight of the space base.
Space Vehicle Flight Mechanics (La Mecanique du Vol des Vehicules Spatiaux)
1990-06-01
uncertainties to a reasonably manageable level". Some of the chief anxieties were as... a single-stage-to-orbit vehicle (without supersonic combustion)... their landing on the moon or to manning space stations orbiting Earth, there exists an enormous infrastructure of scientists, engineers, managers and politicians who together allow these ventures to come to fruition. This paper addresses the evolution of space flight, the technical and management
Spontaneous symmetry breaking in a two-lane model for bidirectional overtaking traffic
NASA Astrophysics Data System (ADS)
Appert-Rolland, C.; Hilhorst, H. J.; Schehr, G.
2010-08-01
Firstly, we consider a unidirectional flux $\bar{\omega}$ of vehicles, each of which is characterized by its 'natural' velocity v drawn from a distribution P(v). The traffic flow is modeled as a collection of straight 'world lines' in the time-space plane, with overtaking events represented by a fixed queuing time τ imposed on the overtaking vehicle. This geometrical model exhibits platoon formation and allows, among many other things, for the calculation of the effective average velocity $w \equiv \phi(v)$ of a vehicle of natural velocity v. Secondly, we extend the model to two opposite lanes, A and B. We argue that the queuing time τ in one lane is determined by the traffic density in the opposite lane. On the basis of reasonable additional assumptions we establish a set of equations that couple the two lanes and can be solved numerically. It appears that above a critical value $\bar{\omega}_{\mathrm{c}}$ of the control parameter $\bar{\omega}$ the symmetry between the lanes is spontaneously broken: there is a slow lane where long platoons form behind the slowest vehicles, and a fast lane where overtaking is easy due to the wide spacing between the platoons in the opposite direction. A variant of the model is studied in which the spatial vehicle density $\bar{\rho}$ rather than the flux $\bar{\omega}$ is the control parameter. Unequal fluxes $\bar{\omega}_{\mathrm{A}}$ and $\bar{\omega}_{\mathrm{B}}$ in the two lanes are also considered. The symmetry breaking phenomenon exhibited by this model, even though no doubt hard to observe in pure form in real-life traffic, nevertheless indicates a tendency of such traffic.
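The platoon-formation mechanism is easiest to see in the extreme no-overtaking limit (τ → ∞), where each vehicle is slowed to the minimum natural velocity of all vehicles ahead of it. A toy sketch of that limit, not of the paper's finite-τ queuing model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical single-lane limit with no overtaking (tau -> infinity):
# each vehicle moves at the minimum natural velocity among itself and
# all vehicles ahead, so platoons collect behind the slowest drivers.
v = rng.uniform(20, 40, size=100)        # natural velocities, front to back
w = np.minimum.accumulate(v)             # effective velocities

n_platoons = 1 + np.sum(np.diff(w) < 0)  # one new platoon per slowdown
print(n_platoons, "platoons among 100 vehicles")
```

With a finite queuing time τ, as in the paper, fast vehicles can escape platoons at a rate set by the opposing-lane traffic, which is what couples the two lanes and drives the symmetry breaking.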
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yidong Xia; Mitch Plummer; Robert Podgorney
2016-02-01
Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite-element-based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercial, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.
Jeong, Hoon Y.; Lee, Jun H.; Hayes, Kim F.
2010-01-01
Iron sulfide was synthesized by reacting aqueous solutions of sodium sulfide and ferrous chloride for 3 days. By X-ray powder diffraction (XRPD), the resultant phase was determined to be primarily nanocrystalline mackinawite (space group: P4/nmm) with unit cell parameters a = b = 3.67 Å and c = 5.20 Å. Iron K-edge XAS analysis also indicated the dominance of mackinawite. Lattice expansion of synthetic mackinawite was observed along the c-axis relative to well-crystalline mackinawite. Compared with a relatively short-aged phase, the mackinawite prepared here was composed of larger crystallites with less elongated lattice spacings. The direct observation of lattice fringes by HR-TEM verified the applicability of Bragg diffraction in determining the lattice parameters of nanocrystalline mackinawite from XRPD patterns. Estimated particle size and external specific surface area (SSAext) of nanocrystalline mackinawite varied significantly with the methods used. The use of the Scherrer equation for measuring crystallite size from XRPD patterns is limited by uncertainty in the Scherrer constant (K) due to the presence of polydisperse particles. The presence of polycrystalline particles may also lead to inaccurate particle size estimation by the Scherrer equation, given that crystallite and particle sizes are not equivalent. The TEM observation yielded the smallest SSAext of 103 m2/g. This measurement was not representative of dispersed particles due to particle aggregation from drying during sample preparation. In contrast, the EGME method and PCS measurement yielded higher SSAext (276–345 m2/g by EGME and 424 ± 130 m2/g by PCS). These were in reasonable agreement with values previously measured by methods insensitive to particle aggregation. PMID:21085620
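The Scherrer estimate discussed above, D = Kλ/(β cos θ), is a one-line computation. The sketch below is illustrative only: the Cu Kα wavelength, peak position and peak width are invented numbers, not values from this study, and K = 0.9 is just the conventional default whose uncertainty the authors caution about.

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).
    beta is the peak FWHM converted to radians; theta is half of 2-theta."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative numbers (not from the paper): Cu K-alpha radiation,
# a broad low-angle peak typical of a nanocrystalline phase.
D = scherrer_size(wavelength_nm=0.15406, fwhm_deg=2.0, two_theta_deg=17.5)
```

A broader peak gives a smaller size estimate, which is why peak-width errors (and the choice of K) propagate directly into the crystallite size.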
Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore
Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.
2013-01-01
The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) are desirable when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for the desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.
Occlusal traits of deciduous dentition of preschool Indian children
Bahadure, Rakesh N.; Thosar, Nilima; Gaikwad, Rahul
2012-01-01
Objectives: To assess the occlusal relationship, canine relationship, crowding, primate spaces, and anterior spacing in both maxillary and mandibular arches of the primary dentition of Indian children of Wardha District, and to study age-wise differences in occlusal characteristics. Materials and Methods: A total of 1053 children (609 males and 444 females) in the 3-5 year age group with complete primary dentition were examined for occlusal relationship, canine relationship, crowding, primate spaces, and anterior spacing in both maxillary and mandibular arches. Results: The data after evaluation showed significant values for all parameters except mandibular anterior spacing, which was 47.6%. Mild crowding was prevalent in the 5-year age group and moderate crowding was common in the 3-year age group. Conclusion: Evaluated parameters such as terminal molar relationship and canine relationship were predominantly progressing toward normal, but contact and crowding status contributed almost equally to physiologic anterior spacing. The 5-year age group showed higher values for all the parameters. PMID:23633806
The computer program SPARC (SPARC Performs Automated Reasoning in Chemistry) has been under development for several years to estimate physical properties and chemical reactivity parameters of organic compounds strictly from molecular structure. SPARC uses computational algorithms...
Equations with Parameters: A Locus Approach
ERIC Educational Resources Information Center
Abramovich, Sergei; Norton, Anderson
2006-01-01
This paper introduces technology-based teaching ideas that facilitate the development of qualitative reasoning techniques in the context of quadratic equations with parameters. It reflects on activities designed for and used with prospective secondary mathematics teachers in accord with standards for teaching and recommendations for teachers in…
NASA Technical Reports Server (NTRS)
Quinn, Todd M.; Walters, Jerry L.
1991-01-01
Future space explorations will require long-term human presence in space. Space environments that provide working and living quarters for manned missions are becoming increasingly larger and more sophisticated. Monitoring and control of the space environment subsystems by expert system software, which emulates human reasoning processes, could maintain the health of the subsystems and help reduce the human workload. The autonomous power expert (APEX) system was developed to emulate a human expert's reasoning processes used to diagnose fault conditions in the domain of space power distribution. APEX is a fault detection, isolation, and recovery (FDIR) system, capable of autonomous monitoring and control of the power distribution system. APEX consists of a knowledge base, a data base, an inference engine, and various support and interface software. APEX provides the user with an easy-to-use interactive interface. When a fault is detected, APEX will inform the user of the detection. The user can direct APEX to isolate the probable cause of the fault. Once a fault has been isolated, the user can ask APEX to justify its fault isolation and to recommend actions to correct the fault. APEX implementation and capabilities are discussed.
NASA Astrophysics Data System (ADS)
Ma, Lei; Wang, Yizhong; Xu, Qingyang; Huang, Huafang; Zhang, Rui; Chen, Ning
2009-11-01
The main production method of branched-chain amino acid (BCAA) is microbial fermentation. In this paper, to monitor and control the fermentation process of BCAA, especially its logarithmic phase, parameters such as the color of the fermentation broth, culture temperature, pH, rotation speed, dissolved oxygen, airflow rate, pressure, optical density, and residual glucose are measured, controlled, and/or adjusted. The color of the fermentation broth is measured using the HSI color model and a BP neural network; the network's input is the histograms of hue H and saturation S, and its output is the color description. Fermentation process parameters are adjusted using fuzzy reasoning, which is performed by inference rules. According to the practical situation of the BCAA fermentation process, all parameters are divided into four grades, and different fuzzy rules are established.
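The grade-and-rule scheme described can be sketched with a toy fuzzy controller. Everything below is invented for illustration (the parameter, grade boundaries and rule table are not from the paper): dissolved oxygen is graded by triangular membership functions and a small rule base is defuzzified into an airflow correction.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Invented grades for dissolved oxygen (% saturation).
def do_grades(do_pct):
    return {
        "low":  tri(do_pct, -1.0, 10.0, 25.0),
        "ok":   tri(do_pct, 15.0, 30.0, 45.0),
        "high": tri(do_pct, 35.0, 60.0, 101.0),
    }

# Invented rule base: grade -> airflow correction (L/min), combined by a
# membership-weighted average (a common defuzzification shortcut).
RULES = {"low": +2.0, "ok": 0.0, "high": -2.0}

def airflow_correction(do_pct):
    g = do_grades(do_pct)
    total = sum(g.values())
    return sum(g[k] * RULES[k] for k in g) / total if total else 0.0
```

A real controller would grade several parameters at once and use a fuller rule table, but the inference pattern (fuzzify, fire rules, defuzzify) is the same.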
A Real-Time Apple Grading System Using Multicolor Space
2014-01-01
This study focused on the multicolor space, which provides a better specification of the color and size of the apple in an image. In the study, a real-time machine vision system classifying apples into four categories with respect to color and size was designed. In the analysis, different color spaces were used. As a result, 97% identification success for the red fields of the apple was obtained depending on the values of the parameter “a” of the CIE L*a*b* color space. Similarly, 94% identification success for the yellow fields was obtained depending on the values of the parameter y of the CIE XYZ color space. With the designed system, three kinds of apples (Golden, Starking, and Jonagold) were investigated by classifying them into four groups with respect to two parameters, color and size. Finally, a 99% success rate was achieved in the analyses conducted for 595 apples. PMID:24574880
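The CIE L*a*b* red test described (thresholding the a* channel) can be sketched as follows. The conversion uses the standard sRGB → XYZ → L*a*b* formulas with a D65 white point; the threshold value and helper names are invented, not taken from the study.

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIE L*a*b* (D65 white) via the standard formulas."""
    def lin(u):  # sRGB gamma expansion
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    R, G, B = lin(r), lin(g), lin(b)
    # sRGB -> XYZ (D65 matrix)
    X = 0.4124 * R + 0.3576 * G + 0.1805 * B
    Y = 0.2126 * R + 0.7152 * G + 0.0722 * B
    Z = 0.0193 * R + 0.1192 * G + 0.9505 * B
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    # Normalize by the D65 reference white.
    fx, fy, fz = f(X / 0.95047), f(Y / 1.0), f(Z / 1.08883)
    L = 116 * fy - 16
    return L, 500 * (fx - fy), 200 * (fy - fz)

def looks_red(r, g, b, a_threshold=20.0):
    """Hypothetical red test: positive a* leans red on the red-green axis.
    The threshold is illustrative, not the paper's calibrated value."""
    _, a_star, _ = srgb_to_lab(r, g, b)
    return a_star > a_threshold
```

In a grading pipeline this test would run per pixel, and the fraction of "red" pixels (together with a size estimate) would drive the four-way classification.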
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Hall, Joel; Karelse, Robert N.
2017-11-01
Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. A number of data-worth and experimental design strategies have been developed for this purpose. However, these studies often ignore issues related to real-world groundwater models, such as computational expense, existing observation data, and high parameter dimensionality. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion, and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. Finally, a heuristic methodology, based on the concept of the greedy algorithm, is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
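The greedy idea mentioned for building designs can be illustrated on a toy two-parameter problem: repeatedly add the candidate observation whose sensitivity row most increases log det(JᵀJ), the d-optimality criterion. This is an invented miniature, not the authors' implementation (which works with posterior parameter samples and a minimax criterion).

```python
import math

def logdet2(m):
    """log-determinant of a 2x2 symmetric positive-definite matrix."""
    (a, b), (_, c) = m
    return math.log(a * c - b * b)

def greedy_design(candidates, n_pick, reg=1e-6):
    """Greedily pick sensitivity rows h = (h1, h2) that maximize
    log det(J^T J + reg*I), i.e. the d-optimality criterion."""
    M = [[reg, 0.0], [0.0, reg]]
    chosen, remaining = [], list(candidates)
    for _ in range(n_pick):
        def gain(h):  # criterion after tentatively adding h h^T
            a = [[M[0][0] + h[0] * h[0], M[0][1] + h[0] * h[1]],
                 [M[1][0] + h[1] * h[0], M[1][1] + h[1] * h[1]]]
            return logdet2(a)
        best = max(remaining, key=gain)
        remaining.remove(best)
        chosen.append(best)
        M = [[M[0][0] + best[0] * best[0], M[0][1] + best[0] * best[1]],
             [M[1][0] + best[1] * best[0], M[1][1] + best[1] * best[1]]]
    return chosen

# Greedy selection prefers observations sensitive to *different* parameters:
picked = greedy_design([(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.1)], 2)
```

After picking an observation sensitive to the first parameter, the next best pick is one sensitive to the second, since a nearly parallel row adds little information.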
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems in model uncertainty quantification.
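The temperature-ladder idea can be demonstrated on a conjugate Gaussian toy model where the power posterior is itself Gaussian (so it can be sampled directly instead of by MCMC) and the marginal likelihood is known in closed form. All numbers below are invented for illustration.

```python
import math, random

# Toy model: theta ~ N(0, 1), y | theta ~ N(theta, 1), observed y = 1.
Y = 1.0

def log_like(theta):
    return -0.5 * math.log(2 * math.pi) - 0.5 * (Y - theta) ** 2

def power_posterior_sample(beta, rng):
    """p_beta(theta) ∝ prior * likelihood**beta is Gaussian here:
    precision 1 + beta, mean beta*Y/(1 + beta)."""
    mean = beta * Y / (1.0 + beta)
    sd = 1.0 / math.sqrt(1.0 + beta)
    return rng.gauss(mean, sd)

def thermo_log_ml(n_temp=11, n_samp=20000, seed=1):
    """Thermodynamic integration: log Z = integral over beta in [0, 1]
    of E_beta[log L], approximated by the trapezoidal rule on a ladder."""
    rng = random.Random(seed)
    betas = [i / (n_temp - 1) for i in range(n_temp)]
    means = []
    for b in betas:
        s = sum(log_like(power_posterior_sample(b, rng)) for _ in range(n_samp))
        means.append(s / n_samp)
    h = betas[1] - betas[0]
    return h * (0.5 * means[0] + sum(means[1:-1]) + 0.5 * means[-1])

# Analytic marginal likelihood for comparison: y ~ N(0, 2).
exact = -0.5 * math.log(2 * math.pi * 2.0) - Y ** 2 / 4.0
```

In a real model the per-temperature expectations come from MCMC chains rather than direct sampling, but the integration over the heating coefficient is identical.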
The diagnostic capability of iron lines
NASA Astrophysics Data System (ADS)
Giannini, Teresa; Nisini, Brunella; Antoniucci, Simone; Alcala, Juan; Bacciotti, Francesca; Bonito, Rosaria; Podio, Linda; Stelzer, Beate; Whelan, Emma
2013-07-01
We present the VLT/X-Shooter spectra of two jets from young protostars of different luminosity and mass, ESO-Hα 574 and Par-Lup 3-4. In the covered spectral range (350-2500 nm) we detected more than 100 [FeII] and [FeIII] lines, which are used to precisely probe the key physical parameters of the gas (electron density and temperature, ionization degree, visual extinction). These quantities have been compared with shock-model predictions, which suggest that only the higher-luminosity source (ESO-Hα 574) is able to drive a high-velocity and dissociative shock. The diagnostic capability of iron, demonstrated on the presented objects, represents a unique tool for the following reasons: 1) the large number of lines in the UV-infrared range makes it possible to trace the physical conditions over a very large range of the parameter space; 2) unlike the diagnostics commonly performed with other species, such as oxygen, nitrogen, and sulphur, no assumption on the relative abundance is needed, since all the parameters are derived from line ratios of the same species; 3) in the unperturbed ISM, iron is locked on the grain surfaces, while it is released into the gas phase if gas-grain or grain-grain collisions occur within a shock. Therefore the iron abundance (derivable from ratios of iron lines with those of other volatile species) is a direct probe of the presence of dust in the jet beam, information crucial to understanding whether jets originate close to the star or in the circumstellar disk.
A personalized health-monitoring system for elderly by combining rules and case-based reasoning.
Ahmed, Mobyen Uddin
2015-01-01
A health-monitoring system for the elderly in the home environment is a promising solution for providing efficient medical services, and is of increasing interest to researchers in this area. It is often more challenging when the system is self-served and functions as a personalized provision. This paper proposes a personalized, self-served health-monitoring system for the elderly in the home environment that combines general rules with a case-based reasoning approach. Here, the system generates feedback, recommendations and alarms in a personalized manner based on the elderly person's medical information and health parameters such as blood pressure, blood glucose, weight, activity, pulse, etc. A set of general rules is used to classify individual health parameters. The case-based reasoning approach is used to combine all the different health parameters, which generates an overall classification of health condition. According to the evaluation result considering 323 cases and k=2, i.e., the top 2 most similar retrieved cases, the sensitivity, specificity and overall accuracy achieved are 90%, 97% and 96%, respectively. The preliminary result of the system is acceptable since the feedback, recommendation and alarm messages are personalized and differ from the general messages. Thus, this approach could possibly be adapted for other situations in personalized elderly monitoring.
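The two-layer design described (rules grade each parameter, then case retrieval combines the grades) can be sketched as follows. The thresholds, case base, distance measure and k are invented for illustration and are not the paper's values.

```python
def grade(param, value):
    """Rule layer: map a raw reading to an ordinal grade 0 (low),
    1 (normal), 2 (high). Thresholds are illustrative only."""
    low, high = {"systolic_bp": (90, 140), "glucose": (4.0, 7.8),
                 "pulse": (60, 100)}[param]
    return 0 if value < low else (2 if value > high else 1)

def retrieve(query, case_base, k=2):
    """CBR layer: k nearest stored cases by Manhattan distance on the
    grade vectors; majority vote of their stored outcomes."""
    graded = [grade(p, v) for p, v in sorted(query.items())]
    def dist(case):
        feats = [grade(p, v) for p, v in sorted(case["obs"].items())]
        return sum(abs(a - b) for a, b in zip(graded, feats))
    top = sorted(case_base, key=dist)[:k]
    votes = [c["outcome"] for c in top]
    return max(set(votes), key=votes.count)

# Invented case base of past assessments.
cases = [
    {"obs": {"systolic_bp": 120, "glucose": 5.0, "pulse": 70}, "outcome": "ok"},
    {"obs": {"systolic_bp": 125, "glucose": 5.5, "pulse": 75}, "outcome": "ok"},
    {"obs": {"systolic_bp": 160, "glucose": 9.0, "pulse": 110}, "outcome": "alarm"},
]
```

The grading step makes heterogeneous vitals commensurable, so a single distance over grades can combine them, which is the point of coupling the rules with the case retrieval.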
Rough case-based reasoning system for continuous casting
NASA Astrophysics Data System (ADS)
Su, Wenbin; Lei, Zhufeng
2018-04-01
Continuous casting occupies a pivotal position in the iron and steel industry. Rough set theory and case-based reasoning (CBR) were combined in the research and implementation of quality assurance for continuous casting billet, to improve the efficiency and accuracy of determining the processing parameters. The object-oriented method was applied to represent the continuous casting cases. The weights of the attributes were calculated by an algorithm based on rough set theory, and a retrieval mechanism for the continuous casting cases was designed. Some cases were adopted to test the retrieval mechanism; by analyzing the results, the law governing the influence of the retrieval attributes on the processing parameters was revealed. A comprehensive evaluation model was established using attribute recognition theory. According to the features of the defects, different methods were adopted to describe the quality condition of the continuous casting billet. By using the system, the knowledge was not only inherited but also applied to adjust the processing parameters through case-based reasoning, so as to assure the quality of the continuous casting and improve the intelligent level of the continuous casting process.
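A common way to derive attribute weights from rough set theory, plausibly close to what is described here, is the significance measure: the drop in the dependency degree γ = |POS|/|U| when an attribute is removed. The toy decision table below (casting-flavored attribute values) is invented for illustration.

```python
from collections import defaultdict

def positive_region_size(rows, cond_idx, dec_idx):
    """Count objects whose condition-attribute equivalence class is
    consistent (all members share the same decision value)."""
    classes = defaultdict(list)
    for row in rows:
        key = tuple(row[i] for i in cond_idx)
        classes[key].append(row[dec_idx])
    return sum(len(v) for v in classes.values() if len(set(v)) == 1)

def attribute_weights(rows, cond_idx, dec_idx):
    """Rough-set significance of each condition attribute: the drop in
    the dependency degree gamma = |POS|/|U| when it is removed."""
    n = len(rows)
    full = positive_region_size(rows, cond_idx, dec_idx) / n
    return {a: full - positive_region_size(
                rows, [b for b in cond_idx if b != a], dec_idx) / n
            for a in cond_idx}

# Invented table: (superheat, casting speed, cooling, quality decision).
rows = [
    ("high", "fast", "weak", "defect"),
    ("high", "slow", "weak", "ok"),
    ("low",  "fast", "weak", "ok"),
    ("low",  "slow", "strong", "ok"),
]
w = attribute_weights(rows, [0, 1, 2], 3)
```

Here the cooling attribute gets zero weight because removing it loses no discriminating power, while superheat and speed are both needed to separate the defective case; such weights would then feed the similarity measure used in case retrieval.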
NASA Technical Reports Server (NTRS)
Kurth, William S.
1991-01-01
The Plasma Diagnostics Package (PDP) is a spacecraft which was designed and built at The University of Iowa and which contained several scientific instruments. These instruments were used for measuring Space Shuttle Orbiter environmental parameters and plasma parameters. The PDP flew on two Space Shuttle flights. The first flight of the PDP was on Space Shuttle Mission STS-3 and was a part of the NASA/Office of Space Science payload (OSS-1). The second flight of the PDP was on Space Shuttle Mission STS/51F and was a part of Spacelab 2. The interpretation of both the OSS-1 and Spacelab 2 PDP results in terms of large space structure plasma interactions is emphasized.
An Integrated Optimal Estimation Approach to Spitzer Space Telescope Focal Plane Survey
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kang, Bryan H.; Brugarolas, Paul B.; Boussalis, D.
2004-01-01
This paper discusses an accurate and efficient method for focal plane survey that was used for the Spitzer Space Telescope. The approach is based on using a high-order 37-state Instrument Pointing Frame (IPF) Kalman filter that combines both engineering parameters and science parameters into a single filter formulation. In this approach, engineering parameters such as pointing alignments, thermomechanical drift and gyro drifts are estimated along with science parameters such as plate scales and optical distortions. This integrated approach has many advantages compared to estimating the engineering and science parameters separately. The resulting focal plane survey approach is applicable to a diverse range of science instruments such as imaging cameras, spectroscopy slits, and scanning-type arrays alike. The paper will summarize results from applying the IPF Kalman Filter to calibrating the Spitzer Space Telescope focal plane, containing the MIPS, IRAC, and the IRS science Instrument arrays.
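The integrated-filter idea, one Kalman filter jointly estimating engineering and science parameters, can be illustrated with a deliberately tiny stand-in: a two-state filter estimating a "science" scale factor and an "engineering" bias from the same scalar measurements. This is not the 37-state IPF filter; the model and all numbers are invented.

```python
import random

def kalman_joint(measurements, signals, r=0.01, q=1e-8):
    """Jointly estimate x = (scale, bias) from y_k = scale*s_k + bias + v_k,
    with known signal s_k, measurement variance r, tiny process noise q."""
    x = [0.0, 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for y, s in zip(measurements, signals):
        # predict: states are constant; inflate covariance slightly
        P = [[P[0][0] + q, P[0][1]], [P[1][0], P[1][1] + q]]
        # update with measurement row H = [s, 1]
        Ph = [P[0][0] * s + P[0][1], P[1][0] * s + P[1][1]]   # P H^T
        S = s * Ph[0] + Ph[1] + r                              # innovation var
        K = [Ph[0] / S, Ph[1] / S]                             # gain
        innov = y - (x[0] * s + x[1])
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[P[0][0] - K[0] * (s * P[0][0] + P[1][0]),
              P[0][1] - K[0] * (s * P[0][1] + P[1][1])],
             [P[1][0] - K[1] * (s * P[0][0] + P[1][0]),
              P[1][1] - K[1] * (s * P[0][1] + P[1][1])]]
    return x

# Synthetic data: true scale 1.3, true bias -0.2, noise sd 0.1.
rng = random.Random(7)
signals = [rng.uniform(-1.0, 1.0) for _ in range(300)]
meas = [1.3 * s - 0.2 + rng.gauss(0.0, 0.1) for s in signals]
scale_hat, bias_hat = kalman_joint(meas, signals)
```

Estimating both states in one filter lets the measurements themselves apportion error between the two, which is the advantage the abstract claims over calibrating engineering and science parameters separately.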
Urban and Spatial Opposition by the Subject
NASA Astrophysics Data System (ADS)
Saka, Gizem; Kırcı, Nazan
2017-10-01
In the production of spaces, an important aspect, 'the subject', was neglected under the influence of the industrial revolution, modernisation, capitalism and neo-liberalism. While rationalist reason was standardising and extending production, the relationship between space and its user was broken off. A tremendous change began when the subject, as the user of spaces, singled out his own existence and needs from the whole and comprehended his self-distinctiveness. Such a split, indicating the act of critical thinking and the liberation of the subject, also created a demand for diversity. The demands of the subject, being the user of the space, were not met at the architectural and urban levels for several reasons. The subject, feeling the discomfort of such a situation, brings his criticisms into view first in his own individual space and then in public space, for the purpose of expressing his right to live and his standing. Such acts, classified as adversary, are carried out in order to provide the adaptability of the subject and the space to changing living conditions by different means. Such adversary touches, provided partly by urbanites and partly by professionals, draw attention to the issue through by-pass interventions in architecturally choked urban areas. By taking a stance against the existing situation, the intention is to treat space in a different way than what has been produced by the system, to re-produce it and to render it more democratic. All such alternative spatial situations show us that other production methods and lines of thought, beyond what has been defined by the dominant market conditions, are also possible. It has been asserted through these adversary instigations that during the act of design, architects and planners need micro designs oriented toward the daily and changing needs of the subject as a user.
For this reason, the part played by the designer should be freed from 'defining' and 'controlling' effects and should turn towards using the transformational power of society for its own benefit, leading the user and providing alternatives.
Transformations and representations supporting spatial perspective taking
Yu, Alfred B.; Zacks, Jeffrey M.
2018-01-01
Spatial perspective taking is the ability to reason about spatial relations relative to another's viewpoint. Here, we propose a mechanistic hypothesis that relates mental representations of one's viewpoint to the transformations used for spatial perspective taking. We test this hypothesis using a novel behavioral paradigm that assays patterns of response time and variation in those patterns across people. The results support the hypothesis that people maintain a schematic representation of the space around their body, update that representation to take another's perspective, and thereby reason about the space around the other's body. This is a powerful computational mechanism that can support imitation, coordination of behavior, and observational learning. PMID:29545731
Atomic Radius and Charge Parameter Uncertainty in Biomolecular Solvation Energy Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Gao, Peiyuan
Atomic radii and charges are two major parameters used in implicit solvent electrostatics and energy calculations. The optimization problem for charges and radii is under-determined, leading to uncertainty in the values of these parameters and in the results of solvation energy calculations using these parameters. This paper presents a method for quantifying this uncertainty in solvation energies using surrogate models based on generalized polynomial chaos (gPC) expansions. There are relatively few atom types used to specify radii parameters in implicit solvation calculations; therefore, surrogate models for these low-dimensional spaces could be constructed using least-squares fitting. However, there are many more types of atomic charges; therefore, construction of surrogate models for the charge parameter space required compressed sensing combined with an iterative rotation method to enhance problem sparsity. We present results for the uncertainty in small-molecule solvation energies based on these approaches. Additionally, we explore the correlation between uncertainties due to radii and charges, which motivates the need for future work on uncertainty quantification methods for high-dimensional parameter spaces.
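The least-squares surrogate construction mentioned for the low-dimensional radius space can be sketched in one dimension: fit a degree-2 Legendre (gPC) expansion to a model response by ordinary least squares. The target "solvation energy" function below is invented; for a quadratic target the fit recovers the Legendre coefficients exactly.

```python
def legendre(n, x):
    """Legendre polynomials P0..P2, the gPC basis for a uniform input on [-1, 1]."""
    return [1.0, x, 1.5 * x * x - 0.5][n]

def gauss_solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for the normal equations."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_gpc(f, xs, degree=2):
    """Least-squares fit of sum_n c_n P_n(x) via the normal equations."""
    m = degree + 1
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for x in xs:
        phi = [legendre(n, x) for n in range(m)]
        y = f(x)
        for i in range(m):
            b[i] += phi[i] * y
            for j in range(m):
                A[i][j] += phi[i] * phi[j]
    return gauss_solve(A, b)

# Invented response: quadratic in the scaled radius parameter.
target = lambda x: 2.0 + 0.5 * x + 0.3 * x * x
xs = [-1 + 0.1 * i for i in range(21)]
coeffs = fit_gpc(target, xs)
```

With many charge-type dimensions the number of basis terms explodes and the design matrix becomes underdetermined, which is exactly why the paper switches from least squares to compressed sensing for the charge space.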
NASA Astrophysics Data System (ADS)
Baumstark-Khan, C.
DNA damage and its repair processes are key factors in cancer induction and also in the treatment of malignancies. Cancer prevention during extended space missions has become a topic of great importance for space radiobiology. Knowledge of individual responsiveness would allow the protection strategy to be tailored optimally in each case. Radiobiological analysis of cultured cells derived from tissue explants from individuals has shown that measurement of the surviving fraction after 2 Gy (SF2) may be used to predict individual responsiveness. However, clonogenic assays are time-consuming, so alternative assays for the determination of radioresponse are being sought. For that reason, CHO cell strains with different repair capacities were used to examine whether DNA strand break repair is a suitable experimental endpoint for predictive statements. Cellular survival (CFA assay) and DNA strand breaks (total DNA strand breaks: FADU technique; DSBs: non-denaturing elution) were determined in parallel, as a function of dose, immediately after irradiation as well as after a 24-hour recovery period. There were no correlations between the dose-response curves of the initial level of DNA strand breaks and the parameters that describe clonogenic survival curves (SF2). A good correlation exists between intrinsic cellular radioresistance and the extent of residual DNA strand breaks.
Propagation and wavefront ambiguity of linear nondiffracting beams
NASA Astrophysics Data System (ADS)
Grunwald, R.; Bock, M.
2014-02-01
Ultrashort-pulsed Bessel and Airy beams in free space are often interpreted as "linear light bullets". Usually, interconnected intensity profiles are considered to "propagate" along arbitrary pathways, which can even follow curved trajectories. A more detailed analysis, however, shows that this picture gives an adequate description only in situations which do not require consideration of the transport of optical signals or causality. To also cover these special cases, a generalization of the terms "beam" and "propagation" is necessary. The problem becomes clearer by representing the angular spectra of the propagating wave fields by rays or Poynting vectors. It is known that quasi-nondiffracting beams can be described as caustics of ray bundles. Their decomposition into Poynting vectors by Shack-Hartmann sensors indicates that, in the frame of their classical definition, the corresponding local wavefronts are ambiguous and concepts based on energy density are not appropriate to describe the propagation completely. For this reason, quantitative parameters like the beam propagation factor have to be treated with caution as well. For applications like communication or optical computing, alternative descriptions are required. A heuristic approach based on vector-field-based information transport and Fourier analysis is proposed here. Continuity and discontinuity of far-field distributions in space and time are discussed. Quantum aspects of propagation are briefly addressed.
NASA Astrophysics Data System (ADS)
Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio
2018-04-01
We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space, from a noisy observation vector y of its image through a known, possibly non-linear map 𝒢. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al (2009 Inverse Problems Imaging 3 87-122)), which are well known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention, especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which as we show coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high-dimensional discretizations of Bayesian inverse problems with the considered Besov priors.
Exploring viable vacua of the Z 3-symmetric NMSSM
NASA Astrophysics Data System (ADS)
Beuria, Jyotiranjan; Chattopadhyay, Utpal; Datta, AseshKrishna; Dey, Abhishek
2017-04-01
We explore the vacua of the Z 3-symmetric Next-to-Minimal Supersymmetric Standard Model (NMSSM) and their stability by going beyond the simplistic paradigm that works with a tree-level neutral scalar potential and adheres to some specific flat directions in the field space. We work in the so-called phenomenological NMSSM (pNMSSM) scenario. Also, for our purpose, we adhere to a reasonably `natural' setup by requiring | μ eff| not too large. Key effects are demonstrated by first studying the profiles of this potential under various circumstances of physical interest via a semi-analytical approach. The results thereof are compared to the ones obtained from a dedicated package like Vevacious which further incorporates the thermal effects to the potential. Regions of the pNMSSM parameter space that render the desired symmetry breaking (DSB) vacuum absolutely stable, long- or short-lived (in relation to the age of the Universe) under quantum/thermal tunneling are delineated. Regions that result in the appearance of color and charge breaking (CCB) minima are also presented. It is demonstrated that light singlet scalars along with a light LSP (lightest supersymmetric particle) having an appreciable singlino admixture are compatible with a viable DSB vacuum. Their implications for collider experiments are commented upon.
Cai, Chuner; Wu, Lian; Li, Chunxia; He, Peimin; Li, Jie; Zhou, Jiahai
2011-01-01
Porphyra yezoensis is one of the most important and widely cultured seaweeds in China. Phycobiliproteins exhibit excellent spectroscopic properties and play versatile roles in the biomedical, food, cosmetics and chemical synthetic dye industries. Here, the purification and crystallization of phycoerythrin and phycocyanin, two phycobiliproteins extracted from P. yezoensis, are described. Using a novel protocol including co-precipitation with ammonium sulfate and hydroxyapatite column chromatography, both phycobiliproteins were produced on a large scale with improved quality and yield compared with those previously reported. Native PAGE analysis indicated that phycoerythrin and phycocyanin exist as (αβ)3 heterohexamers in solution. The crystals of phycoerythrin diffracted to 2.07 Å resolution and belonged to space group R3. The unit-cell parameters referred to hexagonal axes are a = b = 187.7, c = 59.7 Å, with nine (αβ)2 heterotetramers per unit cell. The crystals of phycocyanin diffracted to 2.70 Å resolution in space group P21. Matthews coefficient analysis shows that 10–19 (αβ) heterodimers of phycocyanin in the asymmetric unit would be reasonable. A self-rotation function calculation clarified this ambiguity and indicated that 12 (αβ) heterodimers of phycocyanin are assembled in the asymmetric unit. PMID:21543866
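The Matthews coefficient analysis mentioned above follows the standard formula V_M = V_cell / (Z · M). A small sketch using the hexagonal cell parameters quoted for phycoerythrin; the per-tetramer mass below is an assumed round figure for illustration, not a value from the paper:

```python
import math

# Hexagonal unit cell quoted for phycoerythrin (space group R3, hexagonal
# axes): a = b = 187.7 Angstrom, c = 59.7 Angstrom.
a, c = 187.7, 59.7
V_cell = a * a * c * math.sin(math.radians(60))  # hexagonal cell volume, A^3

Z = 9            # (alpha-beta)_2 heterotetramers per unit cell (from the abstract)
M = 80_000.0     # ASSUMED mass of one heterotetramer in Da (illustrative only)

V_M = V_cell / (Z * M)        # Matthews coefficient, A^3/Da
solvent = 1.0 - 1.23 / V_M    # standard solvent-content estimate
```

Typical protein crystals have V_M in roughly the 1.7–3.5 ų/Da range, which is the plausibility check behind the "10–19 heterodimers would be reasonable" statement in the abstract.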
Variations of cosmic large-scale structure covariance matrices across parameter space
NASA Astrophysics Data System (ADS)
Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte
2017-03-01
The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.
Multisensor Fusion for Change Detection
NASA Astrophysics Data System (ADS)
Schenk, T.; Csatho, B.
2005-12-01
Combining sensors that record different properties of a 3-D scene leads to complementary and redundant information. If fused properly, a more robust and complete scene description becomes available. Moreover, fusion facilitates automatic procedures for object reconstruction and modeling. For example, aerial imaging sensors, hyperspectral scanning systems, and airborne laser scanning systems generate complementary data. We describe how data from these sensors can be fused for such diverse applications as mapping surface erosion and landslides, reconstructing urban scenes, monitoring urban land use and urban sprawl, and deriving velocities and surface changes of glaciers and ice sheets. An absolute prerequisite for successful fusion is a rigorous co-registration of the sensors involved. We establish a common 3-D reference frame by using sensor invariant features. Such features are caused by the same object space phenomena and are extracted in multiple steps from the individual sensors. After extracting, segmenting and grouping the features into more abstract entities, we discuss how to automatically establish correspondences. This is followed by a brief description of rigorous mathematical models suitable for dealing with linear and area features. In contrast to traditional, point-based registration methods, lineal and areal features lend themselves to a more robust and more accurate registration. More importantly, the chance of automating the registration process increases significantly. The result of the co-registration of the sensors is a unique transformation between the individual sensors and the object space. This makes spatial reasoning over extracted information more versatile; reasoning can be performed in sensor space or in 3-D space, where domain knowledge about features and objects constrains reasoning processes, reduces the search space, and helps to make the problem well-posed.
We demonstrate the feasibility of the proposed multisensor fusion approach by detecting surface elevation changes on the Byrd Glacier, Antarctica, with aerial imagery from the 1980s and ICESat laser altimetry data from 2003-05. Change detection from such disparate data sets is an intricate fusion problem, beginning with sensor alignment and continuing on to reasoning with spatial information as to where changes occurred and to what extent.
SP_Ace: Stellar Parameters And Chemical abundances Estimator
NASA Astrophysics Data System (ADS)
Boeche, C.; Grebel, E. K.
2018-05-01
SP_Ace (Stellar Parameters And Chemical abundances Estimator) estimates the stellar parameters Teff, log g, [M/H], and elemental abundances. It employs 1D stellar atmosphere models in Local Thermodynamic Equilibrium (LTE). The code is highly automated and suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). A web service for calculating these values with the software is also available.
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required in model selection, is still time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
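For hidden Markov models, the dynamic-programming likelihood computation the abstract refers to is the forward algorithm. The sketch below (with hypothetical transition and emission matrices, not from the paper) shows the O(T·K²) cost per sequence that makes iterative likelihood evaluation expensive and motivates learning in a length-limited feature space:

```python
import numpy as np

def hmm_likelihood(pi, A, B, obs):
    """Forward algorithm: exact likelihood of an observation sequence under
    an HMM; O(T * K^2) for T observations and K hidden states."""
    alpha = pi * B[:, obs[0]]           # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate states, absorb next symbol
    return alpha.sum()

# Hypothetical 2-state, 2-symbol model (illustrative values only).
pi = np.array([0.6, 0.4])               # initial state distribution
A = np.array([[0.7, 0.3],               # state transition matrix
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],               # emission probabilities
              [0.2, 0.8]])
lik = hmm_likelihood(pi, A, B, [0, 1, 0])
```

Truncating the observation sequence (the "feature space which limits the length of data") reduces T directly, which is exactly the cost term the paper's vicarious-map condition is meant to justify.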
NASA Astrophysics Data System (ADS)
Saleem, M.; Resmi, L.; Misra, Kuntal; Pai, Archana; Arun, K. G.
2018-03-01
Short duration Gamma Ray Bursts (SGRBs) and their afterglows are among the most promising electromagnetic (EM) counterparts of binary neutron star (BNS) mergers. The afterglow emission is broad-band, visible across the entire electromagnetic window from γ-ray to radio frequencies. The flux evolution in these frequencies is sensitive to the multidimensional afterglow physical parameter space. Observations of gravitational waves (GW) from BNS mergers in spatial and temporal coincidence with SGRBs and their associated afterglows can provide valuable constraints on afterglow physics. We run simulations of GW-detected BNS events and, assuming that each of them is associated with a GRB jet that also produces an afterglow, investigate how detections or non-detections in X-ray, optical, and radio frequencies are influenced by the parameter space. We narrow down the regions of afterglow parameter space for a uniform top-hat jet model that would result in different detection scenarios. We list inferences that can be drawn on the physics of GRB afterglows from multimessenger astronomy with coincident GW-EM observations.
NASA Technical Reports Server (NTRS)
1981-01-01
The selection and training of cosmonauts and the preparation of the first Hungarian for flight on Soyuz 36 and its linking with Salyut 6 are described. Biographical sketches of the crew members, the reasons for wearing different types of spacesuits during flight in the space station, and the experiments conducted are discussed. Photographs are included.
Parenting in a Technological Age
ERIC Educational Resources Information Center
Smedts, Geertrui
2008-01-01
Technology is not just a tool but an amalgam of conceptual, institutional, and interactional issues that occupy the space of technical reason. In this space, parents' identity is becoming narrowed according to a limited conception in which the place of "caring" is in danger of being lost. Parents are increasingly required to adopt knowledge on…
14 CFR § 1275.105 - Conduct of the OIG investigation of research misconduct.
Code of Federal Regulations, 2014 CFR
2014-01-01
... research misconduct. § 1275.105 Section § 1275.105 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION RESEARCH MISCONDUCT § 1275.105 Conduct of the OIG investigation of research misconduct. (a) The OIG shall make every reasonable effort to complete a NASA research misconduct investigation and issue...
78 FR 39729 - Appraisal Subcommittee Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
... agree to submit to reasonable security measures. The meeting space is intended to accommodate public attendees. However, if the space will not accommodate all requests, the ASC may refuse attendance on that..., 2013. James R. Park, Executive Director. [FR Doc. 2013-15850 Filed 7-1-13; 8:45 am] BILLING CODE P ...
Pratt and Whitney Space Propulsion NPSS Usage
NASA Technical Reports Server (NTRS)
Olson, Dean
2004-01-01
This talk presents Pratt and Whitney's space division overview of the Numerical Propulsion System Simulation (NPSS). It examines their reasons for wanting to use the NPSS system, their past activities supporting its development, and their planned future usage. It also gives an overview of how different analysis tools fit into their overall product development.
Aggarwal, Ankush
2017-08-01
Motivated by the well-known result that stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space, which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon the new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.
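The "stiffness proportional to stress" property behind these exponential laws can be illustrated directly. A hedged sketch with hypothetical parameter values: the form σ = A(e^{Bε} − 1) is a common soft-tissue choice (not necessarily the one used in the paper), and its tangent stiffness is exactly dσ/dε = B(σ + A), i.e. linear in the stress:

```python
import numpy as np

# Hypothetical elastic parameters (illustrative only).
A, B = 10.0, 5.0
eps = np.linspace(0.0, 0.3, 301)              # strain grid
sigma = A * (np.exp(B * eps) - 1.0)           # exponential stress law

stiffness_numeric = np.gradient(sigma, eps)   # numerical tangent stiffness
stiffness_analytic = B * (sigma + A)          # closed form: B * (sigma + A)
```

Because only the products A·B and B set the response over a limited strain range, different (A, B) pairs can produce nearly identical stress-strain curves, which is one way to see the "lines of high covariance" the abstract describes.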
Nuclear Propulsion through Direct Conversion of Fusion Energy: The Fusion Driven Rocket
NASA Technical Reports Server (NTRS)
Slough, John; Pancotti, Anthony; Kirtley, David; Pihl, Christopher; Pfaff, Michael
2012-01-01
The future of manned space exploration and development of space depends critically on the creation of a dramatically more proficient propulsion architecture for in-space transportation. A very persuasive reason for investigating the applicability of nuclear power in rockets is the vast energy density gain of nuclear fuel when compared to chemical combustion energy. Current nuclear fusion efforts have focused on the generation of electric grid power and are wholly inappropriate for space transportation, as the application of a reactor-based fusion-electric system creates a colossal mass and heat-rejection problem for space application.
NASA Technical Reports Server (NTRS)
Griffin, Michael
2008-01-01
Speech topics include: Leadership in Space; Space Exploration: Real and Acceptable Reasons; Why Explore Space?; Space Exploration: Filling up the Canvas; Continuing the Voyage: The Spirit of Endeavour; Incorporating Space into Our Economic Sphere of Influence; The Role of Space Exploration in the Global Economy; Partnership in Space Activities; International Space Cooperation; National Strategy and the Civil Space Program; What the Hubble Space Telescope Teaches Us about Ourselves; The Rocket Team; NASA's Direction; Science and NASA; Science Priorities and Program Management; NASA and the Commercial Space Industry; NASA and the Business of Space; American Competitiveness: NASA's Role & Everyone's Responsibility; Space Exploration: A Frontier for American Collaboration; The Next Generation of Engineers; System Engineering and the "Two Cultures" of Engineering; Generalship of Engineering; NASA and Engineering Integrity; The Constellation Architecture; Then and Now: Fifty Years in Space; The Reality of Tomorrow; and Human Space Exploration: The Next 50 Years.
SU(5) with nonuniversal gaugino masses
NASA Astrophysics Data System (ADS)
Ajaib, M. Adeel
2018-02-01
We explore the sparticle spectroscopy of the supersymmetric SU(5) model with nonuniversal gaugino masses in light of latest experimental searches. We assume that the gaugino mass parameters are independent at the GUT scale. We find that the observed deviation in the anomalous magnetic moment of the muon can be explained in this model. The parameter space that explains this deviation predicts a heavy colored sparticle spectrum whereas the sleptons can be light. We also find a notable region of the parameter space that yields the desired relic abundance for dark matter. In addition, we analyze the model in light of latest limits from direct detection experiments and find that the parameter space corresponding to the observed deviation in the muon anomalous magnetic moment can be probed at some of the future direct detection experiments.
Interpreting the 750 GeV diphoton excess by the singlet extension of the Manohar-Wise model
NASA Astrophysics Data System (ADS)
Cao, Junjie; Han, Chengcheng; Shang, Liangliang; Su, Wei; Yang, Jin Min; Zhang, Yang
2016-04-01
The evidence of a new scalar particle X from the 750 GeV diphoton excess, and the absence of any other signal of new physics at the LHC so far suggest the existence of new colored scalars, which may be moderately light and thus can induce sizable Xgg and Xγγ couplings without resorting to very strong interactions. Motivated by this speculation, we extend the Manohar-Wise model by adding one gauge singlet scalar field. The resulting theory then predicts one singlet dominated scalar ϕ as well as three kinds of color-octet scalars, which can mediate through loops the ϕgg and ϕγγ interactions. After fitting the model to the diphoton data at the LHC, we find that in reasonable parameter regions the excess can be explained at the 1σ level by the process gg → ϕ → γγ, and the best points predict the central value of the excess rate with χ²_min = 2.32, which corresponds to a p-value of 0.68. We also consider the constraints from various LHC Run I signals, and we conclude that, although these constraints are powerful in excluding the parameter space of the model, the best points are still experimentally allowed.
NASA Astrophysics Data System (ADS)
Rybus, Tomasz; Seweryn, Karol
2016-03-01
All devices designed to be used in space must be thoroughly tested in relevant conditions. For several classes of devices the reduced gravity conditions are the key factor. In early stages of development, and later for financial reasons, the tests need to be done on Earth. However, in Earth conditions it is impossible to obtain a different gravity field that is independent of all linear and rotational spatial coordinates. Therefore, various test-bed systems are used, with their design driven by the device's specific needs. One such test-bed is the planar air-bearing microgravity simulator. In this approach, the tested objects (e.g., manipulators intended for on-orbit operations or vehicles simulating satellites in a close formation flight) are mounted on planar air-bearings that allow almost frictionless motion on a flat surface, thus simulating microgravity conditions in two dimensions. In this paper we present a comprehensive review of research activities related to planar air-bearing microgravity simulators, demonstrating achievements of the most active research groups and describing the newest trends and ideas, such as tests of landing gears for low-g bodies. Major design parameters of air-bearing test-beds are also reviewed and a list of notable existing test-beds is presented.
NASA Astrophysics Data System (ADS)
Zhang, Tingxian; Xie, Luyou; Li, Jiguang; Lu, Zehuang
2017-07-01
We calculated the magnetic dipole and electric quadrupole hyperfine interaction constants of the 3s3p ^3P_1^o and ^1P_1^o states, and the isotope shift factors (mass shift and field shift) for transitions from these two states to the ground state 3s^2 ^1S_0, in Al+ ions using the multiconfiguration Dirac-Hartree-Fock method. The effects of the electron correlations and the Breit interaction on these physical quantities were investigated in detail based on the active space approach. It is found that the core-core and the higher order correlations are considerable for evaluating the uncertainties of the atomic parameters concerned. The uncertainties of the hyperfine interaction constants in this work are less than 1.6%. Although the isotope shift factors are highly sensitive to the electron correlations, reasonable uncertainties were obtained by exploring the effects of the electron correlations. Moreover, we found that the relativistic nuclear recoil corrections to the mass shift factors are very small and insensitive to the electron correlations for Al+. The atomic parameters presented in this work are valuable for extracting the nuclear electric quadrupole moments and the mean-square charge radii of Al isotopes.
System implications of aperture-shade design for the SIRTF Observatory
NASA Technical Reports Server (NTRS)
Lee, J. H.; Brooks, W. F.; Maa, S.
1987-01-01
The 1-m-aperture Space Infrared Telescope Facility (SIRTF) will operate with a sensitivity limited only by the zodiacal background. This sensitivity requirement places severe restrictions on the amount of stray light which can reach the focal plane from off-axis sources such as the sun or earth limb. In addition, radiation from these sources can degrade the lifetime of the telescope and instrument cryogenic system which is now planned for two years before the first servicing. Since the aperture of the telescope represents a break in the telescope insulation system and is effectively the first element in the optical train, the aperture shade is a key system component. The mass, length, and temperature of the shade should be minimized to reduce system cost while maximizing the telescope lifetime and stray light performance. The independent geometric parameters that characterize an asymmetrical shade for a 600 km, 28 deg orbit were identified, and the system sensitivity to the three most important shade parameters were explored. Despite the higher heat loads compared to previously studied polar orbit missions, the analysis determined that passive radiators of a reasonable size are sufficient to meet the system requirements. An optimized design for the SIRTF mission, based on the sensitivity analysis, is proposed.
Implementation of two-component advective flow solution in XSPEC
NASA Astrophysics Data System (ADS)
Debnath, Dipak; Chakrabarti, Sandip K.; Mondal, Santanu
2014-05-01
Spectral and temporal properties of black hole candidates can be explained reasonably well using the Chakrabarti-Titarchuk solution of two-component advective flow (TCAF). This model requires two accretion rates, namely the Keplerian disc accretion rate and the halo accretion rate, the latter being composed of a sub-Keplerian, low-angular-momentum flow which may or may not develop a shock. In this solution, the relevant parameter is the relative importance of the halo (which creates the Compton cloud region) rate with respect to the Keplerian disc rate (soft photon source). Though this model has been used earlier to manually fit data of several black hole candidates quite satisfactorily, for the first time, we made it user friendly by implementing it into the XSPEC software of Goddard Space Flight Center (GSFC)/NASA. This enables any user to extract physical parameters of the accretion flows, such as two accretion rates, the shock location, the shock strength, etc., for any black hole candidate. We provide some examples of fitting a few cases using this model. Most importantly, unlike any other model, we show that TCAF is capable of predicting timing properties from the spectral fits, since in TCAF, a shock is responsible for deciding spectral slopes as well as quasi-periodic oscillation frequencies.
Modeling the Gravitational Potential of a Cosmological Dark Matter Halo with Stellar Streams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanderson, Robyn E.; Hartke, Johanna; Helmi, Amina, E-mail: robyn@astro.columbia.edu
2017-02-20
Stellar streams result from the tidal disruption of satellites and star clusters as they orbit a host galaxy, and can be very sensitive probes of the gravitational potential of the host system. We select and study narrow stellar streams formed in a Milky-Way-like dark matter halo of the Aquarius suite of cosmological simulations, to determine if these streams can be used to constrain the present day characteristic parameters of the halo’s gravitational potential. We find that orbits integrated in both spherical and triaxial static Navarro–Frenk–White potentials reproduce the locations and kinematics of the various streams reasonably well. To quantify this further, we determine the best-fit potential parameters by maximizing the amount of clustering of the stream stars in the space of their actions. We show that using our set of Aquarius streams, we recover a mass profile that is consistent with the spherically averaged dark matter profile of the host halo, although we ignored both triaxiality and time evolution in the fit. This gives us confidence that such methods can be applied to the many streams that will be discovered by the Gaia mission to determine the gravitational potential of our Galaxy.
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
Raevsky, O A; Perlovich, G L; Schaper, K-J
2007-01-01
On the basis of octanol solubility data (log S(o)) for 218 structurally diverse solid chemicals it was shown that the exclusive consideration of melting points did not provide satisfactory results in the quantitative prediction of this parameter (s = 0.92). The application of HYBOT physicochemical descriptors separately (s = 0.94) and together with melting points (s = 0.70) in the framework of a common regression model also was not successful, although contributions of volume-related and H-bond terms to solubility in octanol were identified. It was proposed that the main reason for such behaviour was the different crystal lattice interaction of different classes of chemicals. Successful calculations of the solubility in octanol of chemicals of interest were performed on the basis of the experimental solubility of structurally/physicochemically/numerically similar nearest neighbours with consideration of their difference in physicochemical parameters (molecular polarisability, H-bond acceptor and donor factors (s = 0.66)) and of these descriptors together with melting point differences (s = 0.38). Good results were obtained for all compounds having nearest neighbours with sufficient similarity, expressed by Tanimoto indexes, and by distances in the scaled 3D descriptor space. Obviously the success of this approach depends on the size of the database.
Interpreting Gas Production Decline Curves By Combining Geometry and Topology
NASA Astrophysics Data System (ADS)
Ewing, R. P.; Hu, Q.
2014-12-01
Shale gas production forms an increasing fraction of domestic US energy supplies, but individual gas production wells show steep production declines. Better understanding of this production decline would allow better economic forecasting; better understanding of the reasons behind the decline would allow better production management. Yet despite these incentives, production decline curves remain poorly understood, and current analyses range from Arps' purely empirical equation to new sophisticated approaches requiring multiple unavailable parameters. Models often fail to capture salient features: for example, in log-log space many wells decline with an exponent markedly different from the -0.5 expected from diffusion, and often show a transition from one decline mode to another. We propose a new approach based on the assumption that the rate-limiting step is gas movement from the matrix to the induced fracture network. The matrix is represented as an assemblage of equivalent spheres (geometry), with low matrix pore connectivity (topology) that results in a distance-dependent accessible porosity profile given by percolation theory. The basic theory has just 2 parameters: the sphere size distribution (geometry), and the crossover distance (topology) that characterizes the porosity distribution. The theory is readily extended to include e.g. alternative geometries and bi-modal size distributions. Comparisons with historical data are promising.
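The Arps baseline the abstract contrasts itself against is a one-line empirical formula. A hedged sketch with hypothetical well parameters (the geometry/topology model itself is not reproduced here):

```python
import numpy as np

def arps_rate(t, q_i, D_i, b):
    """Arps' empirical hyperbolic decline: q(t) = q_i / (1 + b*D_i*t)**(1/b).
    In log-log space the late-time slope tends to -1/b."""
    return q_i / (1.0 + b * D_i * t) ** (1.0 / b)

# Hypothetical well: initial rate, initial decline rate, hyperbolic exponent.
q_i, D_i, b = 100.0, 0.8, 0.5
t = np.linspace(0.0, 10.0, 11)
q = arps_rate(t, q_i, D_i, b)
```

The fixed late-time log-log slope of −1/b is exactly the kind of rigidity the abstract criticizes: observed wells show slopes away from −0.5 and transitions between decline modes that a single (q_i, D_i, b) triple cannot capture.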
Lu, Huancai; Wu, Sean F
2009-03-01
The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics, which are taken as the basis functions in the HELS formulation, yet the analytic solutions to vibroacoustic responses of a baffled plate are readily available so the accuracy of reconstruction can be checked accurately. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts of various parameters such as number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and ratio of measurement aperture size to the area of source surface of reconstruction on the resultant accuracy of reconstruction are examined.
Transport regimes spanning magnetization-coupling phase space
NASA Astrophysics Data System (ADS)
Baalrud, Scott D.; Daligault, Jérôme
2017-10-01
The manner in which transport properties vary over the entire parameter-space of coupling and magnetization strength is explored. Four regimes are identified based on the relative size of the gyroradius compared to other fundamental length scales: the collision mean free path, Debye length, distance of closest approach, and interparticle spacing. Molecular dynamics simulations of self-diffusion and temperature anisotropy relaxation spanning the parameter space are found to agree well with the predicted boundaries. Comparison with existing theories reveals regimes where they succeed, where they fail, and where no theory has yet been developed.
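The length-scale comparison described above can be made concrete for a weakly coupled electron plasma. A hedged sketch with hypothetical plasma conditions, using only textbook formulas (the regime boundaries themselves are the paper's result and are not reproduced):

```python
import numpy as np

# Physical constants (SI).
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
kB   = 1.380649e-23       # Boltzmann constant, J/K
me   = 9.1093837015e-31   # electron mass, kg

# Hypothetical plasma conditions (illustrative only).
n, T, Bfield = 1e20, 1e6, 1.0          # density m^-3, temperature K, field T
kT = kB * T

debye      = np.sqrt(eps0 * kT / (n * e**2))        # Debye length
closest    = e**2 / (4 * np.pi * eps0 * kT)         # distance of closest approach
spacing    = (3.0 / (4 * np.pi * n)) ** (1.0 / 3.0) # interparticle (Wigner-Seitz) spacing
gyroradius = np.sqrt(me * kT) / (e * Bfield)        # thermal electron gyroradius

gamma = closest / spacing   # Coulomb coupling strength; << 1 means weakly coupled
```

Comparing the gyroradius against each of these scales in turn is, per the abstract, what delineates the four transport regimes across the magnetization-coupling parameter space.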
System study of the utilization of space for carbon dioxide research
NASA Technical Reports Server (NTRS)
Glaser, P. E.; Vranka, R.
1985-01-01
The objectives included: compiling and selecting the Scientific Data Requirements (SDRs) pertinent to the CO2 Research Program that have the potential to be more successfully achieved by utilizing space-based sensor systems; assessing the potential of space technology for monitoring those parameters which may be important first indicators of climate change due to increasing atmospheric CO2, including the behavior of the West Antarctic ice sheet; and determining the potential of space technology for monitoring those parameters to improve understanding of the coupling between CO2 and cloud cover.
Two particle model for studying the effects of space-charge force on strong head-tail instabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Yong Ho; Chao, Alexander Wu; Blaskiewicz, Michael M.
In this paper, we present a new two particle model for studying the strong head-tail instabilities in the presence of the space-charge force. It is a simple expansion of the well-known two particle model for strong head-tail instability and is still analytically solvable. No chromaticity effect is included. It leads to a formula for the growth rate as a function of the two dimensionless parameters: the space-charge tune shift parameter (normalized by the synchrotron tune) and the wakefield strength, Upsilon. The three-dimensional contour plot of the growth rate as a function of those two dimensionless parameters reveals stopband structures. Many simulation results generally indicate that a strong head-tail instability can be damped by a weak space-charge force, but the beam becomes unstable again when the space-charge force is further increased. The new two particle model indicates a similar behavior. In weak space-charge regions, additional tune shifts by the space-charge force dissolve the mode coupling. As the space-charge force is increased, they conversely restore the mode coupling, but then a further increase of the space-charge force decouples the modes again. Lastly, this mode coupling/decoupling behavior creates the stopband structures.
Two particle model for studying the effects of space-charge force on strong head-tail instabilities
Chin, Yong Ho; Chao, Alexander Wu; Blaskiewicz, Michael M.
2016-01-19
In this paper, we present a new two particle model for studying the strong head-tail instabilities in the presence of the space-charge force. It is a simple expansion of the well-known two particle model for strong head-tail instability and is still analytically solvable. No chromaticity effect is included. It leads to a formula for the growth rate as a function of the two dimensionless parameters: the space-charge tune shift parameter (normalized by the synchrotron tune) and the wakefield strength, Upsilon. The three-dimensional contour plot of the growth rate as a function of those two dimensionless parameters reveals stopband structures. Manymore » simulation results generally indicate that a strong head-tail instability can be damped by a weak space-charge force, but the beam becomes unstable again when the space-charge force is further increased. The new two particle model indicates a similar behavior. In weak space-charge regions, additional tune shifts by the space-charge force dissolve the mode coupling. As the space-charge force is increased, they conversely restore the mode coupling, but then a further increase of the space-charge force decouples the modes again. Lastly, this mode coupling/decoupling behavior creates the stopband structures.« less
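The mode-coupling mechanism described above can be illustrated with a toy one-turn (one synchrotron period) transfer map. This is NOT the authors' model: the map below is a generic two-particle sketch in which `upsilon` stands in for the wakefield strength and `chi` for an extra space-charge phase advance of the trailing particle; both parameter definitions are illustrative assumptions.

```python
import numpy as np

def growth_rate(upsilon, chi):
    """Growth rate per synchrotron period of a toy two-particle map.

    upsilon : dimensionless wake kick from the leading to the trailing
              particle per half synchrotron period (illustrative)
    chi     : extra betatron phase advance of the trailing particle per
              half period, standing in for a space-charge tune shift
              (illustrative)
    """
    # First half period: particle 1 leads, particle 2 trails.
    A = np.array([[1.0, 0.0],
                  [0.5j * upsilon, np.exp(1j * chi)]])
    # Second half period: the two particles exchange roles.
    B = np.array([[np.exp(1j * chi), 0.5j * upsilon],
                  [0.0, 1.0]])
    M = B @ A  # map over one full synchrotron period
    return float(np.log(np.max(np.abs(np.linalg.eigvals(M)))))
```

Scanning `growth_rate` over a grid of `(upsilon, chi)` values produces the kind of two-parameter contour plot the abstract describes; regions where the eigenvalues leave the unit circle are the stopbands.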
Space Weather and the State of Cardiovascular System of a Healthy Human Being
NASA Astrophysics Data System (ADS)
Samsonov, S. N.; Manykina, V. I.; Krymsky, G. F.; Petrova, P. G.; Palshina, A. M.; Vishnevsky, V. V.
The term "space weather" characterizes the state of the near-Earth space environment. The human organism is an open system, so changes in environmental conditions, including those in near-Earth space, influence human health. In recent years many works have been devoted to the effect of space weather on life on Earth, with the degree of this effect estimated at anywhere from zero to apocalyptic. To reveal the real effect of space weather on human health, the international Russian-Ukrainian experiment "Geliomed" has been carried out since 2005 (http://geliomed.immsp.kiev.ua) [Vishnevsky et al., 2009]. Analysis of the observational data has demonstrated the synchronism and global character of this effect: space weather parameters manifested simultaneously in the state of the cardiovascular systems of volunteer groups separated by more than 6000 km. Responses of the volunteers' cardiovascular systems to changes in space weather parameters were observed even at insignificant disturbances of the Earth's geomagnetic field. Conversely, even at very considerable disturbances of space weather parameters a healthy person felt no painful symptoms, although measurements of objective physiological indices showed changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matysiak, W; Yeung, D; Hsi, W
2014-06-01
Purpose: We present a study of the dosimetric consequences, for doses in water, of modeling in-air proton fluence independently along the principal axes for rotated elliptical spots. Methods: The phase-space parameters for modeling in-air fluence are the position sigma for the spatial distribution, the angle sigma for the angular distribution, and the correlation between the position and angle distributions. Proton spots of the McLaren proton therapy system were measured at five locations near the isocenter for energies of 180 MeV and 250 MeV. An elongated elliptical spot rotated with respect to the principal axes was observed for 180 MeV, while a circular-like spot was observed for 250 MeV. In the first approach, the phase-space parameters were derived in the principal axes without rotation. In the second approach, the phase-space parameters were derived in a reference frame with axes rotated to coincide with the major axes of the elliptical spot. Monte Carlo simulations with the phase-space parameters derived from both approaches were performed to tally doses in water, and the results were analyzed. Results: For the rotated elliptical 180 MeV spots, the position sigmas were 3.6 mm and 3.2 mm in the principal axes, but were 4.3 mm and 2.0 mm when the reference frame was rotated. Measured spots fitted the uncorrelated 2D Gaussian poorly, but the quality of the fit was significantly improved after the reference frame was rotated. As a result, phase-space parameters in the rotated frame were more appropriate for modeling the in-air proton fluence of 180 MeV protons. Considerable differences were observed in the Monte Carlo simulated dose distributions in water with phase-space parameters obtained with the two approaches. Conclusion: For rotated elliptical proton spots, phase-space parameters obtained in the rotated reference frame are better for modeling in-air proton fluence, and can be introduced into treatment planning systems.
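The rotated-frame sigmas can be recovered from spot data by eigendecomposition of the sample covariance matrix. A minimal sketch with synthetic data (the 4.3 mm / 2.0 mm sigmas match the abstract, but the 30-degree tilt angle and sample size are assumed for illustration):

```python
import numpy as np

# Hypothetical spot: position sigmas 4.3 mm and 2.0 mm along the
# major/minor axes, rotated 30 degrees relative to the lab axes.
rng = np.random.default_rng(0)
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
cov_true = R @ np.diag([4.3 ** 2, 2.0 ** 2]) @ R.T
xy = rng.multivariate_normal([0.0, 0.0], cov_true, size=200_000)

# An uncorrelated fit in the lab frame only sees the marginal sigmas;
# eigendecomposition of the sample covariance recovers the sigmas in
# the rotated (major/minor-axis) frame plus the tilt angle.
sig_lab = xy.std(axis=0)                       # lab-frame sigmas
evals, evecs = np.linalg.eigh(np.cov(xy.T))    # ascending eigenvalues
sig_rot = np.sqrt(evals)                       # rotated-frame sigmas
tilt = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major axis
```

The eigenvector basis plays the role of the rotated reference frame in the abstract; `sig_rot` are the sigmas one would feed to an uncorrelated Gaussian model in that frame.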
A Tracker for Broken and Closely-Spaced Lines
1997-10-01
to combine the current level flow estimate and the previous level flow estimate. However, the result is still not good enough, for several reasons. First, ... geometric attributes are not good enough to discriminate line segments when they are crowded, parallel, and closely spaced. On the other ... level information [10]. Still, it is not good at dealing with closely-spaced line segments, because it requires a proper size of square neighborhood to ...
Biomedical engineering strategies in system design space.
Savageau, Michael A
2011-04-01
Modern systems biology and synthetic bioengineering face two major challenges in relating properties of the genetic components of a natural or engineered system to its integrated behavior. The first is the fundamental unsolved problem of relating the digital representation of the genotype to the analog representation of the parameters for the molecular components. For example, knowing the DNA sequence does not allow one to determine the kinetic parameters of an enzyme. The second is the fundamental unsolved problem of relating the parameters of the components and the environment to the phenotype of the global system. For example, knowing the parameters does not tell one how many qualitatively distinct phenotypes are in the organism's repertoire or the relative fitness of the phenotypes in different environments. These also are challenges for biomedical engineers as they attempt to develop therapeutic strategies to treat pathology or to redirect normal cellular functions for biotechnological purposes. In this article, the second of these fundamental challenges will be addressed, and the notion of a "system design space" for relating the parameter space of components to the phenotype space of bioengineering systems will be focused upon. First, the concept of a system design space will be motivated by introducing one of its key components from an intuitive perspective. Second, a simple linear example will be used to illustrate a generic method for constructing the design space in which qualitatively distinct phenotypes can be identified and counted, their fitness analyzed and compared, and their tolerance to change measured. Third, two examples of nonlinear systems from different areas of biomedical engineering will be presented. 
Finally, after giving reference to a few other applications that have made use of the system design space approach to reveal important design principles, some concluding remarks concerning challenges and opportunities for further development will be made.
Design of double fuzzy clustering-driven context neural networks.
Kim, Eun-Hu; Oh, Sung-Kwun; Pedrycz, Witold
2018-08-01
In this study, we introduce a novel category of double fuzzy clustering-driven context neural networks (DFCCNNs). The study is focused on the development of advanced design methodologies for redesigning the structure of conventional fuzzy clustering-based neural networks. Conventional fuzzy clustering-based neural networks typically focus on dividing the input space into several local spaces (implied by clusters). In contrast, the proposed DFCCNNs take into account two distinct local spaces, called context and cluster spaces, respectively. Cluster space refers to a local space positioned in the input space, whereas context space concerns a local space formed in the output space. Through partitioning the output space into several local spaces, each context space is used as the desired (target) local output to construct local models. To accomplish this, the proposed network includes a new context layer for reasoning about the context space in the output space. In this sense, Fuzzy C-Means (FCM) clustering is useful for forming local spaces in both the input and output spaces. The first is used to form clusters and train the weights positioned between the input and hidden layers, whereas the other is applied to the output space to form context spaces. The key features of the proposed DFCCNNs can be enumerated as follows: (i) The parameters between the input layer and hidden layer are built through FCM clustering. The connections (weights) are specified as constant terms, being in fact the centers of the clusters. The membership functions (represented through the partition matrix) produced by FCM are used as activation functions located at the hidden layer of "conventional" neural networks. (ii) Following the hidden layer, a context layer is formed to approximate the context space of the output variable, and each node in the context layer represents an individual local model.
The outputs of the context layer are specified as a combination of weights, formed as a linear function, and the outputs of the hidden layer. The weights are updated using the least square estimation (LSE)-based method. (iii) At the output layer, the outputs of the context layer are decoded to produce the corresponding numeric output. Here a weighted average is used, and the weights are also adjusted with the use of the LSE scheme. From the viewpoint of performance improvement, the proposed design methodologies are discussed and evaluated with the aid of benchmark machine learning datasets. The experiments show that the generalization abilities of the proposed DFCCNNs are better than those of the conventional FCNNs reported in the literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
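The FCM step that the architecture applies twice (once to partition the input space into clusters, once to partition the output space into contexts) can be sketched as follows. This is a minimal generic FCM implementation, not the authors' code; hyperparameters (`m`, iteration count) are assumed.

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain Fuzzy C-Means: returns (centers, membership matrix U).

    A minimal sketch of the clustering step used in the DFCCNN design;
    fuzzifier m and iteration budget are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        # Centers: membership-weighted means of the data.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # guard against d == 0
        # Standard FCM membership update.
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Demo on two well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(10.0, 0.5, (50, 2))])
centers, U = fcm(X, 2)
```

In the DFCCNN reading, the rows of `U` from an input-space run would serve as hidden-layer activations, while a second run on the output variable would define the context spaces.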
Bursting endemic bubbles in an adaptive network
NASA Astrophysics Data System (ADS)
Sherborne, N.; Blyuss, K. B.; Kiss, I. Z.
2018-04-01
The spread of an infectious disease is known to change people's behavior, which in turn affects the spread of disease. Adaptive network models that account for both epidemic and behavioral change have found oscillations, but in an extremely narrow region of the parameter space, which contrasts with intuition and available data. In this paper we propose a simple susceptible-infected-susceptible epidemic model on an adaptive network with time-delayed rewiring, and show that oscillatory solutions are now present in a wide region of the parameter space. Altering the transmission or rewiring rates reveals the presence of an endemic bubble—an enclosed region of the parameter space where oscillations are observed.
NASA Astrophysics Data System (ADS)
Xu, Wenfu; Hu, Zhonghua; Zhang, Yu; Liang, Bin
2017-03-01
After being launched into space to perform its tasks, the inertia parameters of a space robotic system may change due to fuel consumption, hardware reconfiguration, target capturing, and so on. For precision control and simulation, these parameters must be identified on orbit. This paper proposes an effective method for identifying the complete inertia parameters (including the mass, inertia tensor and center-of-mass position) of a space robotic system. The key to the method is to identify two types of simple dynamics systems: equivalent single-body and two-body systems. For the former, all of the joints are locked into a designed configuration and the thrusters are used for orbital maneuvering. The objective function for optimization is defined in terms of the acceleration and velocity of the equivalent single body. For the latter, only one joint is unlocked and driven to move along a planned (exciting) trajectory in free-floating mode. The objective function is defined based on the linear and angular momentum equations. The parameter identification problems are then transformed into non-linear optimization problems. The Particle Swarm Optimization (PSO) algorithm is applied to determine the optimal parameters, i.e. the complete dynamic parameters of the two equivalent systems. By sequentially unlocking the 1st to nth joints (or the nth to 1st joints), the mass properties of bodies 0 to n (or n to 0) are completely identified. The proposed method needs only simple dynamics equations for identification, and the excitation motion (orbit maneuvering and joint motion) is easily realized. Moreover, the method does not require prior knowledge of the mass properties of any body. It is general and practical for identifying a space robotic system on orbit.
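The PSO step can be sketched generically. The momentum-based objective functions from the paper are not reproduced here; a simple quadratic objective stands in, and the swarm hyperparameters (`w`, `c1`, `c2`) are conventional textbook values, not the paper's.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=300,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization (global-best variant)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()                                   # personal bests
    pval = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pval)].copy()                  # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())

# Stand-in objective: distance of trial inertia parameters from a
# "true" value of zero (a sphere function), in place of the momentum
# residual the paper minimizes.
best_x, best_val = pso(lambda p: float(np.sum(p ** 2)),
                       (np.full(3, -5.0), np.full(3, 5.0)))
```

In the paper's setting, `objective` would evaluate the mismatch between the measured and predicted linear/angular momentum (or single-body motion) for a candidate set of inertia parameters.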
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1992-01-01
The present volume on cooperative intelligent robotics in space discusses sensing and perception, Space Station Freedom robotics, cooperative human/intelligent robot teams, and intelligent space robotics. Attention is given to space robotics reasoning and control, ground-based space applications, intelligent space robotics architectures, free-flying orbital space robotics, and cooperative intelligent robotics in space exploration. Topics addressed include proportional proximity sensing for telerobots using coherent laser radar, ground operation of the mobile servicing system on Space Station Freedom, teleprogramming a cooperative space robotic workcell for space stations, and knowledge-based task planning for the special-purpose dextrous manipulator. Also discussed are dimensions of complexity in learning from interactive instruction, an overview of the dynamic predictive architecture for robotic assistants, recent developments at the Goddard engineering testbed, and parallel fault-tolerant robot control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammed, Irshad; Gnedin, Nickolay Y.
Baryonic effects are among the most severe systematics in the tomographic analysis of weak lensing data, which is the principal probe in many future generations of cosmological surveys such as LSST and Euclid. Modeling or parameterizing these effects is essential in order to extract valuable constraints on cosmological parameters. In a recent paper, Eifler et al. (2015) suggested a reduction technique for baryonic effects by conducting a principal component analysis (PCA) and removing the largest baryonic eigenmodes from the data. In this article, we carry the investigation further and address two critical aspects. First, we performed the analysis by separating the simulations into training and test sets, computing a minimal set of principal components from the training set and examining the fits on the test set. We found that using only four parameters, corresponding to the four largest eigenmodes of the training set, the test sets can be fitted thoroughly with an RMS of ~0.0011. Second, we explored the significance of outliers, the most exotic/extreme baryonic scenarios, in this method. We found that excluding the outliers from the training set results in a relatively bad fit and degrades the RMS by nearly a factor of 3. Therefore, for a direct application of this method to the tomographic analysis of weak lensing data, the principal components should be derived from a training set that comprises adequately exotic but reasonable models, such that reality is included inside the parameter domain sampled by the training set. The baryonic effects can then be parameterized as the coefficients of these principal components and marginalized over in the cosmological parameter space.
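The train/test PCA procedure can be sketched with synthetic data. The arrays below are illustrative stand-ins for the suite of baryonic power-spectrum signatures (the data are rank-4 by construction, so four components fit the test set almost perfectly, loosely mimicking the reported behavior); the sizes and the RMS values are not the paper's.

```python
import numpy as np

# Synthetic stand-in: rows are baryonic scenarios, columns are bins.
rng = np.random.default_rng(0)
basis = rng.normal(size=(4, 60))            # 4 hidden "eigenmodes"
train = rng.normal(size=(40, 4)) @ basis    # training scenarios
test = rng.normal(size=(10, 4)) @ basis     # held-out scenarios

# Principal components of the training set.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)

def rms_fit(k):
    """RMS residual of the test set fitted with the k largest modes."""
    P = Vt[:k]                               # top-k principal components
    recon = (test - mean) @ P.T @ P + mean   # project and reconstruct
    return float(np.sqrt(np.mean((test - recon) ** 2)))
```

`rms_fit(4)` is essentially zero here because the synthetic data are exactly rank 4; with real simulations the residual measures how well the training set's eigenmodes span the held-out baryonic scenarios.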
Q estimation of seismic data using the generalized S-transform
NASA Astrophysics Data System (ADS)
Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming
2016-12-01
Quality factor, Q, is a parameter that characterizes energy dissipation during seismic wave propagation. Reservoir pores are one of the main factors that affect the value of Q. In particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on the spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and S-transform (ST) contaminate the amplitude spectra, because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of the window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second-order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and the source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q values estimated from field data acquired in western China compare reasonably with an oil-producing well location.
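The fitting step is the same in both the classical and the generalized version: a straight-line fit whose slope gives 1/Q. A minimal sketch of the classical (window-free) spectral-ratio method, where ln[A2(f)/A1(f)] = -π f Δt / Q + const; the paper's refinement replaces f with the parameter γ, which also absorbs the GST window and source wavelet. The synthetic spectrum and the values of Δt and Q below are assumptions for the demo.

```python
import numpy as np

def estimate_q(freqs, A1, A2, dt):
    """Classical spectral-ratio Q: fit ln(A2/A1) vs f, slope = -pi*dt/Q."""
    slope = np.polyfit(freqs, np.log(A2 / A1), 1)[0]
    return -np.pi * dt / slope

# Synthetic check: attenuate a Ricker-like source spectrum with Q = 50
# over a travel time of 0.4 s between the two measurement levels.
f = np.linspace(5.0, 80.0, 100)
A1 = (f / 30.0) ** 2 * np.exp(-((f / 30.0) ** 2))    # source spectrum
dt, Q_true = 0.4, 50.0
A2 = A1 * np.exp(-np.pi * f * dt / Q_true)           # attenuated spectrum
```

Because the synthetic ratio is exactly log-linear, the fit recovers Q to machine precision; real spectra (and window contamination, the paper's subject) would deviate from this line.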
40 CFR 98.456 - Data reporting requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., of Equation SS-6 of this subpart. (t) For any missing data, you must report the reason the data were missing, the parameters for which the data were missing, the substitute parameters used to estimate...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yunpeng; Li, En, E-mail: lien@uestc.edu.cn; Guo, Gaofeng
2014-09-15
A pair of spot-focusing horn lens antennas is the key component in a free-space measurement system. The electromagnetic constitutive parameters of a planar sample are determined using transmitted and reflected electromagnetic beams. These parameters are obtained from the scattering parameters measured by a microwave network analyzer, the thickness of the sample, and the wavelength of the focused beam on the sample. Free-space techniques introduced by most papers take the focused wavelength to be the free-space wavelength. In fact, however, the incident wave projected by a lens onto the sample approximates a Gaussian beam; thus there is an elongation of the wavelength in the focused beam, and this elongation should be taken into consideration in dielectric and magnetic measurements. In this paper, the elongation of the wavelength is analyzed and measured. Measurement results show that the focused wavelength in the vicinity of the focus has an elongation of 1%–5% relative to the free-space wavelength. The elongation's influence on the measured permittivity and permeability has been investigated. Numerical analyses show that the elongation of the focused wavelength can increase the measured value of the permeability relative to the traditionally measured value; the permittivity, which is affected by several parameters, may increase or decrease relative to the traditionally measured value.
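One textbook contribution to this elongation comes from the Gouy phase of a Gaussian beam: on axis the effective wavenumber is k_eff = k - dψ/dz, which at the focus gives k_eff = k - 1/z_R. The sketch below computes the resulting fractional elongation; the beam parameters are illustrative assumptions, not the measured values from the paper.

```python
import numpy as np

def axial_wavelength_elongation(lam, w0):
    """Fractional on-axis wavelength elongation at a Gaussian beam focus.

    Derived from the Gouy phase: k_eff = k - 1/z_R at the waist, so
    lambda_eff / lambda = k / (k - 1/z_R).  lam and w0 in meters.
    """
    z_r = np.pi * w0 ** 2 / lam        # Rayleigh range
    k = 2.0 * np.pi / lam
    return k / (k - 1.0 / z_r) - 1.0

# Example (assumed numbers): a 30 GHz beam (lam = 1 cm) focused to a
# 1.5 cm waist, a waist only a few wavelengths wide.
elong = axial_wavelength_elongation(0.01, 0.015)
```

For waists of a few wavelengths this simple estimate lands in the few-percent range, consistent in order of magnitude with the 1%–5% the authors measure; tighter focusing (smaller w0) increases the elongation.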
Latent resonance in tidal rivers, with applications to River Elbe
NASA Astrophysics Data System (ADS)
Backhaus, Jan O.
2015-11-01
We describe a systematic investigation of resonance in tidal rivers, and of river oscillations influenced by resonance. That is, we explore the grey zone between absent and fully developed resonance. The data for this study are the results of a one-dimensional numerical channel model applied to a four-dimensional parameter space comprising geometry, i.e. the lengths and depths of rivers, and varying dissipation and forcing. Similarity between real rivers and channels from the parameter space is obtained with the help of a 'run-time depth'. We present a model channel which reproduces the tidal oscillations of the River Elbe at Hamburg, Germany, with an accuracy of a few centimetres. The parameter space contains resonant regions and regions of 'latent resonance'. The latter denotes tidal oscillations that are elevated yet in juvenile rather than full resonance. Dissipation reduces the amplitudes of resonance while creating latent resonance; that is, the energy of resonance radiates into areas of parameter space where the periods of eigen-oscillations are well separated from the period of the forcing tide. Increased forcing enhances this redistribution of resonance in parameter space. The River Elbe is diagnosed as being in a state of anthropogenic latent resonance as a consequence of ongoing deepening by dredging. Deepening the river, in conjunction with the expected sea-level rise, will inevitably cause increasing tidal ranges. As a rule of thumb, we found that 1 m of deepening would cause a 0.5 m increase in tidal range.
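The qualitative sense of the diagnosis can be reproduced with the frictionless quarter-wavelength formula for a co-oscillating channel closed at its head, T = 4L/sqrt(gh). This is a back-of-envelope stand-in for the paper's 1-D numerical model, and the Elbe-like numbers below (length and depth) are assumed for illustration, not taken from the paper.

```python
import numpy as np

def eigenperiod_hours(length_m, depth_m, g=9.81):
    """Quarter-wavelength eigenperiod of a frictionless channel closed
    at its head: T = 4L / sqrt(g h), returned in hours."""
    return 4.0 * length_m / np.sqrt(g * depth_m) / 3600.0

# Illustrative numbers (~140 km tidal reach, ~10 m depth, assumed):
# the eigenperiod sits above the 12.42 h M2 period, and deepening
# moves it toward M2 -- the direction of the "latent resonance"
# diagnosis in the abstract.
T_now = eigenperiod_hours(140e3, 10.0)
T_deepened = eigenperiod_hours(140e3, 11.0)
```

In this toy picture, deepening raises the wave speed sqrt(gh), shortens the eigenperiod toward the forcing period, and so amplifies the tidal range even before full resonance is reached.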
Improving the Performance of the Space Surveillance Telescope as a Function of Seeing Parameter
2015-03-26
Center, LAAFB, El Segundo, 2014. [27] G. S. F. S. M. B. a. J. S. H. Viggh, "Applying Electro-Optical Space Surveillance Technology to Asteroid ...
Out of the Blue and Into the Black: Creation of the United States Space Force.
1998-03-01
organizational diagnosis as a theory for strategic change. A review of related research and literature was conducted in order to establish justification for a separate service to advance space power for the nation. The first dimension examined is the medium of space. Defining the medium, along with such areas as airpower and space power, establishes a factual foundation from which to launch the idea of a separate service. Reasoning for and against a separate service is presented, including application of the Organizational Diagnosis to the Air
Space-weather Parameters for 1,000 Active Regions Observed by SDO/HMI
NASA Astrophysics Data System (ADS)
Bobra, M.; Liu, Y.; Hoeksema, J. T.; Sun, X.
2013-12-01
We present statistical studies of several space-weather parameters, derived from observations of the photospheric vector magnetic field by the Helioseismic and Magnetic Imager (HMI) aboard the Solar Dynamics Observatory, for a thousand active regions. Each active region has been observed every twelve minutes during the entirety of its disk passage. Some of these parameters, such as energy density and shear angle, indicate the deviation of the photospheric magnetic field from that of a potential field. Other parameters include flux, helicity, field gradients, polarity inversion line properties, and measures of complexity. We show that some of these parameters are useful for event prediction.
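Two of the simpler parameters mentioned above, total unsigned flux and the horizontal field gradient, can be sketched directly from a vector-magnetogram Bz map. The code below is a generic illustration on a toy bipolar region, not the HMI pipeline; the pixel scale and field strengths are assumed.

```python
import numpy as np

def unsigned_flux(bz_gauss, pixel_area_cm2):
    """Total unsigned flux, Phi = sum |Bz| dA (Maxwells)."""
    return float(np.sum(np.abs(bz_gauss)) * pixel_area_cm2)

def mean_gradient(bz_gauss, pixel_size_cm):
    """Mean magnitude of the horizontal gradient of Bz (G/cm)."""
    gy, gx = np.gradient(bz_gauss, pixel_size_cm)
    return float(np.mean(np.hypot(gx, gy)))

# Toy bipolar "active region": two opposite-polarity Gaussian patches.
y, x = np.mgrid[-50:50, -50:50].astype(float)
bz = 1000.0 * (np.exp(-((x + 15) ** 2 + y ** 2) / 200.0)
               - np.exp(-((x - 15) ** 2 + y ** 2) / 200.0))
pix = 3.6e7  # ~0.5 arcsec HMI-like pixel in cm (illustrative)
```

Parameters like shear angle or polarity-inversion-line measures require the full vector field (and a potential-field reference), but follow the same pattern: a pixelwise map reduced to a single scalar per active region per 12-minute frame.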
THEORETICAL RESEARCH OF THE OPTICAL SPECTRA AND EPR PARAMETERS FOR Cs2NaYCl6:Dy3+ CRYSTAL
NASA Astrophysics Data System (ADS)
Dong, Hui-Ning; Dong, Meng-Ran; Li, Jin-Jin; Li, Deng-Feng; Zhang, Yi
2013-09-01
The important material Cs2NaYCl6 doped with rare earth ions has received much attention because of its excellent optical and magnetic properties. Based on the superposition model, in this paper the crystal field energy levels, the electron paramagnetic resonance (EPR) g factors of Dy3+, and the hyperfine structure constants of the 161Dy3+ and 163Dy3+ isotopes in Cs2NaYCl6 crystal are studied by diagonalizing the 42 × 42 energy matrix. In the calculations, the contributions of various admixtures and interactions, such as the J-mixing, the mixtures among the states with the same J-value, and the covalence, are all considered. The calculated EPR parameters are in reasonable agreement with the observed values. The results are discussed.
Strategic, Organizational and Standardization Aspects of Integrated Information Systems. Volume 6.
1987-12-01
Strategic, Organizational, and Standardization Aspects of Information Technology ... Knowledge ... reasons (such as the desired level of processing power and the amount of storage space), organizational reasons (such as each department obtaining its ... of processing power falls, Abbott can afford to subordinate efficient processing for organizational effectiveness. 4. Steps in an Analytical Process
Mujeeb-ur-Rahman; Iqbal, Muhammad; Jilani, Muhammad Saleem; Waseem, Kashif
2007-12-15
A research project to evaluate the effect of different plant spacings on the production of cauliflower was conducted at the Horticulture Research Area, Faculty of Agriculture, Gomal University, Dera Ismail Khan, NWFP, Pakistan. Six different plant spacings, viz. 30, 35, 40, 45, 50 and 55 cm, were used. The results revealed significant variations in all the parameters, and among the various spacings, the 45 cm spacing showed the best response for all parameters. Maximum plant height (49.33 cm), curd diameter (19.13 cm), curd weight (1.23 kg plant(-1)) and yield (30.77 t ha(-1)) were recorded in the plots where the plants were spaced 45 cm apart.
A Summary of Meteorological Parameters During Space Shuttle Pad Exposure Periods
NASA Technical Reports Server (NTRS)
Overbey, Glenn; Roberts, Barry C.
2005-01-01
During the 113 missions of the Space Transportation System (STS), the Space Shuttle fleet has been exposed to the elements on the launch pad for a total of 4195 days. The Natural Environments Branch at Marshall Space Flight Center archives the atmospheric environments to which the Space Shuttle vehicles are exposed. This paper provides a summary of the historical record of the meteorological conditions encountered by the Space Shuttle fleet during the pad exposure period. Sources of the surface parameters, including temperature, dew point temperature, relative humidity, wind speed, wind direction, sea level pressure and precipitation, are presented. Data are provided from the first launch of the STS in 1981 through the launch of STS-107 in 2003.
Sartori, Andrea C; Wadley, Virginia G; Clay, Olivio J; Parisi, Jeanine M; Rebok, George W; Crowe, Michael
2012-06-01
We examined the relationship of cognitive and functional measures with life space (a measure of spatial mobility examining extent of movement within a person's environment) in older adults, and investigated the potential moderating role of personal control beliefs. Internal control beliefs reflect feelings of competence and personal agency, while attributions of external control imply a more dependent or passive point of view. Participants were 2,737 adults from the ACTIVE study, with a mean age of 74 years. Females comprised 76% of the sample, with good minority representation (27% African American). In multiple regression models controlling for demographic factors, cognitive domains of memory, reasoning, and processing speed were significantly associated with life space (p < .001 for each), and reasoning ability appeared most predictive (B = .117). Measures of everyday function also showed significant associations with life space, independent from the traditional cognitive measures. Interactions between cognitive function and control beliefs were tested, and external control beliefs moderated the relationship between memory and life space, with the combination of high objective memory and low external control beliefs yielding the highest life space (t = -2.07; p = .039). In conclusion, older adults with better cognitive function have a larger overall life space. Performance-based measures of everyday function may also be useful in assessing the functional outcome of life space. Additionally, subjective external control beliefs may moderate the relationship between objective cognitive function and life space. Future studies examining the relationships between these factors longitudinally appear worthwhile to further elucidate the interrelationships of cognitive function, control beliefs, and life space. PsycINFO Database Record (c) 2012 APA, all rights reserved
Crystal growth of device quality GaAs in space
NASA Technical Reports Server (NTRS)
Gatos, H. C.; Lagowski, J.
1979-01-01
The optimization of space processing of GaAs is described. The detailed compositional, structural, and electronic characterization of GaAs on a macro- and microscale and the relationships between growth parameters and the properties of GaAs are among the factors discussed. The key parameters limiting device performance are assessed.
Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan
2017-05-08
As one of the fracture-critical components of an aircraft engine, a turbine blade-to-disk attachment demands accurate life prediction to ensure engine structural integrity and reliability. Fatigue failure of a turbine blade is often caused by multiaxial cyclic loadings at high temperatures. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction; no extra fitted material constants are needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data for GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that model predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both the proposed damage parameters and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with its field number of flight cycles. However, life estimations of the turbine blade reveal that the definition of the maximum damage parameter is not reasonable for the WB model but is effective for both the FS and SWT models.
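The SWT model mentioned above equates the damage parameter σ_max·ε_a to a strain-life curve and solves for life. The sketch below uses generic Basquin-Coffin-Manson-style constants chosen for illustration, NOT the GH4169 constants from the paper, and a simple bisection on log10(2N).

```python
# SWT life solve:  sigma_max * eps_a =
#   (sigma_f'^2 / E) (2N)^(2b) + sigma_f' * eps_f' * (2N)^(b + c)
# Illustrative material constants (NOT GH4169 values from the paper):
E, SF, B, EF, C = 200e3, 1500.0, -0.1, 0.2, -0.6  # MPa, MPa, -, -, -

def swt_rhs(two_n):
    """Right-hand side of the SWT equation as a function of reversals 2N."""
    return (SF ** 2 / E) * two_n ** (2 * B) + SF * EF * two_n ** (B + C)

def swt_life(sigma_max, eps_a):
    """Cycles to failure N by bisection; RHS is monotone decreasing in 2N."""
    target = sigma_max * eps_a
    lo, hi = 0.0, 12.0                  # bisect on log10(2N)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if swt_rhs(10.0 ** mid) > target:
            lo = mid                    # life is longer than 10**mid / 2
        else:
            hi = mid
    return 10.0 ** (0.5 * (lo + hi)) / 2.0
```

With the paper's maximum-damage-parameter definition, one would evaluate σ_max·ε_a on the critical plane of the disk-blade attachment and feed it to the same solve.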
Theoretical Analysis of Spacing Parameters of Anisotropic 3D Surface Roughness
NASA Astrophysics Data System (ADS)
Rudzitis, J.; Bulaha, N.; Lungevics, J.; Linins, O.; Berzins, K.
2017-04-01
The authors of the research have analysed the spacing parameters of anisotropic 3D surface roughness crosswise to the machining (friction) traces, RSm1, and lengthwise to them, RSm2. The main issue is that RSm2 values are limited by the sampling length l of the measuring devices, yet on many occasions RSm2 can exceed l. The mean spacing of profile irregularities in the longitudinal direction is therefore often unreliable and should be determined by another method. It is proved theoretically that the surface roughness anisotropy coefficient c = RSm1/RSm2 equals the texture aspect ratio Str defined in the surface texture standard EN ISO 25178-2. This allows parameter Str to be used to determine the mean spacing of profile irregularities and to estimate roughness anisotropy.
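The abstract's relation c = RSm1/RSm2 = Str gives an immediate way to recover the lengthwise spacing that the instrument cannot measure directly. A minimal sketch (the numerical values below are illustrative, not from the paper):

```python
def estimate_rsm2(rsm1_um: float, str_ratio: float) -> float:
    """Estimate the lengthwise mean spacing RSm2 from the crosswise
    spacing RSm1 and the texture aspect ratio Str (EN ISO 25178-2),
    using the relation c = RSm1 / RSm2 = Str stated in the paper."""
    if not 0.0 < str_ratio <= 1.0:
        raise ValueError("Str must lie in (0, 1]")
    return rsm1_um / str_ratio

# Illustrative ground surface: RSm1 = 45 um crosswise, Str = 0.05.
# The inferred RSm2 of 900 um exceeds a typical sampling length,
# which is exactly the measurement problem the paper raises.
rsm2 = estimate_rsm2(45.0, 0.05)
```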
Outdoor ground impedance models.
Attenborough, Keith; Bashir, Imran; Taherzadeh, Shahram
2011-05-01
Many models for the acoustical properties of rigid-porous media require parameter values that are not available for outdoor ground surfaces. Adopting the tortuosity-porosity relationship for stacked spheres yields five characteristic impedance models that require no more than two adjustable parameters. These models and their hard-backed-layer versions are considered further through numerical fitting of 42 short-range level difference spectra measured over various ground surfaces. For all but eight sites, the slit-pore, phenomenological and variable porosity models yield lower fitting errors than the widely used one-parameter semi-empirical model. Data for 12 of 26 grassland sites and for three beech wood sites are fitted better by hard-backed-layer models. Parameter values obtained by fitting the slit-pore and phenomenological models to data for relatively low flow resistivity grounds, such as forest floors, porous asphalt, and gravel, are consistent with values obtained non-acoustically. Three impedance models yield reasonable fits to a narrow-band excess attenuation spectrum measured at short range over railway ballast but, if extended reaction is taken into account, the hard-backed-layer version of the slit-pore model gives the most reasonable parameter values.
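The "widely used one-parameter semi-empirical model" in outdoor-ground acoustics is commonly taken to be the Delany-Bazley formula; the sketch below assumes that identification (the abstract does not name the model) and uses the standard published coefficients. The sign of the imaginary part depends on the time convention; an exp(+iωt) convention is assumed here.

```python
def semi_empirical_impedance(f_hz: float, sigma: float) -> complex:
    """Normalized characteristic impedance Zc/(rho0*c0) of the
    one-parameter semi-empirical (Delany-Bazley) ground model.
    sigma is the effective flow resistivity in Pa.s/m^2; the single
    adjustable parameter. Air density rho0 = 1.21 kg/m^3 assumed."""
    X = 1.21 * f_hz / sigma  # dimensionless frequency/resistivity ratio
    return complex(1.0 + 0.0571 * X ** -0.754, -0.087 * X ** -0.732)

# Illustrative grassland-like ground, sigma = 2e5 Pa.s/m^2, at 500 Hz:
z = semi_empirical_impedance(500.0, 2e5)
```

Fitting such a model to measured level-difference spectra, as done in the paper, then reduces to a one-dimensional search over sigma.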
Convergence properties of η → 3π decays in chiral perturbation theory
NASA Astrophysics Data System (ADS)
Kolesár, Marián; Novotný, Jiří
2017-01-01
The convergence of the decay widths and some of the Dalitz plot parameters of the decay η → 3π seems problematic in low-energy QCD. In the framework of resummed chiral perturbation theory, we explore the compatibility of experimental data with a reasonable convergence of a carefully defined chiral series. By treating the uncertainties in the higher orders statistically, we numerically generate a large set of theoretical predictions, which are then confronted with experimental information. In the case of the decay widths, the experimental values can be reconstructed for a reasonable range of the free parameters and thus no tension is observed, in spite of what some of the traditional calculations suggest. The Dalitz plot parameters a and d can be described very well too. As far as the parameters b and α are concerned, we find a mild tension over the whole range of the free parameters, at less than 2σ C.L. This can be interpreted in two ways: either some of the higher-order corrections are indeed unexpectedly large, or there is a specific configuration of the remainders, which is, however, not completely improbable.
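For reference, the parameters a, b, d and α quoted above are those of the standard Dalitz-plot expansion of the squared amplitude (the conventional parametrization, which the abstract does not restate):

```latex
% charged channel \eta \to \pi^+\pi^-\pi^0:
|A(x,y)|^2 \;\propto\; 1 + a\,y + b\,y^2 + d\,x^2 + f\,y^3 + \cdots ,
\qquad
x = \sqrt{3}\,\frac{T_{+}-T_{-}}{Q}, \qquad y = \frac{3\,T_{0}}{Q} - 1,
% neutral channel \eta \to 3\pi^0:
|\bar{A}(z)|^2 \;\propto\; 1 + 2\alpha z , \qquad z = x^2 + y^2 ,
```

where T± and T0 are the pion kinetic energies in the η rest frame and Q is the total kinetic energy released in the decay.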
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have made time-series gene expression data widely available, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. Focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements increase substantially as the search space grows. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates, to a level on par with the best solutions obtained from the population-based methods, while maintaining high computational speed. This suggests that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the parameter search space vast.
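The hybrid scheme described above, a fast global stage followed by local refinement, can be illustrated in miniature. The toy gene circuit, the random-search global stage and the shrinking-perturbation local stage below are all illustrative stand-ins, not the methods benchmarked in the paper:

```python
import random

def simulate(k, d, m0=0.0, dt=0.1, steps=50):
    """Euler-integrate a toy one-gene circuit: dm/dt = k - d*m
    (constant transcription rate k, first-order decay rate d)."""
    m, traj = m0, []
    for _ in range(steps):
        m += dt * (k - d * m)
        traj.append(m)
    return traj

def sse(params, data):
    """Sum of squared errors between a simulated and observed trajectory."""
    return sum((a - b) ** 2 for a, b in zip(simulate(*params), data))

def hybrid_fit(data, pop=200, refine=200, seed=0):
    """Population-style random search over a broad box, then local
    refinement by shrinking Gaussian perturbations around the best hit."""
    rng = random.Random(seed)
    # global stage: score a random population over [0,5] x [0,2]
    best = min(((rng.uniform(0, 5), rng.uniform(0, 2)) for _ in range(pop)),
               key=lambda p: sse(p, data))
    # local stage: accept only improving perturbations, shrinking the step
    step = 0.5
    for _ in range(refine):
        cand = (best[0] + rng.gauss(0, step), best[1] + rng.gauss(0, step))
        if sse(cand, data) < sse(best, data):
            best = cand
        step *= 0.98
    return best

true_params = (2.0, 0.5)
data = simulate(*true_params)          # synthetic "measured" mRNA series
k_hat, d_hat = hybrid_fit(data)
```

The local stage can only improve on the global stage's best candidate, which is the essential property of the hybrid refinement discussed in the abstract.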
Federated Space-Time Query for Earth Science Data Using OpenSearch Conventions
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Beaumont, Bruce; Duerr, Ruth; Hua, Hook
2009-01-01
This slide presentation reviews a space-time query system developed to help users find Earth science data that fulfills a researcher's needs. It reviews the reasons why finding Earth science data can be so difficult, explains the workings of the space-time query with OpenSearch and how the system assists researchers in finding the required data, and reviews developments with client-server systems.
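In the OpenSearch conventions, a data provider publishes a URL template and the client fills in its placeholders; spatial and temporal constraints use the {geo:box} and {time:start}/{time:end} placeholders of the OpenSearch Geo and Time extensions. A minimal sketch, with a hypothetical endpoint URL (the real federated endpoints are not named in the slides):

```python
# Hypothetical endpoint; the placeholder names follow the OpenSearch
# Geo and Time extensions used for space-time queries of this kind.
TEMPLATE = ("https://example.gov/opensearch?q={searchTerms}"
            "&bbox={geo:box}&start={time:start}&end={time:end}")

def build_query(terms, west, south, east, north, start, end):
    """Fill an OpenSearch URL template for a space-time data search."""
    values = {
        "searchTerms": terms,
        "geo:box": f"{west},{south},{east},{north}",  # WGS84 lon/lat box
        "time:start": start,                          # ISO 8601 date
        "time:end": end,
    }
    url = TEMPLATE
    for key, val in values.items():
        url = url.replace("{" + key + "}", str(val).replace(" ", "+"))
    return url

url = build_query("sea surface temperature", -180, -90, 180, 90,
                  "2009-01-01", "2009-01-31")
```

Because every participating server advertises the same placeholder vocabulary in its description document, one client can fan the same space-time query out across federated providers.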