Natural Environmental Service Support to NASA Vehicle, Technology, and Sensor Development Programs
NASA Technical Reports Server (NTRS)
1993-01-01
The research performed under this contract involved definition of the natural environmental parameters affecting the design, development, and operation of space and launch vehicles. The Universities Space Research Association (USRA) provided the manpower and resources to accomplish the following tasks: defining environmental parameters critical for design, development, and operation of launch vehicles; defining environmental forecasts required to assure optimal utilization of launch vehicles; and defining orbital environments of operation and developing models on environmental parameters affecting launch vehicle operations.
2008-03-01
multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the...correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is...required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space
14 CFR 1215.108 - Defining user service requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 5 2011-01-01 Defining user service requirements. 1215.108 Section 1215.108 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION TRACKING AND DATA..., spacecraft design, operations planning, and other significant mission parameters. When these user evaluations...
14 CFR 1215.108 - Defining user service requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 5 2012-01-01 Defining user service requirements. 1215.108 Section 1215.108 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION TRACKING AND... services, spacecraft design, operations planning, and other significant mission parameters. When these user...
Locating and defining underground goaf caused by coal mining from space-borne SAR interferometry
NASA Astrophysics Data System (ADS)
Yang, Zefa; Li, Zhiwei; Zhu, Jianjun; Yi, Huiwei; Feng, Guangcai; Hu, Jun; Wu, Lixin; Preusse, Alex; Wang, Yunjia; Papst, Markus
2018-01-01
It is crucial to locate underground goafs (i.e., mined-out areas) resulting from coal mining and to define their spatial dimensions in order to effectively control the induced damage and geohazards. Traditional geophysical techniques for locating and defining underground goafs, however, are ground-based, labour-intensive, and costly. This paper presents a novel space-based method for locating and defining underground goafs caused by coal extraction using Interferometric Synthetic Aperture Radar (InSAR) techniques. Since a mining-induced goaf is often a cuboid-shaped void whose location and extent are fully specified by eight geometric parameters (length, width, height, inclined angle, azimuth angle, mining depth, and two central geodetic coordinates), the proposed method reduces to determining these eight parameters from InSAR observations. The method first applies the Probability Integral Method (PIM), a widely used model for predicting mining-induced deformation, to construct a functional relationship between the eight geometric parameters and the InSAR-derived surface deformation. It then estimates these parameters from the InSAR-derived deformation observations using a hybrid simulated annealing and genetic algorithm. Finally, the proposed method was tested with both simulated and two real data sets. The results demonstrate that the estimated geometric parameters of the goafs are accurate overall, with average relative errors of approximately 2.1% and 8.1% for the simulated and real data experiments, respectively. Owing to the advantages of InSAR observations, the proposed approach offers a non-contact, convenient, and practical way to economically locate and define underground goafs over large areas from space.
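The inversion step described above lends itself to a compact sketch. The following Python fragment is illustrative only: `pim_forward` is a hypothetical stand-in for the PIM forward model (the real kernel is far more detailed), and plain simulated annealing replaces the paper's hybrid simulated annealing/genetic algorithm.

```python
# Hedged sketch: estimate the eight goaf parameters by minimizing the misfit
# between observed and modeled surface deformation. pim_forward() is a toy
# placeholder, NOT the actual Probability Integral Method.
import numpy as np

rng = np.random.default_rng(0)

def pim_forward(params, xy):
    """Placeholder forward model: goaf parameters -> surface subsidence."""
    length, width, height, dip, azimuth, depth, x0, y0 = params
    r2 = (xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2
    # Toy bowl-shaped subsidence; the real PIM kernel is far more detailed.
    return -height * np.exp(-r2 / (depth * max(length, width)))

def misfit(params, xy, d_obs):
    return np.sum((pim_forward(params, xy) - d_obs) ** 2)

def anneal(xy, d_obs, lo, hi, n_iter=20000, t0=1.0):
    x = rng.uniform(lo, hi)
    f_x = misfit(x, xy, d_obs)
    best, f_best = x.copy(), f_x
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter) + 1e-6              # linear cooling schedule
        cand = np.clip(x + rng.normal(0, 0.02 * (hi - lo)), lo, hi)
        f_c = misfit(cand, xy, d_obs)
        if f_c < f_x or rng.random() < np.exp((f_x - f_c) / t):
            x, f_x = cand, f_c                          # Metropolis-style accept
            if f_x < f_best:
                best, f_best = x.copy(), f_x
    return best, f_best

# Tiny synthetic test with made-up parameter values and bounds:
xy = rng.uniform(-500, 500, size=(200, 2))
true = np.array([120, 80, 3, 10, 45, 200, 0, 0], dtype=float)
d_obs = pim_forward(true, xy) + rng.normal(0, 0.01, 200)
lo = np.array([50, 30, 1, 0, 0, 100, -300, -300], dtype=float)
hi = np.array([300, 200, 6, 45, 180, 400, 300, 300], dtype=float)
est, err = anneal(xy, d_obs, lo, hi)
```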
Reducing the Knowledge Tracing Space
ERIC Educational Resources Information Center
Ritter, Steven; Harris, Thomas K.; Nixon, Tristan; Dickison, Daniel; Murray, R. Charles; Towle, Brendon
2009-01-01
In Cognitive Tutors, student skill is represented by estimates of student knowledge on various knowledge components. The estimate for each knowledge component is based on a four-parameter model developed by Corbett and Anderson. In this paper, we investigate the nature of the parameter space defined by these four parameters by modeling data…
Oliveira, G M; de Oliveira, P P; Omar, N
2001-01-01
Cellular automata (CA) are important as prototypical, spatially extended, discrete dynamical systems. Because the problem of forecasting the dynamic behavior of CA is undecidable, various parameter-based approximations have been developed to address it. From an analysis of the most important parameters available to this end, we propose guidelines that should be followed when defining a parameter of this kind. Based on these guidelines, new parameters were proposed and a set of five parameters was selected; two of them were drawn from the literature and three are new ones, defined here. This article presents all of them and makes their qualities evident. Two results are then described, related to the use of the parameter set in the Elementary Rule Space: a phase transition diagram, and some general heuristics for forecasting the dynamics of one-dimensional CA. Finally, as an example of the application of the selected parameters in high-cardinality spaces, results are presented from experiments involving the evolution of radius-3 CA in the Density Classification Task, and radius-2 CA in the Synchronization Task.
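The abstract does not list the five selected parameters, but Langton's lambda, the best-known forecast parameter of this kind for CA, illustrates what such a parameter looks like for elementary (binary, radius-1) rules; a minimal sketch:

```python
# Illustrative only: Langton's lambda for a binary CA is the fraction of
# rule-table entries mapping to the non-quiescent state (quiescent state = 0).
def langton_lambda(rule: int, k: int = 2, r: int = 1) -> float:
    n = k ** (2 * r + 1)                 # rule-table size (8 for elementary CA)
    bits = [(rule >> i) & 1 for i in range(n)]
    return sum(bits) / n

# Rule 110 (complex behavior) vs. rule 0 (trivially ordered):
print(langton_lambda(110), langton_lambda(0))   # 0.625 0.0
```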
Physics issues of gamma ray burst emissions
NASA Technical Reports Server (NTRS)
Liang, Edison
1987-01-01
The critical physics issues in the interpretation of gamma-ray-burst spectra are reviewed. An attempt is made to define the emission-region parameter space satisfying the maximum number of observational and theoretical constraints. Also discussed are the physical mechanisms responsible for the bursts that are most consistent with the above parameter space.
Space station needs, attributes and architectural options. Volume 3, task 1: Mission requirements
NASA Technical Reports Server (NTRS)
1983-01-01
The mission requirements of the space station program are investigated. Mission parameters are divided into user support from private industry, scientific experimentation, U.S. national security, and space operations away from the space station. These categories define the design and use of the space station. An analysis of cost estimates is included.
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the literature, often from a single published value, and are then "tuned" using somewhat arbitrary, trial-and-error methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFT and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, ET, and runoff, and thereby to identify a highly important source of DGVM uncertainty.
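A minimal sketch of the bounded global search described above, assuming SciPy's `dual_annealing`; `vegetation_map_error` and the parameter values are hypothetical stand-ins for a full BIOMAP run scored against a reference vegetation map:

```python
# Hedged sketch of the calibration loop: bound each parameter, then run a
# global annealing search. The objective below is a synthetic placeholder.
import numpy as np
from scipy.optimize import dual_annealing

current = np.array([0.5, 1.2, 3.0e-3])        # hypothetical PFT parameter values
# Bounds from literature where available; otherwise +/-20% of current values.
bounds = [(0.8 * v, 1.2 * v) for v in current]

def vegetation_map_error(params):
    """Stand-in objective: error of the simulated vegetation map after spin-up
    and a 1961-1990 transient run (the real run is a full DGVM simulation)."""
    target = np.array([0.55, 1.0, 2.5e-3])
    return float(np.sum(((params - target) / target) ** 2))

result = dual_annealing(vegetation_map_error, bounds, maxiter=200, seed=1)
print(result.x, result.fun)
```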
Averaging of random walks and shift-invariant measures on a Hilbert space
NASA Astrophysics Data System (ADS)
Sakbaev, V. Zh.
2017-06-01
We study random walks in a Hilbert space H and representations using them of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space ℋ of equivalence classes of complex-valued functions on H that are square integrable with respect to the shift-invariant measure λ. Using averaging of the shift operator in ℋ over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on ℋ, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.
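In the notation of the abstract, the defining property of λ on rectangles can be written as follows (a sketch of the intended definition; {e_k} an orthonormal basis of H is an assumption of this rendering):

```latex
\lambda\Bigl(\bigl\{\, x \in H : \langle x, e_k \rangle \in [a_k, b_k] \ \text{for all } k \,\bigr\}\Bigr)
  \;=\; \prod_{k=1}^{\infty} \, (b_k - a_k),
```

whenever the infinite product converges absolutely; λ is then extended finitely additively to the minimal ring generated by such rectangles.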
Sensitivity analysis of the add-on price estimate for the edge-defined film-fed growth process
NASA Technical Reports Server (NTRS)
Mokashi, A. R.; Kachare, A. H.
1981-01-01
The analysis is in terms of cost parameters and production parameters. The cost parameters include equipment, space, direct labor, materials, and utilities. The production parameters include growth rate, process yield, and duty cycle. A computer program was developed specifically to do the sensitivity analysis.
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept, the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in the space of equivalent dimensions rather than the original parameter space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in the interval [0, 1], and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
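The ED transform itself is simple to sketch; here an empirical CDF stands in for the paper's kernel-based CDF estimate, and the event parameters are synthetic examples:

```python
# Sketch of the equivalent-dimension (ED) transform: map each parameter
# through its (here empirical) cumulative distribution, after which distances
# between events are ordinary Euclidean distances in [0, 1]^n.
import numpy as np

def to_equivalent_dimensions(catalog):
    """catalog: (n_events, n_params) array -> ED coordinates in [0, 1]."""
    n = catalog.shape[0]
    ranks = catalog.argsort(axis=0).argsort(axis=0)   # rank of each value
    return (ranks + 1) / (n + 1)                      # ECDF value per parameter

rng = np.random.default_rng(1)
events = np.column_stack([
    rng.lognormal(0, 1, 500),                 # e.g. seismic moment (heavy-tailed)
    rng.uniform(0, 100, 500),                 # e.g. depth in km
    np.cumsum(rng.exponential(1, 500)),       # e.g. occurrence time
])
ed = to_equivalent_dimensions(events)
d01 = np.linalg.norm(ed[0] - ed[1])           # distance between events 0 and 1
```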
NASA Astrophysics Data System (ADS)
Susyanto, Nanang
2017-12-01
We propose a simple derivation of the Cramér-Rao lower bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint on the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter onto the appropriate inner product spaces.
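For reference, the construction above is consistent with the standard closed form of the constrained bound (Stoica & Ng, 1998): with Fisher information I(θ), constraint g(θ) = 0 with Jacobian G, and U an orthonormal basis of the null space of G (the tangent space of the restricted submodel),

```latex
\mathrm{CRLB}_{\mathrm{c}}(\theta)
  \;=\; U \bigl( U^{T} I(\theta)\, U \bigr)^{-1} U^{T}
  \;\preceq\; I(\theta)^{-1}.
```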
NASA Astrophysics Data System (ADS)
Xu, Wenfu; Hu, Zhonghua; Zhang, Yu; Liang, Bin
2017-03-01
After being launched into space to perform tasks, the inertia parameters of a space robotic system may change due to fuel consumption, hardware reconfiguration, target capturing, and so on. For precision control and simulation, these parameters must be identified on orbit. This paper proposes an effective method for identifying the complete inertia parameters (mass, inertia tensor, and center-of-mass position) of a space robotic system. The key to the method is to identify two types of simple dynamics systems: an equivalent single-body and an equivalent two-body system. For the former, all of the joints are locked into a designed configuration and the thrusters are used for orbital maneuvering; the objective function for optimization is defined in terms of the acceleration and velocity of the equivalent single body. For the latter, only one joint is unlocked and driven along a planned (exciting) trajectory in free-floating mode; the objective function is defined using the linear and angular momentum equations. The parameter identification problems are thus transformed into non-linear optimization problems. The Particle Swarm Optimization (PSO) algorithm is applied to determine the optimal parameters, i.e. the complete dynamic parameters of the two equivalent systems. By sequentially unlocking the 1st to nth joints (or the nth to 1st joints), the mass properties of bodies 0 to n (or n to 0) are completely identified. The proposed method needs only simple dynamics equations for identification, and the excitation motion (orbit maneuvering and joint motion) is easily realized. Moreover, the method does not require prior knowledge of the mass properties of any body. It is general and practical for identifying a space robotic system on orbit.
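The PSO step is easy to sketch. The code below is a generic particle swarm minimizer; `momentum_residual` and its target values are hypothetical placeholders for the momentum-equation objective of the unlocked-joint (equivalent two-body) case:

```python
# Minimal particle swarm optimizer of the kind applied above.
import numpy as np

rng = np.random.default_rng(2)

def momentum_residual(p):
    """Placeholder: squared violation of linear/angular momentum conservation
    predicted with candidate inertia parameters p against measured motion."""
    target = np.array([850.0, 120.0, 0.35])    # hypothetical mass, inertia, CoM
    return float(np.sum(((p - target) / target) ** 2))

def pso(f, lo, hi, n_particles=40, n_iter=300, w=0.7, c1=1.5, c2=1.5):
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    p_best = x.copy()
    p_val = np.array([f(p) for p in x])
    g_best = p_best[p_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best

est = pso(momentum_residual,
          np.array([100.0, 10.0, 0.0]), np.array([2000.0, 500.0, 1.0]))
```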
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
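The decomposition underlying NSMC can be sketched in a few lines of linear algebra; here a synthetic rank-deficient Jacobian and calibrated parameter set stand in for the real model sensitivities:

```python
# Sketch of the null-space Monte Carlo idea: split parameter space with an SVD
# of the Jacobian, keep the calibrated solution-space component fixed, and
# perturb only null-space directions.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_par, rank = 30, 100, 25            # highly parameterized: n_par >> rank

# Synthetic rank-deficient sensitivity matrix and calibrated parameter set:
J = rng.normal(size=(n_obs, rank)) @ rng.normal(size=(rank, n_par))
p_cal = rng.normal(size=n_par)

U, s, Vt = np.linalg.svd(J, full_matrices=True)
V_sol = Vt[:rank].T                         # solution-space basis (data-informed)
V_null = Vt[rank:].T                        # null-space basis (data-uninformed)

def nsmc_realization(scale=1.0):
    """Calibration-constrained field: calibrated solution-space part plus a
    random null-space perturbation (in practice re-checked against the data)."""
    xi = rng.normal(scale=scale, size=n_par - rank)
    return V_sol @ (V_sol.T @ p_cal) + V_null @ (V_null.T @ p_cal + xi)

ensemble = np.array([nsmc_realization() for _ in range(100)])
# All realizations fit the linearized data equally well:
print(np.allclose(J @ ensemble.T, (J @ p_cal)[:, None], atol=1e-8))
```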
Parameter Validation for Evaluation of Spaceflight Hardware Reusability
NASA Technical Reports Server (NTRS)
Childress-Thompson, Rhonda; Dale, Thomas L.; Farrington, Phillip
2017-01-01
Within recent years, there has been an influx of companies around the world pursuing reusable systems for space flight. Much like NASA, many of these new entrants are learning that reusable systems are complex and difficult to achieve. For instance, in its first attempts to retrieve spaceflight hardware for future reuse, SpaceX unsuccessfully tried to land on a barge at sea, resulting in a crash landing. As this new generation of launch developers continues to develop concepts for reusable systems, having a systematic approach for determining the most effective systems for reuse is paramount. Three factors that influence the effective implementation of reusability are cost, operability, and reliability; a method that integrates these factors into the decision-making process must therefore be used to determine whether hardware used in space flight should be reused or discarded. Previous research identified seven features that contribute to the successful implementation of reusability for space flight applications, defined reusability for space flight applications, highlighted the importance of reusability, and presented areas that hinder its successful implementation. The next step is to ensure that the list of reusability parameters previously identified is comprehensive and that any duplication is removed or consolidated. The characteristics for judging the seven features as good indicators of successful reuse are identified and then assessed using multi-attribute decision making. Next, discriminators in the form of metrics or descriptors are assigned to each parameter. This paper explains the approach used to evaluate these parameters, define Measures of Effectiveness (MOEs) for reusability, and quantify these parameters. Using the MOEs, each parameter is assessed for its contribution to the reusability of the hardware. Potential data sources needed to validate the approach are identified.
A review of pharmaceutical extrusion: critical process parameters and scaling-up.
Thiry, J; Krier, F; Evrard, B
2015-02-01
Hot melt extrusion has been a widely used process in the pharmaceutical area for three decades. In this field, it is important to optimize the formulation in order to meet specific requirements. However, the process parameters of the extruder should be investigated as thoroughly as the formulation, since they have a major impact on the final product characteristics. Moreover, a design space should be defined in order to obtain the expected product within defined limits; this gives some freedom to operate as long as the processing parameters stay within the limits of the design space. Those limits can be investigated by randomly varying the process parameters, but it is recommended to use design of experiments. An examination of the literature is reported in this review to summarize the impact of variations in the process parameters on the final product properties. Indeed, the homogeneity of the mixing, the state of the drug (crystalline or amorphous), the dissolution rate, and the residence time can all be influenced by variations in the process parameters. In particular, the impact of the following process parameters on the final product has been reviewed: temperature, screw design, screw speed, and feeding.
14 CFR 415.59 - Information requirements for payload review.
Code of Federal Regulations, 2013 CFR
2013-01-01
.... 415.59 Section 415.59 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... review; (5) Orbital parameters for parking, transfer and final orbits; (6) Hazardous materials, as defined in § 401.5 of this chapter, and radioactive materials, and the amounts of each; (7) Intended...
NASA Astrophysics Data System (ADS)
Cecchini, Micael A.; Machado, Luiz A. T.; Wendisch, Manfred; Costa, Anja; Krämer, Martina; Andreae, Meinrat O.; Afchine, Armin; Albrecht, Rachel I.; Artaxo, Paulo; Borrmann, Stephan; Fütterer, Daniel; Klimach, Thomas; Mahnke, Christoph; Martin, Scot T.; Minikin, Andreas; Molleker, Sergej; Pardo, Lianet H.; Pöhlker, Christopher; Pöhlker, Mira L.; Pöschl, Ulrich; Rosenfeld, Daniel; Weinzierl, Bernadett
2017-12-01
The behavior of tropical clouds remains a major open scientific question, resulting in poor representation by models. One challenge is to realistically reproduce cloud droplet size distributions (DSDs) and their evolution over time and space. Many applications, not limited to models, use the gamma function to represent DSDs. However, even though the statistical characteristics of the gamma parameters have been widely studied, there is almost no study dedicated to understanding the phase space of this function and the associated physics. This phase space can be defined by the three parameters that define the DSD intercept, shape, and curvature. Gamma phase space may provide a common framework for parameterizations and intercomparisons. Here, we introduce the phase space approach and its characteristics, focusing on warm-phase microphysical cloud properties and the transition to the mixed-phase layer. We show that trajectories in this phase space can represent DSD evolution and can be related to growth processes. Condensational and collisional growth may be interpreted as pseudo-forces that induce displacements in opposite directions within the phase space. The actually observed movements in the phase space are a result of the combination of such pseudo-forces. Additionally, aerosol effects can be evaluated given their significant impact on DSDs. The DSDs associated with liquid droplets that favor cloud glaciation can be delimited in the phase space, which can help models to adequately predict the transition to the mixed phase. We also consider possible ways to constrain the DSD in two-moment bulk microphysics schemes, in which the relative dispersion parameter of the DSD can play a significant role. Overall, the gamma phase space approach can be an invaluable tool for studying cloud microphysical evolution and can be readily applied in many scenarios that rely on gamma DSDs.
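For reference, a standard three-parameter gamma form spanning the phase space described above (N_0: intercept, μ: shape, Λ: slope/curvature) is:

```latex
N(D) \;=\; N_0 \, D^{\mu} \, e^{-\Lambda D},
```

so a cloud's microphysical evolution traces a trajectory in (N_0, μ, Λ) space, and the condensational and collisional pseudo-forces discussed above displace that trajectory in opposite directions.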
In-Space Radiator Shape Optimization using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Kittredge, Ken; Tinker, Michael; SanSoucie, Michael
2006-01-01
Future space exploration missions will require the development of more advanced in-space radiators. These radiators should be highly efficient, lightweight, deployable heat rejection systems. Typical radiators for in-space heat mitigation commonly comprise a substantial portion of the total vehicle mass, so a mass savings of even 5-10% can greatly improve vehicle performance. The objective of this paper is to present the development of detailed tools for the analysis and design of in-space radiators using evolutionary computation techniques. The optimality criterion is defined as a two-dimensional radiator with a shape demonstrating the smallest mass for the greatest overall heat transfer, so the end result is a set of highly functional radiator designs. This cross-disciplinary work combines topology optimization and thermal analysis design by means of a genetic algorithm. The proposed design tool consists of the following steps: design parameterization based on the exterior boundary of the radiator, objective function definition (mass minimization and heat loss maximization), objective function evaluation via finite element analysis (thermal radiation analysis), and optimization based on evolutionary algorithms. The radiator design problem is defined as follows: the input force is a driving temperature and the output reaction is heat loss. Appropriate modeling of the space environment is added to capture its effect on the radiator. The design parameters chosen for this radiator shape optimization problem fall into two classes: variable height along the width of the radiator, and a spline curve defining the material boundary of the radiator. The implementation of multiple design parameter schemes gives the user more confidence in the radiator optimization tool upon demonstration of convergence between the two schemes. The tool also allows the user to manipulate the driving temperature regions, permitting detailed design of in-space radiators for unique situations. Preliminary results indicate an optimized shape following that of the temperature distribution regions in the "cooler" portions of the radiator. The results closely follow the expected radiator shape.
Cope, Davis; Blakeslee, Barbara; McCourt, Mark E
2013-05-01
The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given as an appendix.
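A minimal numerical sketch of these quantities, assuming the common unit-DC-gain parameterization of the radial DOG frequency response (the paper's exact normalization may differ):

```python
# Compute the DOG frequency response and the two directly observable values
# discussed above: optimal (peak) frequency and optimal gain relative to DC.
import numpy as np

def dog_response(f, sigma_c, sigma_s, b):
    """H(f) = exp(-2 pi^2 sc^2 f^2) - b exp(-2 pi^2 ss^2 f^2) for 2-D
    unit-integral Gaussians; sigma_c < sigma_s, b is the balance."""
    return (np.exp(-2 * np.pi**2 * sigma_c**2 * f**2)
            - b * np.exp(-2 * np.pi**2 * sigma_s**2 * f**2))

sigma_c, sigma_s, b = 0.1, 0.3, 0.8          # surround broader than center
f = np.linspace(0, 10, 2001)
H = dog_response(f, sigma_c, sigma_s, b)

f_opt = f[H.argmax()]                         # optimal (peak-response) frequency
gain_opt = H.max() / H[0]                     # optimal gain relative to DC
bandpass = f_opt > 0 and gain_opt > 1         # bandpass vs. low-pass behavior
print(f_opt, gain_opt, bandpass)
```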
Reference equations of motion for automatic rendezvous and capture
NASA Technical Reports Server (NTRS)
Henderson, David M.
1992-01-01
The analysis presented in this paper defines the reference coordinate frames, equations of motion, and control parameters necessary to model the relative motion and attitude of spacecraft in close proximity with another space system during the Automatic Rendezvous and Capture phase of an on-orbit operation. The relative docking port target position vector and the attitude control matrix are defined based upon an arbitrary spacecraft design. These translation and rotation control parameters could be used to drive the error signal input to the vehicle flight control system. Measurements for these control parameters would become the bases for an autopilot or feedback control system (FCS) design for a specific spacecraft.
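For context, the Clohessy-Wiltshire (Hill) equations are the usual linearized starting point for such relative-motion models (the paper's exact formulation may differ); with n the target's orbital mean motion, x radial, y along-track, z cross-track, and (a_x, a_y, a_z) control accelerations:

```latex
\ddot{x} - 2 n \dot{y} - 3 n^{2} x = a_{x}, \qquad
\ddot{y} + 2 n \dot{x} = a_{y}, \qquad
\ddot{z} + n^{2} z = a_{z}.
```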
Critical space-time networks and geometric phase transitions from frustrated edge antiferromagnetism
NASA Astrophysics Data System (ADS)
Trugenberger, Carlo A.
2015-12-01
Recently I proposed a simple dynamical network model for discrete space-time that self-organizes as a graph with Hausdorff dimension dH=4 . The model has a geometric quantum phase transition with disorder parameter (dH-ds) , where ds is the spectral dimension of the dynamical graph. Self-organization in this network model is based on a competition between a ferromagnetic Ising model for vertices and an antiferromagnetic Ising model for edges. In this paper I solve a toy version of this model defined on a bipartite graph in the mean-field approximation. I show that the geometric phase transition corresponds exactly to the antiferromagnetic transition for edges, the dimensional disorder parameter of the former being mapped to the staggered magnetization order parameter of the latter. The model has a critical point with long-range correlations between edges, where a continuum random geometry can be defined, exactly as in Kazakov's famed 2D random lattice Ising model but now in any number of dimensions.
A Field Programmable Gate Array Based Software Defined Radio Design for the Space Environment
2009-12-01
... and risk involved with launching a new satellite. [2] An FPGA design with potential for space applications was presented in [3]. This initial SDR
Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set
NASA Astrophysics Data System (ADS)
Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.
2000-06-01
Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust, and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter (FDP) set. We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to classical psychological studies. In particular, Whissell suggested that emotions are points in a space that seems to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter; in this way, variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissell's; instead, they tend to form an approximately circular pattern, called the 'emotion wheel,' modeled using an angular measure. The emotion wheel can be used as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP points and the activation and angular parameters, we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
Static shape control for flexible structures
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
An integrated methodology is described for defining static shape control laws for large flexible structures. The techniques include modeling, identification, and estimation of the control laws of distributed systems characterized in terms of infinite-dimensional state and parameter spaces. The models are expressed as interconnected elliptic partial differential equations governing a range of static loads, with the capability of analyzing electromagnetic fields around antenna systems. A second-order analysis is carried out for statistical errors, and model parameters are determined by maximizing an appropriately defined likelihood functional which adjusts the model to observational data. The parameter estimates are derived from the conditional mean of the observational data, resulting in a least-squares superposition of shape functions obtained from the structural model.
1992-04-01
the voltage applied to the i-th patch; K′ is a parameter which depends on the geometry and piezoceramic...in the state space H = L̂²(Ω) × L²(Γ₀). Here L̂²(Ω) is the quotient space of L²(Ω) over the constant functions. The use of the quotient space results...form of the problem, we also define the Hilbert space V = Ĥ¹(Ω) × H¹(Γ₀), where Ĥ¹(Ω) is the quotient space of H¹ over the constant functions
NASA Technical Reports Server (NTRS)
Liu, F. C.
1986-01-01
The objective of this investigation is to determine analytically the acceleration produced by crew motion in an orbiting space station and to define design parameters for the suspension system of microgravity experiments. A simple structural model for simulating the IOC space station is proposed. The mathematical formulation of this model provides engineers with a simple and direct tool for designing an effective suspension system.
1984-02-01
... An Introduction to Geometric Programming, Patrick D. Allen and David W. Baker ... Space and Time ... Zarwyn, US Army Electronics R&D Command ... GEOMETRIC PROGRAMMING ... SPACE AND TIME ANALYSIS IN DYNAMIC PROGRAMMING ALGORITHMS ... physical and parameter space can be connected by asymptotic matching. The purpose of the asymptotic analysis is to define the simplest problems
On the accuracy of the Padé-resummed master equation approach to dissipative quantum dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-Ta; Reichman, David R.; Berkelbach, Timothy C.
2016-04-21
Well-defined criteria are proposed for assessing the accuracy of quantum master equations whose memory functions are approximated by Padé resummation of the first two moments in the electronic coupling. These criteria partition the parameter space into distinct levels of expected accuracy, ranging from quantitatively accurate regimes to regions of parameter space where the approach is not expected to be applicable. Extensive comparison of Padé-resummed master equations with numerically exact results in the context of the spin-boson model demonstrates that the proposed criteria correctly demarcate the regions of parameter space where the Padé approximation is reliable. The applicability analysis we present is not confined to the specifics of the Hamiltonian under consideration and should provide guidelines for other classes of resummation techniques.
Aggarwal, Ankush
2017-08-01
Motivated by the well-known result that the stiffness of soft tissue is proportional to the stress, many constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that, as a consequence of the exponential function, there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space which significantly improve the convergence of parameter estimation and its robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon the new insight, we also propose a transformed parameter space that allows rational parameter comparison and avoids misleading conclusions regarding soft tissue mechanics.
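The covariance ridge is easy to reproduce numerically. The sketch below assumes the common one-dimensional form σ(ε) = A(e^{Bε} − 1), which is an illustrative choice rather than the paper's specific constitutive law, and shows that widely different (A, B) pairs fit the same synthetic response almost equally well:

```python
# Numerical illustration of the high-covariance valley in (A, B) space.
import numpy as np

eps = np.linspace(0, 0.1, 50)
ref = 1.0 * (np.exp(10.0 * eps) - 1.0)        # "true" response: A=1, B=10

def sse(A, B):
    return np.sum((A * (np.exp(B * eps) - 1.0) - ref) ** 2)

A_grid = np.linspace(0.2, 5.0, 200)
# For each A, find the B that best compensates: a long, flat valley emerges,
# i.e. very different parameter pairs give similar stress-strain curves.
for A in A_grid[::40]:
    B = min(np.linspace(1, 30, 600), key=lambda b: sse(A, b))
    print(f"A={A:.2f}  best B={B:.2f}  SSE={sse(A, B):.4f}")
```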
Parameter estimation in nonlinear distributed systems - Approximation theory and convergence results
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract approximation framework and convergence theory is described for Galerkin approximations applied to inverse problems involving nonlinear distributed parameter systems. Parameter estimation problems are considered and formulated as the minimization of a least-squares-like performance index over a compact admissible parameter set, subject to state constraints given by an inhomogeneous nonlinear distributed system. The theory applies to systems whose dynamics can be described by either time-independent or nonstationary strongly maximal monotone operators defined on a reflexive Banach space which is densely and continuously embedded in a Hilbert space. It is demonstrated that if readily verifiable conditions on the system's dependence on the unknown parameters are satisfied, and the usual Galerkin approximation assumption holds, then solutions to the approximating problems exist and approximate a solution to the original infinite-dimensional identification problem.
Approximate solution of space and time fractional higher order phase field equation
NASA Astrophysics Data System (ADS)
Shamseldeen, S.
2018-03-01
This paper is concerned with a class of space and time fractional partial differential equation (STFDE) with Riesz derivative in space and Caputo in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution for the suggested fractional initial value problem. An averaged-squared residual error function is defined and used to determine the optimal convergence control parameter. Two numerical examples are studied, considering periodic and non-periodic initial conditions, to justify the efficiency and the accuracy of the adopted iterative approach. The dependence of the solution on the order of the fractional derivative in space and time and model parameters is investigated.
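The averaged-squared residual criterion takes the standard OHAM form (a sketch; R is the residual of the m-th order approximate solution at grid points x_j, and h is the convergence-control parameter):

```latex
E_{m}(h) \;=\; \frac{1}{N+1} \sum_{j=0}^{N} \bigl[ R(x_{j};\, h) \bigr]^{2},
\qquad
h^{\ast} \;=\; \arg\min_{h}\, E_{m}(h).
```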
NASA Astrophysics Data System (ADS)
Goto, Shin-itiro; Umeno, Ken
2018-03-01
Maps on a parameter space for expressing distribution functions are exactly derived from the Perron-Frobenius equations for a generalized Boole transform family. Here the generalized Boole transform family is a one-parameter family of maps, each defined on a subset of the real line, whose invariant probability distribution function is a Cauchy distribution with appropriate parameters. With this reduction, some relations between the statistical picture and the orbital one are shown. From the viewpoint of information geometry, the parameter space can be identified with a statistical manifold, and it is shown that the derived maps can be characterized accordingly. Also, with a symplectic structure induced from the statistical structure, symplectic and information-geometric aspects of the derived maps are discussed.
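One member of the Boole family is exactly solvable and makes the Cauchy invariance concrete: via x = cot(πθ), the map T(x) = (x − 1/x)/2 is conjugate to angle doubling, so the standard Cauchy distribution is invariant under it. A quick numerical check (the paper's one-parameter family generalizes this case):

```python
# Verify numerically that the standard Cauchy law is invariant under
# T(x) = (x - 1/x)/2, an exactly solvable generalized Boole transform.
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_cauchy(100_000)             # start from the invariant law
for _ in range(20):
    x = 0.5 * (x - 1.0 / x)                  # generalized Boole step

# Compare the empirical CDF with the exact Cauchy CDF at a few points:
for q in (-2.0, -0.5, 0.0, 0.5, 2.0):
    emp = np.mean(x <= q)
    exact = 0.5 + np.arctan(q) / np.pi
    print(f"x <= {q:+.1f}:  empirical {emp:.4f}   Cauchy {exact:.4f}")
```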
Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hock, Kiel; Earle, Keith
2016-02-06
In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes' theorem to infer parameter values for a Pake doublet spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis-Hastings Markov chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. Finally, the posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature, which affects the parameter uncertainty estimates, particularly for spectra with low signal to noise.
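A generic Metropolis-Hastings loop of the kind described is sketched below; a simple two-parameter placeholder lineshape replaces the Pake doublet forward model so the example stays self-contained:

```python
# Metropolis-Hastings sampling of a two-parameter lineshape posterior.
import numpy as np

rng = np.random.default_rng(5)
freq = np.linspace(-5, 5, 200)

def lineshape(theta):
    center, width = theta
    return np.exp(-0.5 * ((freq - center) / width) ** 2)   # placeholder model

data = lineshape((0.4, 1.3)) + rng.normal(0, 0.05, freq.size)
sigma = 0.05                                 # assumed noise level

def log_post(theta):
    if theta[1] <= 0:
        return -np.inf                       # flat prior with width > 0
    return -0.5 * np.sum((data - lineshape(theta)) ** 2) / sigma**2

theta = np.array([0.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.05, 2)    # symmetric random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5_000:])          # discard burn-in
print(samples.mean(axis=0), samples.std(axis=0))   # estimates + uncertainties
```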
Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler Problem
1999-12-01
solution. The non-linear least squares model is defined as Y = f(θ, t), where θ is an M-element parameter vector, Y is an N-element vector of all data, and t ...
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method in which the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
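The projection step is directly available in NumPy; a sketch of fitting and filtering a noisy exponential in Legendre space (parameter retrieval from the coefficients follows the paper and is not shown here):

```python
# Represent a noisy exponential in a low-order Legendre basis and reconstruct;
# the truncation acts as a near phase-free lowpass filter.
import numpy as np
from numpy.polynomial import legendre as L

t = np.linspace(0, 5, 500)
y = 2.0 * np.exp(-t / 1.2) + np.random.default_rng(6).normal(0, 0.05, t.size)

u = 2 * t / t[-1] - 1                     # map [0, T] onto [-1, 1]
coef = L.legfit(u, y, deg=8)              # least-squares Legendre coefficients
y_smooth = L.legval(u, coef)              # low-dimensional reconstruction

# RMS error of the filtered signal against the noise-free exponential:
print(np.sqrt(np.mean((y_smooth - 2.0 * np.exp(-t / 1.2)) ** 2)))
```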
[Quantum efficiency of detection as a quality parameter of visualization equipment].
Morgun, O N; Nemchenko, K E; Rogov, Iu V
2003-01-01
The paper defines the critical parameter known as the detective quantum efficiency (DQE). Different methods of specifying the DQE are discussed, including techniques for determining the DQE of a unit as a whole and means of finding the DQE at the limiting spatial frequency. The notion of the DQE at zero frequency is the focus of particular attention. Finally, difficulties occurring in determining this parameter, as well as its disadvantages as a parameter characterizing the quality of X-ray imaging systems, are also discussed.
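For reference, the standard definition underlying this discussion, with f the spatial frequency:

```latex
\mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)},
\qquad 0 \le \mathrm{DQE}(f) \le 1 ,
```

so an ideal detector has DQE = 1 and every real system degrades the signal-to-noise ratio to some degree.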
Effects of space environment on composites: An analytical study of critical experimental parameters
NASA Technical Reports Server (NTRS)
Gupta, A.; Carroll, W. F.; Moacanin, J.
1979-01-01
A generalized methodology currently employed at JPL was used to develop an analytical model for the effects of high-energy electrons and for interactions between electron and ultraviolet effects. Chemical kinetic concepts were applied in defining quantifiable parameters, and the need for determining short-lived transient species and their concentrations was demonstrated. The results demonstrate a systematic and cost-effective means of addressing the issues and show applicable qualitative and quantitative relationships between space radiation and simulation parameters. An equally important result is the identification of critical initial experiments necessary to further clarify these relationships. Topics discussed include facility and test design; rastered vs. diffuse continuous e-beam; valid acceleration level; simultaneous vs. sequential exposure to different types of radiation; and interruption of test continuity.
NASA Technical Reports Server (NTRS)
Sulyma, P. R.; Penny, M. M.
1978-01-01
A base pressure data correlation study was conducted to define exhaust plume similarity parameters for use in Space Shuttle power-on launch vehicle aerodynamic test programs. Data correlations were performed for single bodies having, respectively, single and triple nozzle configurations, and for a triple-body configuration with single nozzles on each of the outside bodies. Base pressure similarity parameters were found to differ for the single-nozzle and triple-nozzle configurations; however, the correlation parameter for each was found to be a strong function of the nozzle exit momentum. Results of the data base evaluation are presented, indicating an assessment of all data points. Analytical/experimental data comparisons were made for nozzle calibrations, and correction factors were derived, where indicated, for use in nozzle exit plane data calculations.
Geometry of matrix product states: Metric, parallel transport, and curvature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haegeman, Jutho, E-mail: jutho.haegeman@gmail.com; Verstraete, Frank; Faculty of Physics and Astronomy, University of Ghent, Krijgslaan 281 S9, 9000 Gent
2014-02-15
We study the geometric properties of the manifold of states described as (uniform) matrix product states. Due to the parameter redundancy in the matrix product state representation, matrix product states have the mathematical structure of a (principal) fiber bundle. The total space or bundle space corresponds to the parameter space, i.e., the space of tensors associated to every physical site. The base manifold is embedded in Hilbert space and can be given the structure of a Kähler manifold by inducing the Hilbert space metric. Our main interest is in the states living in the tangent space to the base manifold, which have recently been shown to be interesting in relation to time dependence and elementary excitations. By lifting these tangent vectors to the tangent space of the bundle space using a well-chosen prescription (a principal bundle connection), we can define and efficiently compute an inverse metric, and introduce differential geometric concepts such as parallel transport (related to the Levi-Civita connection) and the Riemann curvature tensor.
Latent resonance in tidal rivers, with applications to River Elbe
NASA Astrophysics Data System (ADS)
Backhaus, Jan O.
2015-11-01
We describe a systematic investigation of resonance in tidal rivers and of river oscillations influenced by resonance; that is, we explore the grey zone between absent and fully developed resonance. The data for this study are the results of a one-dimensional numerical channel model applied to a four-dimensional parameter space comprising geometry, i.e. length and depth of rivers, and varying dissipation and forcing. Similarity between real rivers and channels from the parameter space is obtained with the help of a 'run-time depth'. We present a model channel which reproduces the tidal oscillations of the River Elbe in Hamburg, Germany with an accuracy of a few centimetres. The parameter space contains resonant regions and regions with 'latent resonance'. The latter denotes tidal oscillations that are elevated yet not in full but juvenile resonance. Dissipation reduces the amplitudes of resonance while creating latent resonance; that is, energy of resonance radiates into areas of parameter space where the periods of the Eigen-oscillations are well separated from the period of the forcing tide. Increased forcing enhances this redistribution of resonance in parameter space. The River Elbe is diagnosed as being in a state of anthropogenic latent resonance as a consequence of ongoing deepening by dredging. Deepening the river, in conjunction with the expected sea level rise, will inevitably cause increasing tidal ranges. As a rule of thumb, we found that 1 m of deepening would cause a 0.5 m increase in tidal range.
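A useful back-of-envelope anchor for such Eigen-periods is the quarter-wavelength criterion for a frictionless channel of length L and depth h (a standard estimate, not the paper's full four-dimensional model):

```latex
T_{\mathrm{res}} \;\approx\; \frac{4L}{\sqrt{g h}} ,
```

i.e. the channel resonates when its length is a quarter of the shallow-water tidal wavelength. Deepening raises the wave speed sqrt(gh) and shifts the resonance period, which is one way a river can be dredged toward (latent) resonance with the forcing tide.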
Fusion of AIRSAR and TM Data for Parameter Classification and Estimation in Dense and Hilly Forests
NASA Technical Reports Server (NTRS)
Moghaddam, Mahta; Dungan, J. L.; Coughlan, J. C.
2000-01-01
The expanded remotely sensed data space consisting of coincident radar backscatter and optical reflectance data provides for a more complete description of the Earth surface. This is especially useful where many parameters are needed to describe a certain scene, such as in the presence of dense and complex-structured vegetation or where there is considerable underlying topography. The goal of this paper is to use a combination of radar and optical data to develop a methodology for parameter classification for dense and hilly forests, and further, class-specific parameter estimation. The area to be used in this study is the H. J. Andrews Forest in Oregon, one of the Long-Term Ecological Research (LTER) sites in the US. This area consists of various dense old-growth conifer stands, and contains significant topographic relief. The Andrews forest has been the subject of many ecological studies over several decades, resulting in an abundance of ground measurements. Recently, biomass and leaf-area index (LAI) values for approximately 30 reference stands have also become available which span a large range of those parameters. The remote sensing data types to be used are the C-, L-, and P-band polarimetric radar data from the JPL airborne SAR (AIRSAR), the C-band single-polarization data from the JPL topographic SAR (TOPSAR), and the Thematic Mapper (TM) data from Landsat, all acquired in late April 1998. The total number of useful independent data channels from the AIRSAR is 15 (three frequencies, each with three unique polarizations and amplitude and phase of the like-polarized correlation), from the TOPSAR is 2 (amplitude and phase of the interferometric correlation), and from the TM is 6 (the thermal band is not used). The range pixel spacing of the AIRSAR is 3.3m for C- and L-bands and 6.6m for P-band. The TOPSAR pixel spacing is 10m, and the TM pixel size is 30m. To achieve parameter classification, first a number of parameters are defined which are of interest to ecologists for forest process modeling. These parameters include total biomass, leaf biomass, LAI, and tree height. The remote sensing data from radar and TM are used to formulate a multivariate analysis problem given the ground measurements of the parameters. Each class of each parameter is defined by a probability density function (pdf), the spread of which defines the range of that class. High classification accuracy results from situations in which little overlap occurs between pdfs. Classification results provide the basis for the future work of class-specific parameter estimation using radar and optical data. This work was performed in part by the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, and in part by the NASA Ames Research Center, Moffett Field, CA, both under contract from the National Aeronautics and Space Administration.
The Simpsons program 6-D phase space tracking with acceleration
NASA Astrophysics Data System (ADS)
Machida, S.
1993-12-01
A particle tracking code, Simpsons, which tracks particles in 6-D phase space including energy ramping, has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as in real machines, unlike existing tracking codes for synchrotrons, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as a function of time are read from an input file defining a machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
Concept for an International Standard related to Space Weather Effects on Space Systems
NASA Astrophysics Data System (ADS)
Tobiska, W. Kent; Tomky, Alyssa
There is great interest in developing an international standard related to space weather in order to specify the tools and parameters needed for space systems operations. In particular, a standard is important for satellite operators who may not be familiar with space weather. In addition, there are others who participate in space systems operations that would also benefit from such a document. For example, the developers of software systems that provide LEO satellite orbit determination, radio communication availability for scintillation events (GEO-to-ground L and UHF bands), GPS uncertainties, and the radiation environment from ground-to-space for commercial space tourism. These groups require recent historical data, current epoch specification, and forecast of space weather events into their automated or manual systems. Other examples are national government agencies that rely on space weather data provided by their organizations such as those represented in the International Space Environment Service (ISES) group of 14 national agencies. Designers, manufacturers, and launchers of space systems require real-time, operational space weather parameters that can be measured, monitored, or built into automated systems. Thus, a broad scope for the document will provide a useful international standard product to a variety of engineering and science domains. The structure of the document should contain a well-defined scope, consensus space weather terms and definitions, and internationally accepted descriptions of the main elements of space weather, its sources, and its effects upon space systems. Appendices will be useful for describing expanded material such as guidelines on how to use the standard, how to obtain specific space weather parameters, and short but detailed descriptions such as when best to use some parameters and not others; appendices provide a path for easily updating the standard since the domain of space weather is rapidly changing with new advances in scientific and engineering understanding. We present a draft outline that can be used as the basis for such a standard.
Parameter Space of the Columbia River Estuarine Turbidity Maxima
NASA Astrophysics Data System (ADS)
McNeil, C. L.; Shcherbina, A.; Lopez, J.; Karna, T.; Baptista, A. M.; Crump, B. C.; Sanford, T. B.
2016-12-01
We present observations of estuarine turbidity maxima (ETM) in the North Channel of the Columbia River estuary (OR and WA, USA) covering different river discharge and flood tide conditions. Measurements were made using optical backscattering sensors on two REMUS-100 autonomous underwater vehicles (AUVs) during spring 2012, summer 2013, and fall 2012. Although significant short-term variability in AUV-measured optical backscatter was observed, some clustering of the data occurs around the estuarine regimes defined by a mixing parameter and a freshwater Froude number (Geyer & MacCready, 2014). Similar clustering is observed in long-term time series of turbidity from the SATURN observatory. We will use available measurements and numerical model simulations of suspended sediment to further explore the variability of suspended sediment dynamics within a framework of estuarine parameter space.
The concept of physical surface in nuclear matter
NASA Astrophysics Data System (ADS)
Mazilu, Nicolae; Agop, Maricel
2015-02-01
The main point of a physical definition of surface forces in matter in general, and especially in nuclear matter, is that the curvature of surfaces and its variation should be physically defined; the forces are then just the vehicles for introducing the physics. The problem of the mathematical definition of a surface in terms of curvature parameters thus naturally occurs. The present work addresses this problem in terms of the asymptotic directions of a surface at a point. A physical meaning of these parameters is given, first in terms of inertial forces, then in terms of a differential theory of colors, whereby the space of curvature parameters is identified with the color space. The work concludes with an image of the evolution of a local portion of a surface.
The concept of temperature in space plasmas
NASA Astrophysics Data System (ADS)
Livadiotis, G.
2017-12-01
Independently of the initial distribution function, once a system is thermalized, its particles are stabilized into a specific distribution function parametrized by a temperature. Classical particle systems in thermal equilibrium have their phase-space distribution stabilized into a Maxwell-Boltzmann function. In contrast, space plasmas are particle systems frequently described by stationary states out of thermal equilibrium; that is, their distribution is stabilized into a function typically described by kappa distributions. The temperature is well defined for systems at thermal equilibrium or in stationary states described by kappa distributions. This is based on the equivalence of the two fundamental definitions of temperature: (i) the kinetic definition of Maxwell (1866) and (ii) the thermodynamic definition of Clausius (1862). This equivalence holds for both Maxwellians and kappa distributions, leading also to the equipartition theorem. The temperature and kappa index (together with the density) are globally independent parameters characterizing the kappa distribution. While there is no equation of state or any universal relation connecting these parameters, various local relations may exist along the streamlines of space plasmas. Observations have revealed several types of such local relations among plasma thermal parameters.
NASA Technical Reports Server (NTRS)
Thurmond, Beverly A.; Gillan, Douglas J.; Perchonok, Michele G.; Marcus, Beth A.; Bourland, Charles T.
1986-01-01
A team of engineers and food scientists from NASA, the aerospace industry, food companies, and academia are defining the Space Station Food System. The team identified the system requirements based on an analysis of past and current space food systems, food systems from isolated environment communities that resemble Space Station, and the projected Space Station parameters. The team is resolving conflicts among requirements through the use of trade-off analyses. The requirements will give rise to a set of specifications which, in turn, will be used to produce concepts. Concept verification will include testing of prototypes, both in 1-g and microgravity. The end-item specification provides an overall guide for assembling a functional food system for Space Station.
Defining clusters in APT reconstructions of ODS steels.
Williams, Ceri A; Haley, Daniel; Marquis, Emmanuelle A; Smith, George D W; Moody, Michael P
2013-09-01
Oxide nanoclusters in a consolidated Fe-14Cr-2W-0.3Ti-0.3Y₂O₃ ODS steel and in the alloy powder after mechanical alloying (but before consolidation) are investigated by atom probe tomography (APT). The maximum separation method is a standard method to define and characterise clusters from within APT data, but this work shows that the extent of clustering between the two materials is sufficiently different that the nanoclusters in the mechanically alloyed powder and in the consolidated material cannot be compared directly using the same cluster selection parameters. As the cluster selection parameters influence the size and composition of the clusters significantly, a procedure to optimise the input parameters for the maximum separation method is proposed by sweeping the d_max and N_min parameter space. By applying this method of cluster parameter selection combined with a 'matrix correction' to account for trajectory aberrations, differences in the oxide nanoclusters can then be reliably quantified.
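For readers unfamiliar with the technique, the maximum separation method reduces to a friends-of-friends clustering with two knobs, d_max and N_min. A minimal sketch of the sweep described above, assuming solute-atom coordinates have already been extracted from the reconstruction (values are illustrative, not the paper's):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def max_separation_clusters(solute_xyz, dmax, nmin):
    """Label solute atoms: atoms linked by spacings < dmax form a cluster,
    and clusters with fewer than nmin atoms are relabelled as matrix (-1)."""
    n = len(solute_xyz)
    pairs = np.array(sorted(cKDTree(solute_xyz).query_pairs(r=dmax)))
    if pairs.size == 0:
        return -np.ones(n, dtype=int)
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    sizes = np.bincount(labels)
    labels[sizes[labels] < nmin] = -1      # discard sub-threshold clusters
    return labels

# the proposed optimisation sweeps this (d_max, N_min) space, e.g.:
# for dmax in np.arange(0.3, 1.0, 0.05):          # nm
#     for nmin in range(5, 50, 5):
#         labels = max_separation_clusters(xyz_nm, dmax, nmin)
```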
Implications of Network Topology on Stability
Kinkhabwala, Ali
2015-01-01
In analogy to chemical reaction networks, I demonstrate the utility of expressing the governing equations of an arbitrary dynamical system (interaction network) as sums of real functions (generalized reactions) multiplied by real scalars (generalized stoichiometries) for analysis of its stability. The reaction stoichiometries and first derivatives define the network’s “influence topology”, a signed directed bipartite graph. Parameter reduction of the influence topology permits simplified expression of the principal minors (sums of products of non-overlapping bipartite cycles) and Hurwitz determinants (sums of products of the principal minors or the bipartite cycles directly) for assessing the network’s steady state stability. Visualization of the Hurwitz determinants over the reduced parameters defines the network’s stability phase space, delimiting the range of its dynamics (specifically, the possible numbers of unstable roots at each steady state solution). Any further explicit algebraic specification of the network will project onto this stability phase space. Stability analysis via this hierarchical approach is demonstrated on classical networks from multiple fields. PMID:25826219
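As context for the Hurwitz-determinant machinery above, the following minimal sketch evaluates the classical Routh-Hurwitz determinants that the paper's bipartite-cycle expansions compute, assuming a numeric Jacobian for a toy two-species network; the cycle-based parameter reduction itself is not reproduced here.

```python
import sympy as sp

def hurwitz_determinants(J):
    """Leading principal minors of the Hurwitz matrix of det(sI - J);
    all positive implies all roots have negative real part (stability)."""
    s = sp.symbols('s')
    n = J.shape[0]
    coeffs = J.charpoly(s).all_coeffs()        # [a0, a1, ..., an], a0 = 1
    H = sp.zeros(n, n)
    for i in range(n):
        for j in range(n):
            k = 2 * j - i + 1                  # H[i, j] = a_{2(j+1)-(i+1)}
            if 0 <= k <= n:
                H[i, j] = coeffs[k]
    return [H[:m, :m].det() for m in range(1, n + 1)]

J = sp.Matrix([[-2, 1], [1, -3]])              # toy interaction-network Jacobian
print(hurwitz_determinants(J))                 # [5, 25]: all positive -> stable
```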
Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squared sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method in which the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters. PMID:24603904
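A minimal sketch of the idea, assuming a single exponential A·exp(-t/τ) sampled on [0, T]: the data are compressed to a handful of Legendre coefficients (via numpy's legfit) and the model is fitted to those coefficients rather than to every time point. The record length, order K, and noise level are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.optimize import curve_fit

T, K = 1.0, 8                              # record length, Legendre order
t = np.linspace(0.0, T, 2000)
x = 2.0 * t / T - 1.0                      # map time onto [-1, 1]

def model_coeffs(xgrid, A, tau):
    """Legendre coefficients of the model A*exp(-t/tau) on the same grid."""
    return L.legfit(xgrid, A * np.exp(-t / tau), K)

rng = np.random.default_rng(0)
data = 3.0 * np.exp(-t / 0.2) + 0.3 * rng.standard_normal(t.size)
c_data = L.legfit(x, data, K)              # 2000 samples -> K+1 numbers

popt, _ = curve_fit(model_coeffs, x, c_data, p0=(1.0, 0.1))
print(popt)                                # approx. [3.0, 0.2]
```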
Three-dimensional desirability spaces for quality-by-design-based HPLC development.
Mokhtar, Hatem I; Abdel-Salam, Randa A; Hadad, Ghada M
2015-04-01
In this study, three-dimensional desirability spaces were introduced as a graphical representation method for design space. This was illustrated in the context of applying quality-by-design concepts to the development of a stability-indicating gradient reversed-phase high-performance liquid chromatography method for the determination of vinpocetine and α-tocopheryl acetate in a capsule dosage form. A mechanistic retention model to optimize gradient time, initial organic solvent concentration and ternary solvent ratio was constructed for each compound from six experimental runs. Then, the desirability function of each optimized criterion, and subsequently the global desirability function, were calculated throughout the knowledge space. The three-dimensional desirability spaces were plotted as zones exceeding a threshold value of the desirability index in the space defined by the three optimized method parameters. Probabilistic mapping of the desirability index aided selection of the design space within the potential desirability subspaces. Three-dimensional desirability spaces offered better visualization and potential design spaces for the method as a function of three method parameters, with the ability to assign priorities to each critical quality, as compared with the corresponding resolution spaces.
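As an illustration of how such a global desirability space is assembled, a minimal sketch follows, assuming Derringer-Suich-style ramp desirabilities and a toy stand-in for the fitted retention model; the criteria, ranges, and 0.8 threshold are assumptions, not values from the paper.

```python
import numpy as np

def d_larger_is_better(y, lo, hi):
    """Desirability ramp: 0 below lo, 1 above hi, linear in between."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def predict_resolution(tg, phi0, ratio):
    """Toy stand-in for the fitted mechanistic retention model."""
    return 3.0 - 0.03 * phi0 + 0.04 * tg - 1.0 * np.abs(ratio - 0.5)

tg, phi0, ratio = np.meshgrid(np.linspace(5, 30, 40),    # gradient time (min)
                              np.linspace(20, 60, 40),   # initial %B
                              np.linspace(0, 1, 40),     # ternary solvent ratio
                              indexing='ij')

res = predict_resolution(tg, phi0, ratio)
speed = 1.0 / tg
D = (d_larger_is_better(res, 1.5, 2.5) *
     d_larger_is_better(speed, 1 / 30, 1 / 5)) ** 0.5    # geometric mean

design_space = D >= 0.8      # 3-D desirability space as a boolean mask
```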
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. Regions of poor configuration space overlap are detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σ_ij and ε_ij in TIP3P water. We also compute the entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states, and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
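A minimal sketch of the linear-basis-function trick, assuming a potential of the form U(x; λ) = Σᵢ λᵢ hᵢ(x): once the basis values hᵢ(x) are stored for each snapshot, the reduced energies for any grid of untested parameter combinations reduce to a single matrix product, which is what makes sweeps of ~130,000 combinations affordable. The arrays here are random placeholders standing in for per-snapshot basis evaluations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_snapshots, n_basis = 1000, 3
H = rng.standard_normal((n_snapshots, n_basis))   # h_i(x) stored per snapshot

# a small grid of (lam_1, lam_2, lam_3); a paper-scale sweep (~130,000
# combinations) would simply process chunks of such grids
lam_grid = np.stack(np.meshgrid(np.linspace(0.1, 1.0, 10),
                                np.linspace(0.1, 1.0, 10),
                                np.linspace(-2.0, 2.0, 20),
                                indexing='ij'), axis=-1).reshape(-1, 3)

U = H @ lam_grid.T        # (1000, 2000): energy of every snapshot at every combo
print(U.shape)            # these reduced-potential matrices are what MBAR reweights
```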
Local divergence and curvature divergence in first order optics
NASA Astrophysics Data System (ADS)
Mafusire, Cosmas; Krüger, Tjaart P. J.
2018-06-01
The far-field divergence of a light beam propagating through a first order optical system is presented as the square root of the sum of the squares of the local divergence and the curvature divergence. The local divergence is defined as the ratio of the beam parameter product to the beam width, whilst the curvature divergence is the ratio of the space-angular moment to the beam width. It is established that the beam's focusing parameter can be defined as the ratio of the local divergence to the curvature divergence. The relationships between the two divergences and other second-moment-based beam parameters are presented, together with their various mathematical properties, such as their evolution through first order systems. The efficacy of the model in the analysis of high power continuous wave laser-based welding systems is briefly discussed.
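In symbols, writing w for the beam width, BPP for the beam parameter product, and M for the space-angular moment (notation assumed here, consistent with the definitions in the abstract):

```latex
\theta_\infty = \sqrt{\theta_L^2 + \theta_C^2}, \qquad
\theta_L = \frac{\mathrm{BPP}}{w}, \qquad
\theta_C = \frac{M}{w}, \qquad
F = \frac{\theta_L}{\theta_C}
```

where θ_L is the local divergence, θ_C the curvature divergence, and F the focusing parameter.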
Two statistics for evaluating parameter identifiability and error reduction
Doherty, John; Hunt, Randall J.
2009-01-01
Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations, which, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in the estimation of a parameter from its pre-calibration level, where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero, and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics.
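The first statistic has a compact linear-algebra form. A minimal sketch, assuming a weighted sensitivity matrix J (rows = observations, columns = parameters) and a solution-space dimension k chosen from the singular-value spectrum:

```python
import numpy as np

def identifiability(J, k):
    """Direction cosine between each parameter axis and its projection onto the
    calibration solution space (span of the first k right singular vectors)."""
    _, _, Vt = np.linalg.svd(J, full_matrices=False)
    Vk = Vt[:k, :]                           # (k, n_par)
    return np.sqrt((Vk ** 2).sum(axis=0))    # in [0, 1] for each parameter

rng = np.random.default_rng(2)
J = rng.standard_normal((100, 6))
J[:, 5] = 0.0                                # a parameter no observation senses
print(identifiability(J, k=4))               # last entry -> 0 (non-identifiable)
```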
NASA Technical Reports Server (NTRS)
Over, Ann P.; Barrett, Michael J.; Reinhart, Richard C.; Free, James M.; Cikanek, Harry A., III
2011-01-01
The Communication Navigation and Networking Reconfigurable Testbed (CoNNeCT) is a NASA-sponsored mission which will investigate the use of Software Defined Radios (SDRs) as a multi-function communication system for space missions. A software-defined radio system is a communication system in which typical components of the system (e.g., modulators) are implemented in software. The software-defined capability allows flexibility and experimentation with different modulation, coding and other parameters to understand their effects on performance. This capability builds inherent flexibility and redundancy into the system for improved operational efficiency, real-time changes to space missions and enhanced reliability. The CoNNeCT Project is a collaboration between industrial radio providers and NASA. The industrial radio providers are providing the SDRs and NASA is designing, building and testing the entire flight system. The flight system will be integrated on the Express Logistics Carrier (ELC) on the International Space Station (ISS) after launch on the H-IIB Transfer Vehicle in 2012. This paper provides an overview of the technology research objectives, payload description, design challenges and pre-flight testing results.
Advanced protein crystal growth programmatic sensitivity study
NASA Technical Reports Server (NTRS)
1992-01-01
The purpose of this study is to define the costs of various APCG (Advanced Protein Crystal Growth) program options and to determine which parameters, if changed, impact the costs and goals of the programs, and to what extent. This was accomplished by developing and evaluating several alternative programmatic scenarios for the microgravity Advanced Protein Crystal Growth program, transitioning from the present shuttle activity to the man-tended Space Station and then to the permanently manned Space Station. These scenarios include selected variations in such sensitivity parameters as development and operational costs, schedules, technology issues, and crystal growth methods. This final report provides information that will aid in planning the Advanced Protein Crystal Growth Program.
Fractal Properties of Some Machined Surfaces
NASA Astrophysics Data System (ADS)
Thomas, T. R.; Rosén, B.-G.
Many surface profiles are self-affine fractals defined by fractal dimension D and topothesy Λ. Traditionally these parameters are derived laboriously from the slope and intercept of the profile's structure function. Recently a quicker and more convenient derivation from standard roughness parameters has been suggested. Based on this derivation, it is shown that D and Λ depend on two dimensionless numbers: the ratio of the mean peak spacing to the rms roughness, and the ratio of the mean local peak spacing to the sampling interval. Using this approach, values of D and Λ are calculated for 125 profiles produced by polishing, plateau honing and various single-point machining processes. Different processes are shown to occupy different regions in D-Λ space, and polished surfaces show a relationship between D and Λ which is independent of the surface material.
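A minimal sketch of the traditional structure-function route that the abstract contrasts with, assuming the common self-affine form S(τ) = Λ^(2D−2) τ^(4−2D), so that D follows from the log-log slope and Λ from the intercept; the synthetic profile is a stand-in for a measured trace.

```python
import numpy as np

def fractal_parameters(z, dx, max_lag=200):
    """Estimate (D, Lambda) from the slope/intercept of the structure function."""
    lags = np.arange(1, max_lag)
    S = np.array([np.mean((z[m:] - z[:-m]) ** 2) for m in lags])
    slope, intercept = np.polyfit(np.log(lags * dx), np.log(S), 1)
    D = 2.0 - slope / 2.0                       # slope = 4 - 2D
    Lam = np.exp(intercept / (2.0 * D - 2.0))   # intercept = (2D - 2) ln(Lambda)
    return D, Lam

# synthetic Brownian-like profile (expected D close to 1.5)
rng = np.random.default_rng(3)
z = np.cumsum(rng.standard_normal(10000)) * 1e-3
print(fractal_parameters(z, dx=1e-6))
```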
VLBI-based Products - Naval Oceanography Portal
Very Long Baseline Interferometry (VLBI) products from the observing antennas that define a VLBI-based Terrestrial Reference Frame (TRF) are used to realize terrestrial reference frames and to predict the variable orientation of the Earth in three-dimensional space (the Earth Orientation Parameters).
A Graphical Approach to the Standard Principal-Agent Model.
ERIC Educational Resources Information Center
Zhou, Xianming
2002-01-01
States the principal-agent theory is difficult to teach because of its technical complexity and intractability. Indicates the equilibrium in the contract space is defined by the incentive parameter and insurance component of pay under a linear contract. Describes a graphical approach that students with basic knowledge of algebra and…
Effects of Space Flight on Rodent Tissues
NASA Technical Reports Server (NTRS)
Worgul, Basil V.
1997-01-01
As the inevitable expression of mankind's search for knowledge continues into space, the potential acute and long-term effects of space flight on human health must be fully appreciated. Despite its critical role, relatively little is known regarding the effects of the space environment on the ocular system. Our proposed studies were aimed at determining whether or not space flight causes discernible disruption of the genomic integrity, cell kinetics, cytoarchitecture and other cytological parameters in the eye. Because of its defined and singular biology, our main focus was on the lens and possible changes associated with its primary pathology, cataract. We also hoped to explore the possible effect of space flight on the preferred orientation of dividing cells in the perilimbal region of the conjunctiva and cornea.
Statistical physics of the symmetric group.
Williams, Mobolaji
2017-04-01
Ordered chains (such as chains of amino acids) are ubiquitous in biological cells, and these chains perform specific functions contingent on the sequence of their components. Using the existence and general properties of such sequences as a theoretical motivation, we study the statistical physics of systems whose state space is defined by the possible permutations of an ordered list, i.e., the symmetric group, and whose energy is a function of how certain permutations deviate from some chosen correct ordering. Such a nonfactorizable state space is quite different from the state spaces typically considered in statistical physics systems and consequently has novel behavior in systems with interacting and even noninteracting Hamiltonians. Various parameter choices of a mean-field model reveal the system to contain five different physical regimes defined by two transition temperatures, a triple point, and a quadruple point. Finally, we conclude by discussing how the general analysis can be extended to state spaces with more complex combinatorial properties and to other standard questions of statistical mechanics models.
Resolving structural influences on water-retention properties of alluvial deposits
Winfield, K.A.; Nimmo, J.R.; Izbicki, J.A.; Martin, P.M.
2006-01-01
With the goal of improving property-transfer model (PTM) predictions of unsaturated hydraulic properties, we investigated the influence of sedimentary structure, defined as particle arrangement during deposition, on laboratory-measured water retention (water content vs. matric potential [θ(ψ)]) of 10 undisturbed core samples from alluvial deposits in the western Mojave Desert, California. The samples were classified as having fluvial or debris-flow structure based on observed stratification and measured spread of particle-size distribution. The θ(ψ) data were fit with the Rossi-Nimmo junction model, representing water retention with three parameters: the maximum water content (θ_max), the ψ-scaling parameter (ψ_o), and the shape parameter (λ). We examined trends between these hydraulic parameters and bulk physical properties, both textural (geometric mean, M_g, and geometric standard deviation, σ_g, of particle diameter) and structural (bulk density, ρ_b; the fraction of unfilled pore space at natural saturation, A_e; and a porosity-based randomness index, φ_s, defined as the excess of total porosity over 0.3). Structural parameters φ_s and A_e were greater for fluvial samples, indicating greater structural pore space and a possibly broader pore-size distribution associated with a more systematic arrangement of particles. Multiple linear regression analysis and Mallows' Cp statistic identified combinations of textural and structural parameters for the most useful predictive models: for θ_max, including A_e, φ_s, and σ_g, and for both ψ_o and λ, including only textural parameters, although use of A_e can somewhat improve ψ_o predictions. Textural properties can explain most of the sample-to-sample variation in θ(ψ) independent of deposit type, but inclusion of the simple structural indicators A_e and φ_s can improve PTM predictions, especially for the wettest part of the θ(ψ) curve.
Ohashi, Hidenori; Tamaki, Takanori; Yamaguchi, Takeo
2011-12-29
Molecular collisions, which are the microscopic origin of molecular diffusive motion, are affected by both the molecular surface area and the distance between molecules. Their product can be regarded as the free space around a penetrant molecule, defined as the "shell-like free volume", and can be taken as a characteristic of molecular collisions. On the basis of this notion, a new diffusion theory has been developed. The model can predict molecular diffusivity in polymeric systems using only well-defined single-component parameters: molecular volume, molecular surface area, free volume, and pre-exponential factors. In the physical description of the model, the actual moving body and the neighboring molecules it collides with are characterized by the volume and the surface area of the penetrant molecular core. In the present study, a semiempirical quantum chemical calculation was used to calculate both of these parameters. The model and the newly developed parameters offer fairly good predictive ability.
Space Life Sciences at NASA: Spaceflight Health Policy and Standards
NASA Technical Reports Server (NTRS)
Davis, Jeffrey R.; House, Nancy G.
2006-01-01
In January 2004, the President proposed a new initiative, the Vision for Space Exploration. To accomplish the goals within the Vision for Space Exploration, physicians and researchers at Johnson Space Center are establishing spaceflight health standards. These standards include fitness-for-duty criteria (FFD), permissible exposure limits (PELs), and permissible outcome limits (POLs). POLs delineate an acceptable maximum decrement or change in a physiological or behavioral parameter as the result of exposure to the space environment. For example, a cardiovascular fitness-for-duty standard might specify a minimum measurable clinical parameter that allows successful performance of all required duties. An example of a permissible exposure limit for radiation might be a quantifiable limit of exposure over a given length of time (e.g., lifetime radiation exposure). An example of a permissible outcome limit might be the length of microgravity exposure that would minimize bone loss. The purpose of spaceflight health standards is to promote operational and vehicle design requirements, aid in medical decision making during space missions, and guide the development of countermeasures. Standards will be based on scientific and clinical evidence, including research findings, lessons learned from previous space missions, studies conducted in space analog environments, current standards of medical practice, risk management data, and expert recommendations. To focus the research community on the needs of exploration missions, NASA has developed the Bioastronautics Roadmap, NASA's approach to the identification of risks to human space flight; its revised baseline was released in February 2005. This document was reviewed by the Institute of Medicine in November 2004, and the final report was received in October 2005. The roadmap defines the most important research and operational needs that will be used to set policy and standards (define acceptable risk) and to implement an overall risk management and analysis process. Currently NASA is drafting spaceflight health standards for neurosensory alterations, space radiation exposure, behavioral health, muscle atrophy, cardiovascular fitness, immunological compromise, bone demineralization, and nutrition.
Biomedical and Human Factors Requirements for a Manned Earth Orbiting Station
NASA Technical Reports Server (NTRS)
Helvey, W.; Martell, C.; Peters, J.; Rosenthal, G.; Benjamin, F.; Albright, G.
1964-01-01
The primary objective of this study is to determine which biomedical and human factors measurements must be made aboard a space station to assure adequate evaluation of the astronaut's health and performance during prolonged space flights. The study has employed, where possible, a medical and engineering systems analysis to define the pertinent life sciences and space station design parameters and their influence on a measurement program. The major areas requiring evaluation in meeting the study objectives include a definition of the space environment, man's response to the environment, selection of measurement and data management techniques, the experimental program, space station design requirements, and a trade-off analysis with final recommendations. The space environment factors believed to have a significant effect on man were evaluated, including those factors characteristic of the space environment (e.g., weightlessness, radiation) as well as those created within the space station (e.g., toxic contaminants, capsule atmosphere). After establishing the general features of the environment, an appraisal was made of the anticipated response of the astronaut to each of these factors. For thoroughness, the major organ systems and functions of the body were delineated, and a determination was made of their anticipated response to each of the environmental categories. A judgment was then made on the medical significance or importance of each response, which enabled a determination of which physiological and psychological effects should be monitored. Concurrently, an extensive list of measurement techniques and methods of data management was evaluated for applicability to the space station program. The various space station configurations and design parameters were defined in terms of the biomedical and human factors requirements of the measurements program. Research designs of experimental programs for various station configurations, mission durations, and crew sizes were prepared, and, finally, a trade-off analysis of the critical variables in station planning was completed, with recommendations to enhance confidence in the measurement program.
Estimating fracture spacing from natural tracers in shale-gas production
NASA Astrophysics Data System (ADS)
Bauer, S. J.; McKenna, S. A.; Heath, J. E.; Gardner, P.
2012-12-01
Resource appraisal and the long-term recovery potential of shale gas rely on the characteristics of the fracture networks created within the formation. Both well testing and analysis of micro-seismic data can provide information on fracture characteristics, but approaches that directly utilize observations of gas transport through the fractures are not well developed. We examine the transport of natural tracers and analyze their breakthrough curves (BTCs) with a multi-rate mass transfer (MMT) model to elucidate fracture characteristics. The focus here is on numerical simulation studies to determine constraints on the ability to accurately estimate fracture network characteristics as a function of the diffusion coefficients of the natural tracers, the number and timing of observations, the flow rates from the well, and the noise in the observations. Traditional tracer testing approaches for dual-porosity systems analyze the BTC of an injected tracer to obtain fracture spacing, considering a single spacing value. An alternative model is the MMT model, in which diffusive mass transfer occurs simultaneously over a range of matrix block sizes defined by a statistical distribution (e.g., log-normal, gamma, or power-law). The goal of the estimation is defining the parameters of the fracture spacing distribution. The MMT model has not yet been applied to the analysis of in situ natural tracers. Natural tracers are omnipresent in the subsurface, potentially obviating the need for introduced tracers, and could be used to improve upon fracture characteristics estimated from pressure transient and decline curve production analysis. Results of this study provide guidance for data collection and analysis of natural tracers in fractured shale formations. Parameter estimation on simulated BTCs will provide guidance on the necessary timing of BTC sampling in field experiments. The MMT model can result in non-unique or nonphysical parameter estimates. We address this with Bayesian estimation approaches that can define uncertainty in estimated parameters as a posterior probability distribution. We will also use Bayesian estimation to examine model identifiability (e.g., selecting between parametric distributions of fracture spacing) from various BTCs. Application of the MMT model to natural tracers and hydraulic fractures in shale will require extension of the model to account for partitioning of the tracers between multiple phases and different mass transfer behavior in mixed gas-liquid (e.g., oil- or groundwater-rich) systems. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
The space shuttle payload planning working groups. Volume 7: Earth observations
NASA Technical Reports Server (NTRS)
1973-01-01
The findings of the Earth Observations working group of the space shuttle payload planning activity are presented. The objectives of the Earth Observation experiments are: (1) establishment of quantitative relationships between observable parameters and geophysical variables, (2) development, test, calibration, and evaluation of eventual flight instruments in experimental space flight missions, (3) demonstration of the operational utility of specific observation concepts or techniques as information inputs needed for taking actions, and (4) deployment of prototype and follow-on operational Earth Observation systems. The basic payload capability, mission duration, launch sites, inclinations, and payload limitations are defined.
NASA Technical Reports Server (NTRS)
Sulyma, P. R.
1980-01-01
The fundamental equations and the definition and application of similarity are described, along with the computational steps of a computer program developed to design model nozzles for wind tunnel tests conducted to define the power-on aerodynamic characteristics of the space shuttle over a range of ascent trajectory conditions. The computer code capabilities, a user's guide for the model nozzle design program, and the output format are examined. A program listing is included.
Space shuttle solid rocket booster recovery system definition, volume 1
NASA Technical Reports Server (NTRS)
1973-01-01
The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
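A minimal sketch of the Monte Carlo idea described above, assuming a slamming-type load that scales with the square of the water-impact velocity and a normally distributed strength capability; all distributions and constants are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
v = rng.normal(25.0, 3.0, n)               # water-impact velocity samples, m/s
strength = rng.normal(5.0e5, 7.0e4, n)     # component strength capability, N
load = 615.0 * v ** 2                      # ~0.5*rho*C_s*v**2 slamming load, N

p_fail = (load > strength).mean()          # failure whenever load exceeds strength
print(f"estimated failure probability: {p_fail:.3f}")
```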
Closing in on singlet scalar dark matter: LUX, invisible Higgs decays and gamma-ray lines
Feng, Lei; Profumo, Stefano; Ubaldi, Lorenzo
2015-03-10
Here, we study the implications of the Higgs discovery and of recent results from dark matter searches on real singlet scalar dark matter. The phenomenology of the model is defined by only two parameters, the singlet scalar mass m_S and the quartic coupling a_2 between the SU(2) Higgs and the singlet scalar. We concentrate on the window 5 < m_S/GeV < 300. The most dramatic impact on the viable parameter space of the model comes from direct dark matter searches with LUX and, for very low masses in the few-GeV range, from constraints from the invisible decay width of the Higgs. In the resonant region the best constraints come from gamma-ray line searches. We show that they leave only a small region of viable parameter space, for dark matter masses within a few percent of half the mass of the Higgs. We demonstrate that direct and indirect dark matter searches (especially the search for monochromatic gamma-ray lines) will play a key role in closing the residual parameter space in the near future.
NASA Astrophysics Data System (ADS)
Bowers, Peter; Rosowski, John J.
2018-05-01
An air-conduction circuit model that will serve as the basis for a model of bone-conduction hearing is developed for chinchilla. The lumped-element model is based on the classic Zwislocki model of the human middle ear. Model parameters are fit to various measurements of chinchilla middle-ear transfer functions and impedances. The model is in agreement with studies of the effects of middle-ear cavity holes in experiments that require access to the middle-ear air space.
Space orientation of total hip prosthesis. A method for three-dimensional determination.
Herrlin, K; Selvik, G; Pettersson, H
1986-01-01
A method for the in vivo determination of the orientation and spatial relation of the components of a total hip prosthesis is described. The method allows for determination of the orientation of the prosthetic components in well-defined anatomic planes of the body. Furthermore, the range of free motion from the neutral position to the point of contact between the edge of the acetabular opening and the neck of the femoral component can be determined in various directions. To assess the accuracy of the calculations, a phantom prosthesis was studied in nine different positions, and the measurements of the space-oriented parameters according to the present method were correlated with measurements of the same parameters according to Selvik's stereophotogrammetric method. Good correlation was found. The role of prosthetic malpositioning and component interaction, evaluated with the present method, in the development of prosthetic loosening and displacement is discussed.
Analysis of screeching in a cold flow jet experiment
NASA Technical Reports Server (NTRS)
Wang, M. E.; Slone, R. M., Jr.; Robertson, J. E.; Keefe, L.
1975-01-01
The screech phenomenon observed in a one-sixtieth scale model space shuttle test of the solid rocket booster exhaust flow noise has been investigated. A critical review is given of the cold flow test data representative of Space Shuttle launch configurations to define those parameters which contribute to screech generation. An acoustic feedback mechanism is found to be responsible for the generation of screech. A simple equation which permits prediction of the screech frequency in terms of basic testing parameters, such as the jet exhaust Mach number and the separation distance from the nozzle exit to the surface of the model launch pad, is presented and is found to be in good agreement with the test data. Finally, techniques are recommended to eliminate or reduce the screech.
Kim, Sang-Woo; Nishimura, Jun; Tsuchiya, Asato
2012-01-06
We reconsider the matrix model formulation of type IIB superstring theory in (9+1)-dimensional space-time. Unlike the previous works in which the Wick rotation was used to make the model well defined, we regularize the Lorentzian model by introducing infrared cutoffs in both the spatial and temporal directions. Monte Carlo studies reveal that the two cutoffs can be removed in the large-N limit and that the theory thus obtained has no parameters other than one scale parameter. Moreover, we find that three out of nine spatial directions start to expand at some "critical time," after which the space has SO(3) symmetry instead of SO(9).
Optimal design of focused experiments and surveys
NASA Astrophysics Data System (ADS)
Curtis, Andrew
1999-10-01
Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be both quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as bulk modulus, κ = λ + 2μ/3.
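One plausible reading of a focused quality measure, given a linearized design: score the information matrix G^T G only after projecting onto the subspace of interest. A minimal sketch follows; the D-optimal-style log-determinant and all matrices are illustrative assumptions, not the paper's specific measure.

```python
import numpy as np

def focused_quality(G, P):
    """Information of design G restricted to span(P) (log-det of projected Gram)."""
    Q, _ = np.linalg.qr(P)                    # orthonormal basis of the subspace
    M = Q.T @ G.T @ G @ Q                     # projected information matrix
    return np.linalg.slogdet(M)[1]

rng = np.random.default_rng(5)
G_a = rng.standard_normal((50, 10))           # candidate design A (sensitivities)
G_b = rng.standard_normal((50, 10))           # candidate design B
P = np.eye(10)[:, :2]                         # care only about parameters 0 and 1
best = max((G_a, G_b), key=lambda G: focused_quality(G, P))
```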
Phase space analysis for anisotropic universe with nonlinear bulk viscosity
NASA Astrophysics Data System (ADS)
Sharif, M.; Mumtaz, Saadia
2018-06-01
In this paper, we discuss the phase space analysis of the locally rotationally symmetric Bianchi type I universe model, taking a noninteracting mixture of dust-like and viscous radiation-like fluid whose viscous pressure satisfies a nonlinear version of the Israel-Stewart transport equation. An autonomous system of equations is established by defining normalized dimensionless variables. In order to investigate the stability of the system, we evaluate the corresponding critical points for different values of the parameters. We also compute the power-law scale factor, whose behavior indicates different phases of the universe model. It is found that our analysis does not provide complete immunity from fine-tuning, because the exponentially expanding solution occurs only for a particular range of parameters. We conclude that stable solutions exist in the presence of a nonlinear model for bulk viscosity with different choices of the constant parameter m for an anisotropic universe.
NASA Astrophysics Data System (ADS)
Colagrossi, Andrea; Lavagna, Michèle
2018-03-01
A space station in the vicinity of the Moon can be exploited as a gateway for future human and robotic exploration of the solar system. The natural location for a space system of this kind is about one of the Earth-Moon libration points. This study addresses the dynamics during rendezvous and docking operations with a very large space infrastructure in an EML2 Halo orbit. The model takes into account the coupling effects between the orbital and the attitude motion in a circular restricted three-body problem environment. The flexibility of the system is included, and the interaction between the modes of the structure and those related to the orbital motion is investigated. A lumped parameter technique is used to represent the flexible dynamics. The parameters of the space station are kept as generic as possible so as to delineate a global scenario for the mission. However, the developed model can be tuned and updated according to the information that will become available in the future, when the whole system is defined with a higher level of precision.
Creating Simulated Microgravity Patient Models
NASA Technical Reports Server (NTRS)
Hurst, Victor; Doerr, Harold K.; Bacal, Kira
2004-01-01
The Medical Operational Support Team (MOST) has been tasked by the Space and Life Sciences Directorate (SLSD) at the NASA Johnson Space Center (JSC) to integrate medical simulation into 1) medical training for ground and flight crews and into 2) evaluations of medical procedures and equipment for the International Space Station (ISS). To do this, the MOST requires patient models that represent the physiological changes observed during spaceflight. Despite the presence of physiological data collected during spaceflight, there is no defined set of parameters that illustrate or mimic a 'space normal' patient. Methods: The MOST culled space-relevant medical literature and data from clinical studies performed in microgravity environments. The areas of focus for data collection were in the fields of cardiovascular, respiratory and renal physiology. Results: The MOST developed evidence-based patient models that mimic the physiology believed to be induced by human exposure to a microgravity environment. These models have been integrated into space-relevant scenarios using a human patient simulator and ISS medical resources. Discussion: Despite the lack of a set of physiological parameters representing 'space normal,' the MOST developed space-relevant patient models that mimic microgravity-induced changes in terrestrial physiology. These models are used in clinical scenarios that will medically train flight surgeons, biomedical flight controllers (biomedical engineers; BME) and, eventually, astronaut-crew medical officers (CMO).
Q-operators for the open Heisenberg spin chain
NASA Astrophysics Data System (ADS)
Frassek, Rouven; Szécsényi, István M.
2015-12-01
We construct Q-operators for the open spin-1/2 XXX Heisenberg spin chain with diagonal boundary matrices. The Q-operators are defined as traces over an infinite-dimensional auxiliary space involving novel types of reflection operators derived from the boundary Yang-Baxter equation. We argue that the Q-operators defined in this way are polynomials in the spectral parameter and show that they commute with the transfer matrix. Finally, we prove that the Q-operators satisfy Baxter's TQ-equation and derive the explicit form of their eigenvalues in terms of the Bethe roots.
Space Shuttle and Space Station Radio Frequency (RF) Exposure Analysis
NASA Technical Reports Server (NTRS)
Hwu, Shian U.; Loh, Yin-Chung; Sham, Catherine C.; Kroll, Quin D.
2005-01-01
This paper outlines the modeling techniques and important parameters needed to define a rigorous but practical procedure that can verify the compliance of RF exposure with the NASA standards for astronauts and electronic equipment. The electromagnetic modeling techniques are applied to analyze RF exposure in Space Shuttle and Space Station environments with reasonable computing time and resources. The modeling techniques are capable of taking into account the field interactions with Space Shuttle and Space Station structures. The obtained results illustrate the multipath effects due to the presence of the space vehicle structures. It is necessary to include the field interactions with the space vehicle in the analysis for an accurate assessment of the RF exposure. Based on the obtained results, RF keep-out zones are identified for appropriate operational scenarios, flight rules and necessary RF transmitter constraints to ensure a safe operating environment and mission success.
NASA Technical Reports Server (NTRS)
Campbell, Anthony B.; Nair, Satish S.; Miles, John B.; Iovine, John V.; Lin, Chin H.
1998-01-01
The present NASA space suit (the Shuttle EMU) is a self-contained environmental control system, providing life support, environmental protection, earth-like mobility, and communications. This study considers the thermal dynamics of the space suit as they relate to astronaut thermal comfort control. A detailed dynamic lumped capacitance thermal model of the present space suit is used to analyze the thermal dynamics of the suit with observations verified using experimental and flight data. Prior to using the model to define performance characteristics and limitations for the space suit, the model is first evaluated and improved. This evaluation includes determining the effect of various model parameters on model performance and quantifying various temperature prediction errors in terms of heat transfer and heat storage. The observations from this study are being utilized in two future design efforts, automatic thermal comfort control design for the present space suit and design of future space suit systems for Space Station, Lunar, and Martian missions.
NASA Astrophysics Data System (ADS)
Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle
2018-05-01
Transmissibility-based operational modal analysis is a recent alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. To overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new way of defining a transmissibility function was proposed, based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters under these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters in an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may potentially be applied to improve the identification of damping ratios.
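For orientation, transmissibility functions are routinely estimated from spectral densities. A minimal sketch, assuming the common cross-to-auto spectral-density ratio with a shared reference channel; the paper's PSD-based definition differs in detail but rests on the same Welch-type estimates.

```python
import numpy as np
from scipy.signal import csd

fs = 1024.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(6)
drive = rng.standard_normal(t.size)                  # broadband excitation
h1 = np.exp(-np.arange(256) / 40.0)                  # two toy response paths
h2 = np.exp(-np.arange(256) / 80.0)
x_i = np.convolve(drive, h1, 'same') + 0.1 * rng.standard_normal(t.size)
x_j = np.convolve(drive, h2, 'same') + 0.1 * rng.standard_normal(t.size)

f, S_ij = csd(x_i, x_j, fs=fs, nperseg=2048)         # cross-spectral density
_, S_jj = csd(x_j, x_j, fs=fs, nperseg=2048)         # auto-spectral density
T_ij = S_ij / S_jj                                   # transmissibility estimate
```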
NASA Astrophysics Data System (ADS)
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g., landscape change). Such models may contain a large number of parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. To help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g., knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state space, and to efficiently search a large, complex parameter space for behavioural parameter sets that produce predictions falling within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g., perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
Looking for the WIMP next door
NASA Astrophysics Data System (ADS)
Evans, Jared A.; Gori, Stefania; Shelton, Jessie
2018-02-01
We comprehensively study experimental constraints and prospects for a class of minimal hidden sector dark matter (DM) models, highlighting how the cosmological history of these models informs the experimental signals. We study simple `secluded' models, where the DM freezes out into unstable dark mediator states, and consider the minimal cosmic history of this dark sector, where coupling of the dark mediator to the SM was sufficient to keep the two sectors in thermal equilibrium at early times. In the well-motivated case where the dark mediators couple to the Standard Model (SM) via renormalizable interactions, the requirement of thermal equilibrium provides a minimal, UV-insensitive, and predictive cosmology for hidden sector dark matter. We call DM that freezes out of a dark radiation bath in thermal equilibrium with the SM a WIMP next door, and demonstrate that the parameter space for such WIMPs next door is sharply defined, bounded, and in large part potentially accessible. This parameter space, and the corresponding signals, depend on the leading interaction between the SM and the dark mediator; we establish it for both Higgs and vector portal interactions. In particular, there is a cosmological lower bound on the portal coupling strength necessary to thermalize the two sectors in the early universe. We determine this thermalization floor as a function of equilibration temperature for the first time. We demonstrate that direct detection experiments are currently probing this cosmological lower bound in some regions of parameter space, while indirect detection signals and terrestrial searches for the mediator cut further into the viable parameter space. We present regions of interest for both direct detection and dark mediator searches, including motivated parameter space for the direct detection of sub-GeV DM.
Deciding about Decision Models of Remember and Know Judgments: A Reply to Murdock (2006)
ERIC Educational Resources Information Center
Macmillan, Neil A.; Rotello, Caren M.
2006-01-01
B. B. Murdock (2006; see record 2006-08257-009) has interpreted remember-know data within a decision space defined by item and associative information, the fundamental variables in his general recognition memory model TODAM (B. B. Murdock, 1982). He has related parameters of this extended model to stimulus characteristics for several classic…
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.
2018-06-01
The observation to observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method is demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
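A minimal sketch of the association step, assuming each admissible region is summarized by a parameterized state curve with an associated covariance; the curves, covariances, and the chi-squared gate standing in for the paper's binary hypothesis test are toy placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

Pa_inv = np.diag(1.0 / np.array([1.0, 1.0, 0.5]))    # inverse covariances (toy)
Pb_inv = np.diag(1.0 / np.array([0.8, 1.2, 0.4]))

def x_a(u):                                          # states along region A
    return np.array([np.cos(u), np.sin(u), 0.3 * u])

def x_b(v):                                          # states along region B
    return np.array([1.0 - 0.1 * v, v, 0.25 * v])

def d2(p):                                           # summed squared Mahalanobis distance
    m = x_a(p[0]) - x_b(p[1])
    return m @ Pa_inv @ m + m @ Pb_inv @ m

res = minimize(d2, x0=np.array([0.5, 0.5]))          # local minimum in (u, v)
associated = res.fun < chi2.ppf(0.997, df=3)         # gate at a chosen false-alarm rate
print(res.x, res.fun, associated)
```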
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semat, M.A.
1960-01-01
The transport and deposition conditions of uraniferous minerals are briefly described. A synthesis of crystallographic, physical, optical, and thermal properties permits defining the main characteristics of this mineralogical group. Tables to facilitate identification of the supergene uranium minerals are given, covering investigation by anion and cation; system, cleavages, cell parameters, interplanar spacings, refractive indices, and optical bearings; classification by decreasing values of the most intense line of the powder diagram; a diagram for the three highest interplanar spacings; and a diagram of the refractive indices. (auth)
Plant and animal accommodation for Space Station Laboratory
NASA Technical Reports Server (NTRS)
Olson, Richard L.; Gustan, Edith A.; Wiley, Lowell F.
1986-01-01
An extended study has been conducted with the goals of defining and analyzing relevant parameters and significant tradeoffs for the accommodation of nonhuman research aboard the NASA Space Station, as well as conducting tradeoff analyses for orbital reconfiguring or reoutfitting of the laboratory facility and developing laboratory designs and program plans. The two items exerting the greatest influence on nonhuman life sciences research were identified as the centrifuge and the specimen environmental control and life support system; both should be installed on the ground rather than in orbit.
Expert-guided optimization for 3D printing of soft and liquid materials.
Abdollahi, Sara; Davis, Alexander; Miller, John H; Feinberg, Adam W
2018-01-01
Additive manufacturing (AM) has rapidly emerged as a disruptive technology to build mechanical parts, enabling increased design complexity, low-cost customization and an ever-increasing range of materials. Yet these capabilities have also created an immense challenge in optimizing the large number of process parameters in order to achieve a high-performance part. This is especially true for AM of soft, deformable materials and for liquid-like resins that require experimental printing methods. Here, we developed an expert-guided optimization (EGO) strategy to provide structure in exploring and improving the 3D printing of liquid polydimethylsiloxane (PDMS) elastomer resin. EGO uses three steps, starting with expert screening to select the parameter space, factors, and factor levels. Second is a hill-climbing algorithm to search the parameter space defined by the expert for the best set of parameters. Third is expert decision making to try new factors or a new parameter space to improve on the best current solution. We applied the algorithm to two calibration objects, a hollow cylinder and a five-sided hollow cube, that were evaluated based on a multi-factor scoring system. The optimum print settings were then used to print complex PDMS and epoxy 3D objects, including a twisted vase, water drop, toe, and ear, at a level of detail and fidelity previously not obtained.
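A minimal sketch of the hill-climbing step (step two of EGO), assuming an expert-screened discrete grid of print settings and a score() stand-in for the multi-factor print-quality rubric; the factors and levels are illustrative.

```python
space = {                                     # expert-screened factors and levels
    'pressure_psi': [5, 10, 15, 20],
    'speed_mm_s':   [1, 2, 4, 8],
    'layer_um':     [100, 200, 300],
}

def score(settings):
    """Toy stand-in for scoring a test print against the multi-factor rubric."""
    return (-abs(settings['pressure_psi'] - 10) - abs(settings['speed_mm_s'] - 4)
            - abs(settings['layer_um'] - 200) / 100)

def hill_climb(space, start, max_iter=100):
    best = dict(start)
    for _ in range(max_iter):
        improved = False
        for key, levels in space.items():     # one-factor-at-a-time neighbours
            i = levels.index(best[key])
            for j in (i - 1, i + 1):
                if 0 <= j < len(levels):
                    cand = dict(best, **{key: levels[j]})
                    if score(cand) > score(best):
                        best, improved = cand, True
        if not improved:                      # local optimum: return to the expert
            break
    return best

print(hill_climb(space, {'pressure_psi': 5, 'speed_mm_s': 1, 'layer_um': 100}))
```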
A pitfall of piecewise-polytropic equation of state inference
NASA Astrophysics Data System (ADS)
Raaijmakers, Geert; Riley, Thomas E.; Watts, Anna L.
2018-05-01
The only messenger radiation in the Universe which one can use to statistically probe the Equation of State (EOS) of cold dense matter is that originating from the near-field vicinities of compact stars. Constraining gravitational masses and equatorial radii of rotating compact stars is a major goal for current and future telescope missions, with a primary purpose of constraining the EOS. From a Bayesian perspective it is necessary to carefully discuss prior definition; in this context a complicating issue is that in practice there exist pathologies in the general relativistic mapping between spaces of local (interior source matter) and global (exterior spacetime) parameters. In a companion paper, these issues were raised on a theoretical basis. In this study we reproduce a probability transformation procedure from the literature in order to map a joint posterior distribution of Schwarzschild gravitational masses and radii into a joint posterior distribution of EOS parameters. We demonstrate computationally that EOS parameter inferences are sensitive to the choice to define a prior on a joint space of these masses and radii, instead of on a joint space of interior source matter parameters. We focus on the piecewise-polytropic EOS model, which is currently standard in the field of astrophysical dense matter study. We discuss the implications of this issue for the field.
Yan, Binjun; Li, Yao; Guo, Zhengtai; Qu, Haibin
2014-01-01
The concept of quality by design (QbD) has been widely accepted and applied in the pharmaceutical manufacturing industry. There are still two key issues to be addressed in the implementation of QbD for herbal drugs. The first issue is the quality variation of herbal raw materials and the second issue is the difficulty in defining the acceptable ranges of critical quality attributes (CQAs). This work proposes a feedforward control strategy and a method for defining the acceptable ranges of CQAs to address these two issues. In a case study of the ethanol precipitation process of Danshen (Radix Salvia miltiorrhiza) injection, regression models linking input material attributes and process parameters to CQAs were built first, and an optimisation model for calculating the best process parameters according to the input materials was established. Then, the feasible material space was defined and the acceptable ranges of CQAs for the previous process were determined. In the case study, satisfactory regression models were built, with cross-validated regression coefficients (Q2) all above 91%. The feedforward control strategy was applied successfully to compensate for the quality variation of the input materials, and was able to control the CQAs within the 90-110% ranges of the desired values. In addition, the feasible material space for the ethanol precipitation process was built successfully, which showed the acceptable ranges of the CQAs for the concentration process. The proposed methodology can help to promote the implementation of QbD for herbal drugs. Copyright © 2013 John Wiley & Sons, Ltd.
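The feedforward idea, fit a regression model from material attributes and process parameters to a CQA, then solve for the process parameters that bring the predicted CQA to target for each new input lot, can be sketched as follows. All data, dimensions, targets, and bounds below are invented for illustration; they are not the Danshen process model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# hypothetical history: 2 material attributes M, 2 process parameters P, one CQA y
M = rng.normal(size=(50, 2))
P = rng.uniform(0.0, 1.0, size=(50, 2))
X = np.column_stack([M, P, np.ones(50)])
y = X @ np.array([0.5, -0.3, 1.2, 0.8, 2.0]) + 0.05 * rng.normal(size=50)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # regression model linking inputs to the CQA

def predicted_cqa(p, m):
    return np.concatenate([m, p, [1.0]]) @ beta

# feedforward step: for a new material lot, choose process parameters
# that compensate its attributes so the predicted CQA hits the target
m_new, target = rng.normal(size=2), 2.0
res = minimize(lambda p: (predicted_cqa(p, m_new) - target) ** 2,
               x0=[0.5, 0.5], bounds=[(0.0, 1.0), (0.0, 1.0)])
print(res.x, predicted_cqa(res.x, m_new))
```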
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1992-01-01
The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities, extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses generalized Lyapunov theory to obtain the bounds.
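The Lyapunov flavor of such one-shot matrix-domain tests can be illustrated for the simplest D-region, the shifted half-plane Re(lambda) < -sigma: A belongs to this region iff A + sigma*I is Hurwitz, which a single Lyapunov solve certifies without frequency sweeping. This is a generic sketch of the idea, not Yedavalli's bound computation for uncertain systems.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def in_shifted_half_plane(A, sigma):
    """Certify that every eigenvalue of A satisfies Re(lambda) < -sigma.

    Solves (A + sigma I)^T P + P (A + sigma I) = -I; the region membership
    holds iff the solution P is positive definite. (The solver assumes no
    pair of eigenvalues of the shifted matrix sums to zero.)
    """
    n = A.shape[0]
    As = A + sigma * np.eye(n)
    P = solve_continuous_lyapunov(As.T, -np.eye(n))
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))

# example: a stable matrix with eigenvalues -1 and -3 clears sigma = 0.5 but not 2
A = np.array([[-1.0, 0.0], [1.0, -3.0]])
print(in_shifted_half_plane(A, 0.5), in_shifted_half_plane(A, 2.0))
```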
Rouiller, Yolande; Solacroup, Thomas; Deparis, Véronique; Barbafieri, Marco; Gleixner, Ralf; Broly, Hervé; Eon-Duval, Alex
2012-06-01
The production bioreactor step of an Fc-Fusion protein manufacturing cell culture process was characterized following Quality by Design principles. Using scientific knowledge derived from the literature and process knowledge gathered during development studies and manufacturing to support clinical trials, potential critical and key process parameters with a possible impact on product quality and process performance, respectively, were determined during a risk assessment exercise. The identified process parameters were evaluated using a design of experiment approach. The regression models generated from the data allowed characterizing the impact of the identified process parameters on quality attributes. The main parameters having an impact on product titer were pH and dissolved oxygen, while those having the highest impact on process- and product-related impurities and variants were pH and culture duration. The models derived from characterization studies were used to define the cell culture process design space. The design space limits were set in such a way as to ensure that the drug substance material would consistently have the desired quality. Copyright © 2012 Elsevier B.V. All rights reserved.
Automated Design Space Exploration with Aspen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spafford, Kyle L.; Vetter, Jeffrey S.
2015-01-01
Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.
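A minimal sketch of "design space exploration as a nonlinear program" using a generic solver follows; the two-parameter cost model and its coefficients are invented for illustration and are not an actual Aspen model.

```python
from scipy.optimize import minimize

# hypothetical Aspen-style cost model: runtime as a function of two
# application parameters, a tile size t and a node count n
def runtime(x):
    t, n = x
    compute = (1e12 / n) / 1e10          # compute component shrinks with more nodes
    comm = 5e-4 * n * (4096.0 / t)       # communication grows with nodes, shrinks with tiles
    return compute + comm

# parameter ranges become simple bounds on the nonlinear program
res = minimize(runtime, x0=[64.0, 16.0],
               bounds=[(8.0, 512.0), (1.0, 1024.0)])
print(res.x, res.fun)                    # best (t, n) and predicted runtime
```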
Dirac Cellular Automaton from Split-step Quantum Walk
Mallick, Arindam; Chandrashekar, C. M.
2016-01-01
Simulations of one quantum system by another have implications for the realization of a quantum machine that can imitate any quantum system and solve problems that are not accessible to classical computers. One approach to engineering quantum simulations is to discretize the space-time degrees of freedom in quantum dynamics and define a quantum cellular automaton (QCA), a local unitary update rule on a lattice. Different models of QCA are constructed using sets of conditions which are not unique and are not always in an implementable configuration on any other system. The Dirac Cellular Automaton (DCA) is one such model constructed for the Dirac Hamiltonian (DH) in free quantum field theory. Here, starting from a split-step discrete-time quantum walk (QW), which is uniquely defined for experimental implementation, we recover the DCA along with all the fine oscillations in position space and bridge the missing connection between DH, DCA and QW. We present the contribution of the parameters responsible for the fine oscillations to the Zitterbewegung frequency and entanglement. The tuneability of the evolution parameters demonstrated in experimental implementations of the QW establishes it as an efficient tool to design quantum simulators and approach quantum field theory from the principles of quantum information theory. PMID:27184159
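A split-step discrete-time quantum walk of the kind referred to here can be simulated directly. The lattice size, coin angles, and initial spinor below are arbitrary choices, and the shift convention (spin-up shifted right after the first coin, spin-down shifted left after the second, with periodic boundaries) follows the standard split-step construction; this is a sketch, not the paper's DCA derivation.

```python
import numpy as np

N, steps = 201, 60
theta1, theta2 = np.pi / 4, np.pi / 8     # hypothetical coin angles

def coin(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# psi[x] = (up, down) amplitudes; start localized at the centre
psi = np.zeros((N, 2), dtype=complex)
psi[N // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]

C1, C2 = coin(theta1), coin(theta2)
for _ in range(steps):
    psi = psi @ C1.T                       # first coin rotation
    psi[:, 0] = np.roll(psi[:, 0], 1)      # shift spin-up right
    psi = psi @ C2.T                       # second coin rotation
    psi[:, 1] = np.roll(psi[:, 1], -1)     # shift spin-down left

prob = (np.abs(psi) ** 2).sum(axis=1)      # position distribution after the walk
print(prob.sum())                          # ~1.0 (unitarity check)
```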
Origin of Disagreements in Tandem Mass Spectra Interpretation by Search Engines.
Tessier, Dominique; Lollier, Virginie; Larré, Colette; Rogniaux, Hélène
2016-10-07
Several proteomic database search engines that interpret LC-MS/MS data do not identify the same set of peptides. These disagreements occur even when the scores of the peptide-to-spectrum matches suggest good confidence in the interpretation. Our study shows that these disagreements observed for the interpretations of a given spectrum are almost exclusively due to the variation of what we call the "peptide space", i.e., the set of peptides that are actually compared to the experimental spectra. We discuss the potential difficulties of precisely defining the "peptide space". Indeed, although several parameters that are generally reported in publications can easily be set to the same values, many additional parameters, with much less straightforward user access, might impact the "peptide space" used by each program. Moreover, in a configuration where each search engine identifies the same candidates for each spectrum, the inference of the proteins may remain quite different depending on the false discovery rate selected.
Eon-duval, Alex; Valax, Pascal; Solacroup, Thomas; Broly, Hervé; Gleixner, Ralf; Strat, Claire L E; Sutter, James
2012-10-01
The article describes how Quality by Design principles can be applied to the drug substance manufacturing process of an Fc fusion protein. First, the quality attributes of the product were evaluated for their potential impact on safety and efficacy using risk management tools. Similarly, process parameters that have a potential impact on critical quality attributes (CQAs) were also identified through a risk assessment. Critical process parameters were then evaluated for their impact on CQAs, individually and in interaction with each other, using multivariate design of experiment techniques during the process characterisation phase. The global multi-step Design Space, defining operational limits for the entire drug substance manufacturing process so as to ensure that the drug substance quality targets are met, was devised using predictive statistical models developed during the characterisation study. The validity of the global multi-step Design Space was then confirmed by performing the entire process, from cell bank thawing to final drug substance, at its limits during the robustness study: the quality of the final drug substance produced under different conditions was verified against predefined targets. An adaptive strategy was devised whereby the Design Space can be adjusted to the quality of the input material to ensure reliable drug substance quality. Finally, all the data obtained during the process described above, together with data generated during additional validation studies as well as manufacturing data, were used to define the control strategy for the drug substance manufacturing process using a risk assessment methodology. Copyright © 2012 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Castelletti, Davide; Demir, Begüm; Bruzzone, Lorenzo
2014-10-01
This paper presents a novel semisupervised learning (SSL) technique defined in the context of ɛ-insensitive support vector regression (SVR) to estimate biophysical parameters from remotely sensed images. The proposed SSL method aims to mitigate the problems of small-sized biased training sets without collecting any additional samples with reference measures. This is achieved on the basis of two consecutive steps. The first step is devoted to injecting additional prior information into the learning phase of the SVR in order to adapt the importance of each training sample according to the distribution of the unlabeled samples. To this end, a weight is initially associated with each training sample based on a novel strategy that defines higher weights for the samples located in the high-density regions of the feature space while giving reduced weights to those that fall into the low-density regions of the feature space. Then, in order to exploit different weights for training samples in the learning phase of the SVR, we introduce a weighted SVR (WSVR) algorithm. The second step is devoted to jointly exploiting labeled and informative unlabeled samples to further improve the definition of the WSVR learning function. To this end, the most informative unlabeled samples that are expected to have accurate target values are initially selected according to a novel strategy that relies on the distribution of the unlabeled samples in the feature space and on the WSVR function estimated in the first step. Then, we introduce a restructured WSVR algorithm that jointly uses labeled and unlabeled samples in the learning phase and tunes their importance by different values of the regularization parameters. Experimental results obtained for the estimation of single-tree stem volume show the effectiveness of the proposed SSL method.
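The first step, density-dependent weights fed to a weighted SVR, has a rough off-the-shelf analogue, since scikit-learn's SVR accepts per-sample weights at fit time. The bandwidth and data below are placeholders, and this sketch is only an analogue of the weighting idea, not the paper's WSVR or its restructured second step.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(30, 2))              # small, possibly biased labeled set
y_train = X_train.sum(axis=1) + 0.1 * rng.normal(size=30)
X_unlab = rng.uniform(0, 10, size=(1000, 2))            # plentiful unlabeled samples

# weight each training sample by the unlabeled-data density at its location
kde = KernelDensity(bandwidth=1.0).fit(X_unlab)
w = np.exp(kde.score_samples(X_train))                  # high in dense feature-space regions
w /= w.mean()                                           # normalize around 1

wsvr = SVR(kernel='rbf', epsilon=0.1).fit(X_train, y_train, sample_weight=w)
```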
The Microscope Space Mission and the In-Orbit Calibration Plan for its Instrument
NASA Astrophysics Data System (ADS)
Levy, Agnès; Touboul, Pierre; Rodrigues, Manuel; Hardy, Émilie; Métris, Gilles; Robert, Alain
2015-01-01
The MICROSCOPE space mission aims at testing the Equivalence Principle (EP) with an accuracy of 10^-15. This principle is one of the foundations of the theory of General Relativity; it states the equivalence between gravitational and inertial mass. The test is based on the precise measurement of a gravitational signal by a differential electrostatic accelerometer which includes two cylindrical test masses made of different materials. The accelerometers constitute the payload accommodated on board a drag-free micro-satellite which is controlled to be either inertial or rotating about the normal to the orbital plane. The acceleration estimates used for the EP test are disturbed by the instrument's physical parameters and by the instrument environment conditions on board the satellite. These parameters are partially measured with ground tests or during the integration of the instrument in the satellite (alignment). Nevertheless, the ground evaluations are not sufficient with respect to the EP test accuracy objectives. An in-orbit calibration is therefore needed to characterize them finely. The calibration process for each parameter has been defined.
Effects of thermal cycling on composite materials for space structures
NASA Technical Reports Server (NTRS)
Tompkins, Stephen S.
1989-01-01
The effects of thermal cycling on the thermal and mechanical properties of composite materials that are candidates for space structures are briefly described. The results from a thermal analysis of the orbiting Space Station Freedom are used to define a typical thermal environment and the parameters that cause changes in the thermal history. The interactions of this environment with composite materials are shown and described. The effects of this interaction on the integrity as well as the properties of Gr/thermoset, Gr/thermoplastic, Gr/metal and Gr/glass composite materials are discussed. Emphasis is placed on the effects of the interaction that are critical to precision spacecraft. Finally, ground test methodologies are briefly discussed.
Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.
Palkowski, Marek; Bielecki, Wlodzimierz
2018-01-15
RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is choosing a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining the tile size). For this purpose, we generate two non-parametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model describing all integer factors appearing in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we find the unknown integers in the model for each integer factor occurring at the same position in the fixed tiled code, and replace expressions involving integer factors with expressions involving parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, within a given search space, the best tile size and tile dimension maximizing target code performance. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of the RNA strands is greater than 2500.
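For reference, the untiled Nussinov recurrence that the polyhedral tiling operates on is a plain O(n^3) dynamic program. The sketch below scores one point per allowed base pair and omits the minimum hairpin-loop constraint for brevity; it illustrates the recurrence only, not the paper's tiling or TSS machinery.

```python
def nussinov(seq):
    """Maximum number of base pairs via Nussinov's recurrence (O(n^3))."""
    n = len(seq)
    pairs = {('A', 'U'), ('U', 'A'), ('G', 'C'), ('C', 'G'), ('G', 'U'), ('U', 'G')}
    N = [[0] * n for _ in range(n)]

    def score(i, j):                       # empty subsequences score zero
        return N[i][j] if 0 <= i <= j < n else 0

    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = score(i, j - 1)         # case 1: j left unpaired
            for k in range(i, j):          # case 2: j pairs with some k
                if (seq[k], seq[j]) in pairs:
                    best = max(best, score(i, k - 1) + score(k + 1, j - 1) + 1)
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))   # 3 base pairs for this small hairpin
```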
NASA Astrophysics Data System (ADS)
Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R.
2016-09-01
In this paper, we present an end-to-end approach that employs machine learning techniques and Ontology-based Bayesian Networks (BN) to characterize the behavior of resident space objects. State-of-the-art machine learning architectures (e.g., Extreme Learning Machines, Convolutional Deep Networks) are trained on physical models to learn the Resident Space Object (RSO) features in the vectorized energy and momentum states and parameters. The mapping from measurements to vectorized energy and momentum states and parameters enables behavior characterization via clustering in the feature space and subsequent RSO classification. Additionally, Space Object Behavioral Ontologies (SOBO) are employed to define and capture the domain knowledge base (KB), and BNs are constructed from the SOBO in a semi-automatic fashion to execute probabilistic reasoning over conclusions drawn from trained classifiers and/or directly from processed data. Such an approach enables integrating machine learning classifiers and probabilistic reasoning to support higher-level decision making for space domain awareness applications. The innovation here is to use these methods (which have enjoyed great success in other domains) in synergy, enabling a "from data to discovery" paradigm by facilitating the linkage and fusion of large and disparate sources of information via a Big Data Science and Analytics framework.
A hybrid method of estimating pulsating flow parameters in the space-time domain
NASA Astrophysics Data System (ADS)
Pałczyński, Tomasz
2017-05-01
This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes and piston compressors. The procedure is based on the method of characteristics, and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm elaborated in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain, and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.
Real Time Correction of Aircraft Flight Configuration
NASA Technical Reports Server (NTRS)
Schipper, John F. (Inventor)
2009-01-01
Method and system for monitoring and analyzing, in real time, the variation with time of an aircraft flight parameter. A time-dependent recovery band, defined by first and second recovery band boundaries that are spaced apart at at least one time point, is constructed for a selected flight parameter and for a selected recovery time interval length Δt(FP;rec). A flight parameter, having a value FP(t = t_p) at a time t = t_p, is likely to be able to recover to a reference flight parameter value FP(t';ref), lying in a band of reference flight parameter values FP(t';ref;CB), within a time interval given by t_p ≤ t' ≤ t_p + Δt(FP;rec), if (or only if) the flight parameter value lies between the first and second recovery band boundary traces.
Determination of the design space of the HPLC analysis of water-soluble vitamins.
Wagdy, Hebatallah A; Hanafi, Rasha S; El-Nashar, Rasha M; Aboul-Enein, Hassan Y
2013-06-01
Analysis of water-soluble vitamins has been approached extensively over the last decades. A multitude of HPLC methods have been reported with a variety of advantages and shortcomings, yet the design space of the HPLC analysis of these vitamins was not defined in any of these reports. As per the Food and Drug Administration (FDA), implementing the quality by design approach for the analysis of commercially available mixtures is hypothesized to enhance the pharmaceutical industry by facilitating the process of analytical method development and approval. This work illustrates a multifactorial optimization of three measured plus seven calculated influential HPLC parameters for the analysis of a mixture containing seven common water-soluble vitamins (B1, B2, B6, B12, C, PABA, and PP). The three measured parameters are gradient time, temperature, and ternary eluent composition (B1/B2), and the seven calculated parameters are flow rate, column length, column internal diameter, dwell volume, extracolumn volume, %B (start), and %B (end). The design is based on 12 experiments in which the multifactorial effects of these 3 + 7 parameters on the critical resolution and selectivity were examined by systematically varying all parameters simultaneously. The 12 basic runs used two different gradient times, each at two different temperatures, repeated at three different ternary eluent compositions (methanol, acetonitrile, or a mixture of both). Multidimensional robust regions of high critical R(s) were defined and graphically verified. The optimum method was selected based on the best resolution in the shortest run time for a synthetic mixture, followed by application to two pharmaceutical preparations available in the market. The predicted retention times of all peaks were found to be in good agreement with the virtual ones. In conclusion, the presented report offers an accurate determination of the design space for critical resolution in the analysis of water-soluble vitamins by HPLC, which would help the regulatory authorities to judge the validity of presented analytical methods for approval. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
LPV Modeling of a Flexible Wing Aircraft Using Modal Alignment and Adaptive Gridding Methods
NASA Technical Reports Server (NTRS)
Al-Jiboory, Ali Khudhair; Zhu, Guoming; Swei, Sean Shan-Min; Su, Weihua; Nguyen, Nhan T.
2017-01-01
One of the earliest approaches in gain-scheduling control is the gridding-based approach, in which a set of local linear time-invariant models is obtained at various gridded points corresponding to the varying parameters within the flight envelope. In order to ensure smooth and effective Linear Parameter-Varying control, aligning all the flexible modes within each local model and maintaining a small number of representative local models over the gridded parameter space are crucial. In addition, since flexible structural models tend to have large dimensions, a tractable model reduction process is necessary. In this paper, the notions of s-shifted H2- and H-infinity-norms are introduced and used as a metric to measure model mismatch. A new modal alignment algorithm is developed which utilizes the defined metric for aligning all the local models over the entire gridded parameter space. Furthermore, an Adaptive Grid Step Size Determination algorithm is developed to minimize the number of local models required to represent the gridded parameter space. For model reduction, we propose to utilize the concept of Composite Modal Cost Analysis, through which the collective contribution of each flexible mode is computed and ranked. A reduced-order model is then constructed by retaining only those modes with significant contribution. The NASA Generic Transport Model operating at various flight speeds is studied for verification purposes, and the analysis and simulation results demonstrate the effectiveness of the proposed modeling approach.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined by any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
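One way to approximate "a mixture of Gaussians fit in a kernel feature space" with standard tools is an explicit finite-dimensional kernel map (here a Nystroem approximation) followed by an ordinary Gaussian mixture. This is a hedged analogue of the idea under that substitution, not the authors' modified EM algorithm; the kernel, data, and component counts are placeholders.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                       # hypothetical data

# map data into an approximate RBF feature space, then fit Gaussians there
feat = Nystroem(kernel='rbf', gamma=0.5, n_components=50, random_state=0)
Phi = feat.fit_transform(X)

gmm = GaussianMixture(n_components=3, covariance_type='diag',
                      random_state=0).fit(Phi)
log_density = gmm.score_samples(Phi)                # density induced in feature space
```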
On The Computation Of The Best-fit Okada-type Tsunami Source
NASA Astrophysics Data System (ADS)
Miranda, J. M. A.; Luis, J. M. F.; Baptista, M. A.
2017-12-01
The forward simulation of earthquake-induced tsunamis usually assumes that the initial sea surface elevation mimics the co-seismic deformation of the ocean bottom described by a simple "Okada-type" source (a rectangular fault with constant slip in a homogeneous elastic half space). This approach is highly effective, in particular under far-field conditions. With this assumption, and a given set of tsunami waveforms recorded by deep-sea pressure sensors and/or coastal tide stations, it is possible to deduce the set of parameters of the Okada-type solution that best fits the sea level observations. To do this, we build a "space of possible tsunami sources" (the solution space). Each solution consists of a combination of parameters: earthquake magnitude, length, width, slip, depth, and the strike, rake, and dip angles. To constrain the number of possible solutions, we use the earthquake parameters defined by seismology and establish a range of possible values for each parameter. We select the "best Okada source" by comparing the results of direct tsunami modeling over the solution space of tsunami sources. However, direct tsunami modeling is a time-consuming process for the whole solution space. To overcome this problem, we use a precomputed database of Empirical Green Functions to compute the tsunami waveforms resulting from unit water sources and search for the one that best matches the observations. In this study, we use as a test case the Solomon Islands tsunami of 6 February 2013, caused by a magnitude 8.0 earthquake. The "best Okada" source is the solution that best matches the tsunami recorded at six DART stations in the area. We discuss the differences between the initial seismic solution and the final one obtained from tsunami data. This publication received funding from FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.
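The search over a precomputed solution space can be sketched generically: each candidate source is synthesized as a linear combination of unit-source waveforms and scored against the observations, so no direct tsunami simulation is run per candidate. All arrays below are synthetic placeholders, not DART data or the actual Green function database.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical database: waveforms for unit slip on each of 20 unit sources,
# recorded at 6 stations over 400 time samples
G = rng.normal(size=(20, 6, 400))
true_slip = rng.uniform(0, 2, 20)
obs = np.tensordot(true_slip, G, axes=1) + 0.1 * rng.normal(size=(6, 400))

def misfit(slip):
    synth = np.tensordot(slip, G, axes=1)   # linear combination of unit sources
    return np.sqrt(np.mean((synth - obs) ** 2))

# crude search over a sampled "solution space" of candidate slip patterns
candidates = [rng.uniform(0, 2, 20) for _ in range(1000)]
best = min(candidates, key=misfit)
print(misfit(best))
```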
Scaling relations and the fundamental line of the local group dwarf galaxies
NASA Astrophysics Data System (ADS)
Woo, Joanna; Courteau, Stéphane; Dekel, Avishai
2008-11-01
We study the scaling relations between global properties of dwarf galaxies in the Local Group. In addition to quantifying the correlations between pairs of variables, we explore the `shape' of the distribution of galaxies in log parameter space using standardized principal component analysis. The analysis is performed first in the 3D structural parameter space of stellar mass M*, internal velocity V and characteristic radius R* (or surface brightness μ*). It is then extended to a 4D space that includes a stellar population parameter such as metallicity Z or star formation rate. We find that the Local Group dwarfs basically define a one-parameter `fundamental line' (FL), primarily driven by stellar mass, M*. A more detailed inspection reveals differences between the star formation properties of dwarf irregulars (dI's) and dwarf ellipticals (dE's), beyond the tendency of the latter to be more massive. In particular, the metallicities of dI's are typically lower by a factor of 3 at a given M* and they grow faster with increasing M*, with the dE's showing a tighter FL in the 4D space. The structural scaling relations of dI's resemble those of the more massive spirals, but the dI's have lower star formation rates for a given M*, which also grow faster with increasing M*. On the other hand, the FL of the dE's departs from the fundamental plane of bigger ellipticals. While the one-parameter nature of the FL and the associated slopes of the scaling relations are consistent with the general predictions of supernova feedback from Dekel & Woo, the differences between the FLs of the dE's and the dI's remain a challenge and should serve as a guide for the secondary physical processes responsible for these two types.
Generalization of the photo process window and its application to OPC test pattern design
NASA Astrophysics Data System (ADS)
Eisenmann, Hans; Peter, Kai; Strojwas, Andrzej J.
2003-07-01
From the early development phase up to the production phase, test patterns play a key role in microlithography. The requirement for test patterns is to represent the design well and to cover the space of all process conditions, e.g., to investigate the full process window and all other process parameters. This paper shows that the current state-of-the-art test patterns do not address these requirements sufficiently and makes suggestions for a better selection of test patterns. We present a new methodology to analyze an existing layout (e.g., logic library, test patterns or full chip) for critical layout situations which does not need precise process data. We call this method "process space decomposition", because it is aimed at decomposing the process impact on a layout feature into a sum of single independent contributions, the dimensions of the process space. This is a generalization of the classical process window, which examines the defocus and exposure dependency of given test patterns, e.g., the CD value of dense and isolated lines. In our process space we additionally define the dimensions resist effects, etch effects, mask error and misalignment, which describe the deviation of the printed silicon pattern from its target. We further extend it by the pattern space using a product-based layout (library, full chip or synthetic test patterns). The criticality of patterns is defined by their deviation due to the aerial image, their sensitivity to the respective dimension, or combinations of these. By exploring the process space for a given design, the method allows finding the most critical patterns independently of specific process parameters. The paper provides examples for different applications of the method: (1) selection of design-oriented test patterns for lithography development; (2) test pattern reduction in process characterization; (3) verification/optimization of printability and performance of post-processing procedures (like OPC); (4) creation of a sensitive process monitor.
ERIC Educational Resources Information Center
Bika, Anastasia
Noting that the design of the classroom used for early childhood and kindergarten classes can contribute in powerful ways to the education of young children, this paper applies principles of architecture to the organization and shaping of the interior classroom space. The paper maintains that five principles, when applied, create a climate of…
VizieR Online Data Catalog: Outliers and similarity in APOGEE (Reis+, 2018)
NASA Astrophysics Data System (ADS)
Reis, I.; Poznanski, D.; Baron, D.; Zasowski, G.; Shahaf, S.
2017-11-01
t-SNE is a dimensionality reduction algorithm that is particularly well suited to the visualization of high-dimensional datasets. We use t-SNE to visualize our distance matrix. A priori, these distances could define a space with almost as many dimensions as objects, i.e., tens of thousands of dimensions. Obviously, since many stars are quite similar, and their spectra are defined by a few physical parameters, the minimal spanning space might be smaller. By using t-SNE we can examine the structure of our sample projected into 2D. We use our distance matrix as input to the t-SNE algorithm and in return get a 2D map of the objects in our dataset. For each star in a sample of 183232 APOGEE stars, the APOGEE IDs of the 99 stars with the most similar spectra (according to the method described in the paper) are given, ordered by similarity. (3 data files).
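Running t-SNE on a precomputed distance matrix, as done here, takes one extra argument in common implementations. The matrix below is a synthetic stand-in for the catalog's spectral distance matrix, so this is a usage sketch rather than a reproduction of the catalog pipeline.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
spectra = rng.normal(size=(300, 50))      # placeholder "spectra"
D = np.linalg.norm(spectra[:, None] - spectra[None, :], axis=-1)

# metric='precomputed' treats D as pairwise distances; a non-PCA
# initialization is required in this mode
emb = TSNE(metric='precomputed', init='random', perplexity=30.0,
           random_state=0).fit_transform(D)   # (300, 2) map for visualization
```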
Space Qualification Test of a-Silicon Solar Cell Modules
NASA Technical Reports Server (NTRS)
Kim, Q.; Lawton, R. A.; Manion, S. J.; Okuno, J. O.; Ruiz, R. P.; Vu, D. T.; Vu, D. T.; Kayali, S. A.; Jeffrey, F. R.
2004-01-01
The basic requirements of solar cell modules for space applications are generally described in MIL-S-83576 for the specific needs of the USAF. However, the specifications of solar cells intended for use in space terrestrial applications are not well defined. Therefore, this qualification test effort was concentrated on critical areas specific to the microseismometer probe which is intended to be included in the Mars microprobe programs. Parameters that were evaluated included performance dependence on illuminating angle, terrestrial temperature, lifetime, and impact landing conditions. Our qualification efforts were limited to these most critical areas of concern. Most of the tested solar cell modules met the requirements of the program except in the impact tests. Surprisingly, one of the two single-PIN 2 x 1 amorphous solar cell modules continued to function even after the 80,000 G impact tests. The output power parameters, Pout, FF, Isc and Voc, of the single-PIN amorphous solar cell module were found to be 3.14 mW, 0.40, 9.98 mA and 0.78 V, respectively. These parameters are good enough to consider the solar module as a possible power source for the microprobe seismometer. Some recommendations were made to improve the usefulness of the amorphous silicon solar cell modules in space terrestrial applications, based on the results obtained from the intensive short-term lab test effort.
Highly light-weighted ZERODUR mirrors
NASA Astrophysics Data System (ADS)
Behar-Lafenetre, Stéphanie; Lasic, Thierry; Viale, Roger; Mathieu, Jean-Claude; Ruch, Eric; Tarreau, Michel; Etcheto, Pierre
2017-11-01
Due to more and more stringent requirements for observation missions, the diameter of primary mirrors for space telescopes is increasing. The difficulty is then to have a design stiff enough to withstand launch loads and keep a reasonable mass while providing high opto-mechanical performance. Among the possible solutions, Thales Alenia Space France has investigated the optimization of ZERODUR mirrors. Indeed this material, although fragile, is very well mastered and its characteristics well known. Moreover, its thermo-elastic properties (almost null CTE) are as yet unequalled, in particular at ambient temperature. Finally, this material can be polished down to very low roughness without any coating. Light-weighting can be achieved by two different means: optimizing manufacturing parameters or optimizing the design (or both). Manufacturing parameters such as wall and optical face thickness have been improved and tested on representative breadboards defined on the basis of SAGEM-REOSC and Thales Alenia Space France expertise and realized by SAGEM-REOSC. In the frame of CNES Research and Technology activities, specific mass has been decreased down to 36 kg/m2. Moreover, the SNAP study dealt with a 2 m diameter primary mirror. The design was optimized by Thales Alenia Space France while using classical manufacturing parameters, thus ensuring feasibility and containing costs. Mass was decreased down to 60 kg/m2 for a gravity effect of 52 nm. It is thus demonstrated that high opto-mechanical performance can be guaranteed with large, highly light-weighted ZERODUR mirrors.
Dynamical thresholding of pancake models: a promising variant of the HDM picture
NASA Astrophysics Data System (ADS)
Buchert, Thomas
Variants of pancake models are considered which allow for the construction of a phenomenological link to the galaxy formation process. A control parameter space is introduced which defines different scenarios of galaxy formation. The sensitivity of statistical measures of the small-scale structure with respect to this parameter freedom is demonstrated. This property of the galaxy formation model, together with the consequences of enlarging the box size of the simulation to a `fair sample scale', forms the basis of arguments to support the possible revival of the standard `Hot Dark Matter' model.
An Astrobiological Experiment to Explore the Habitability of Tidally Locked M-Dwarf Planets
NASA Astrophysics Data System (ADS)
Angerhausen, Daniel; Sapers, Haley; Simoncini, Eugenio; Lutz, Stefanie; Alexandre, Marcelo da Rosa; Galante, Douglas
2014-04-01
We present a summary of a three-year academic research proposal drafted during the Sao Paulo Advanced School of Astrobiology (SPASA) to prepare for upcoming observations of tidally locked planets orbiting M-dwarf stars. The primary experimental goal of the suggested research is to expose extremophiles from analogue environments to a modified space simulation chamber reproducing the environmental parameters of a tidally locked planet in the habitable zone of a late-type star. Here we focus on a description of the astronomical analysis used to define the parameters for this climate simulation.
Bustamante, Carlos D.; Valero-Cuevas, Francisco J.
2010-01-01
The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility of, and what is to our knowledge the first use of, an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a "truth model" of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian "measurement noise." Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (the posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima, but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
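A random-walk Metropolis–Hastings sampler of the kind used in the study fits in a dozen lines. The step size and the log_post target below are placeholders for the thumb model's posterior; this is a generic sketch, not the study's implementation.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps=10000, step=0.1, seed=0):
    """Random-walk Metropolis sampler over a parameter vector."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step * rng.normal(size=x.size)   # Gaussian proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:     # accept with MH probability
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# placeholder target: standard Gaussian posterior in 3 dimensions
chain = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), x0=np.zeros(3))
```

Several such chains started from dispersed initial points, as in the study, would then be compared to assess convergence to the posterior.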
MAPA: an interactive accelerator design code with GUI
NASA Astrophysics Data System (ADS)
Bruhwiler, David L.; Cary, John R.; Shasharina, Svetlana G.
1999-06-01
The MAPA code is an interactive accelerator modeling and design tool with an X/Motif GUI. MAPA has been developed in C++ and makes full use of object-oriented features. We present an overview of its features and describe how users can independently extend the capabilities of the entire application, including the GUI. For example, a user can define a new model for a focusing or accelerating element. If the appropriate form is followed, and the new element is "registered" with a single line in the specified file, then the GUI will fully support this user-defined element type after it has been compiled and then linked to the existing application. In particular, the GUI will bring up windows for modifying any relevant parameters of the new element type. At present, one can use the GUI for phase space tracking, finding fixed points and generating line plots for the Twiss parameters, the dispersion and the accelerator geometry. The user can define new types of simulations which the GUI will automatically support by providing a menu option to execute the simulation and subsequently rendering line plots of the resulting data.
Realistic simplified gaugino-higgsino models in the MSSM
NASA Astrophysics Data System (ADS)
Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn
2018-03-01
We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic MSSM models, whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ , tan β , M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks, which also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
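The "proper matrix diagonalisation" can be sketched for the neutralino sector: build the standard tree-level 4x4 mass matrix from {M1, M2, μ, tan β} and diagonalise it numerically. The sketch below uses a real symmetric eigendecomposition and takes absolute values for the masses; a full treatment would use a Takagi factorisation and track phases, and the numerical inputs are illustrative only.

```python
import numpy as np

def neutralino_spectrum(M1, M2, mu, tan_beta, mZ=91.19, sw2=0.231):
    """Tree-level neutralino mass matrix in the (bino, wino, higgsino) basis."""
    sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
    b = np.arctan(tan_beta)
    sb, cb = np.sin(b), np.cos(b)
    M = np.array([
        [M1,        0.0,       -mZ*cb*sw,  mZ*sb*sw],
        [0.0,       M2,         mZ*cb*cw, -mZ*sb*cw],
        [-mZ*cb*sw, mZ*cb*cw,   0.0,      -mu],
        [ mZ*sb*sw, -mZ*sb*cw, -mu,        0.0]])
    evals, evecs = np.linalg.eigh(M)         # real symmetric diagonalisation
    order = np.argsort(np.abs(evals))
    return np.abs(evals[order]), evecs[:, order]   # masses and mixing content

masses, mixing = neutralino_spectrum(M1=500.0, M2=600.0, mu=150.0, tan_beta=10.0)
print(masses)   # light states are higgsino-dominated for |mu| << M1, M2
```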
Energy loss analysis of an integrated space power distribution system
NASA Technical Reports Server (NTRS)
Kankam, M. D.; Ribeiro, P. F.
1992-01-01
The results of studies related to conceptual topologies of an integrated utility-like space power system are described. The system topologies are comparatively analyzed by considering their transmission energy losses as functions of, mainly, distribution voltage level and load composition. The analysis is expedited by use of the Distribution System Analysis and Simulation (DSAS) software. This recently developed computer program by the Electric Power Research Institute (EPRI) uses improved load models to solve the power flow within the system. However, present shortcomings of the software with regard to space applications, and the incompletely defined characteristics of a space power system, make the results applicable only to the fundamental trends of energy losses of the topologies studied. The accounting included for the effects of the various parameters on system performance can constitute part of a planning tool for a space power distribution system.
You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue
2018-01-01
To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was first applied to a dataset of multimodality images in glioblastoma (GBM), which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test the generalizability of the processing pipeline, a second image dataset from GBM, acquired on scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired before chemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
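Step 3, clustering the parameter correlation matrix to discover "signatures", can be illustrated with hierarchical clustering. The superpixel-by-parameter matrix below is random, and cutting at three clusters mirrors the three signatures reported rather than a general rule; this is a sketch of the step, not the published pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
P = rng.normal(size=(400, 10))         # placeholder: 400 superpixels x 10 image parameters

C = np.corrcoef(P, rowvar=False)       # parameter-by-parameter correlation matrix
dist = 1.0 - np.abs(C)                 # dissimilarity between parameters
condensed = dist[np.triu_indices_from(dist, k=1)]

Z = linkage(condensed, method='average')
signatures = fcluster(Z, t=3, criterion='maxclust')   # one "signature" label per parameter
print(signatures)
```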
Illusion optics: Optically transforming the nature and the location of electromagnetic emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Jianjia; Tichit, Paul-Henri; Burokur, Shah Nawaz, E-mail: shah-nawaz.burokur@u-psud.fr
Complex electromagnetic structures can be designed by using the powerful concept of transformation electromagnetics. In this study, we define a spatial coordinate transformation that shows the possibility of designing a device capable of producing an illusion on an antenna radiation pattern. Indeed, by compressing the space containing a radiating element, we show that it is able to change the radiation pattern and to make the radiation location appear outside the latter space. Both continuous and discretized models with calculated electromagnetic parameter values are presented. A reduction of the electromagnetic material parameters is also proposed for a possible physical fabrication of the device with achievable values of permittivity and permeability that can be obtained from existing well-known metamaterials. Following that, the design of the proposed antenna using a layered metamaterial is presented. Full wave numerical simulations using the Finite Element Method are performed to demonstrate the performances of such a device.
Learning dependence from samples.
Seth, Sohan; Príncipe, José C
2014-01-01
Mutual information, conditional mutual information and interaction information have been widely used in the scientific literature as measures of dependence, conditional dependence and mutual dependence. However, these concepts suffer from several computational issues; they are difficult to estimate in the continuous domain, the existing regularised estimators are almost always defined only for real or vector-valued random variables, and these measures address what dependence, conditional dependence and mutual dependence imply in terms of the random variables but not of finite realisations. In this paper, we address the question of what characteristic, given a set of realisations in an arbitrary metric space, makes them dependent, conditionally dependent or mutually dependent. With this novel understanding, we develop new estimators of association, conditional association and interaction association. Some attractive properties of these estimators are that they do not require choosing free parameters, they are computationally simpler, and they can be applied to arbitrary metric spaces.
Virtual walks in spin space: A study in a family of two-parameter models
NASA Astrophysics Data System (ADS)
Mullick, Pratik; Sen, Parongama
2018-05-01
We investigate the dynamics of classical spins mapped as walkers in a virtual "spin" space using a generalized two-parameter family of spin models characterized by parameters y and z [de Oliveira et al., J. Phys. A 26, 2317 (1993), 10.1088/0305-4470/26/10/006]. The behavior of S(x, t), the probability that the walker is at position x at time t, is studied in detail. In general S(x, t) ~ t^(-α) f(x/t^α) with α ≃ 1 or 0.5 at large times, depending on the parameters. In particular, S(x, t) for the point y = 1, z = 0.5 corresponding to the Voter model shows a crossover in time; associated with this crossover, two timescales can be defined which vary with the system size L as L^2 log L. We also show that as the Voter model point is approached from the disordered regions along different directions, the width of the Gaussian distribution S(x, t) diverges in a power-law manner with different exponents. For the majority Voter case, the results indicate that the virtual walk can detect the phase transition perhaps more efficiently than other nonequilibrium methods.
Regularized estimation of Euler pole parameters
NASA Astrophysics Data System (ADS)
Aktuğ, Bahadir; Yildirim, Ömer
2013-07-01
Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, the Euler parameters of several relatively small plates have been determined from velocities derived from space geodetic observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms least squares estimation in terms of the mean squared error.
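A Tikhonov-regularized least-squares estimate of the Euler vector from site positions and velocities (v = ω × r in a geocentric frame) can be sketched as below. The units, damping value, and data are placeholders, and this generic ridge estimator stands in for, but is not, the paper's specifically tailored scheme.

```python
import numpy as np

def euler_vector_ridge(sites_xyz, velocities_xyz, lam=1e-3):
    """Ridge-regularized LS for omega in v = omega x r.

    sites_xyz: (n, 3) geocentric site positions; velocities_xyz: (n, 3) site
    velocities; lam: Tikhonov damping countering the near-collinearity caused
    by limited geographic coverage.
    """
    rows = []
    for x, y, z in np.asarray(sites_xyz, dtype=float):
        # matrix A(r) such that A(r) @ omega == omega x r
        rows.append([[0.0,   z,  -y],
                     [-z,  0.0,   x],
                     [ y,   -x, 0.0]])
    A = np.vstack(rows)                         # (3n, 3) design matrix
    b = np.asarray(velocities_xyz, dtype=float).ravel()
    return np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)
```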
Cost-estimating relationships for space programs
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.
1992-01-01
Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed, examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture, single-system methods; and (2) the Price paradigms, which incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
Scale Space for Camera Invariant Features.
Puig, Luis; Guerrero, José J; Daniilidis, Kostas
2014-09-01
In this paper we propose a new approach to computing the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.
Three dimensional calculation of thermonuclear ignition conditions for magnetized targets
NASA Astrophysics Data System (ADS)
Cortez, Ross; Cassibry, Jason; Lapointe, Michael; Adams, Robert
2017-10-01
Fusion power balance calculations, often performed using analytic methods, are used to estimate the design space for ignition conditions. In this paper, fusion power balance is calculated utilizing a 3-D smoothed particle hydrodynamics code (SPFMax) incorporating recent stopping power routines. Effects of thermal conduction, multigroup radiation emission and nonlocal absorption, ion/electron thermal equilibration, and compressional work are studied as a function of target and liner parameters and geometry for D-T, D-D, and 6Li-D fuels to identify the potential ignition design space. Here, ignition is defined as the condition when fusion particle deposition equals or exceeds the losses from heat conduction and radiation. The simulations are in support of ongoing research with NASA to develop advanced propulsion systems for rapid interplanetary space travel. Supported by NASA Innovative Advanced Concepts and NASA Marshall Space Flight Center.
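The ignition criterion quoted above reduces to a pointwise power-balance test; a trivial sketch with illustrative numbers (not SPFMax output):

    def ignited(p_deposited, p_conduction, p_radiation):
        # Ignition as defined above: fusion particle energy deposition
        # equals or exceeds heat conduction plus radiation losses.
        return p_deposited >= p_conduction + p_radiation

    # Hypothetical instantaneous powers for a magnetized D-T target [W]
    print(ignited(3.2e12, 1.1e12, 1.6e12))  # True -> inside the design space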
Aircraft measurements of electrified clouds at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Jones, J. J.; Winn, W. P.; Hunyady, S. J.; Moore, C. B.; Bullock, J. W.
1990-01-01
The space-vehicle launch commit criteria for weather and atmospheric electrical conditions in use at Cape Canaveral Air Force Station and Kennedy Space Center (KSC) have been made restrictive because of the past difficulties that have arisen when space vehicles have triggered lightning discharges after launch during cloudy weather. With the present ground-based instrumentation and our limited knowledge of cloud electrification processes over this region of Florida, it has not been possible to provide a quantitative index of safe launching conditions. During the fall of 1988, a Schweizer 845 airplane equipped to measure electric field and other meteorological parameters flew over KSC in a program to study clouds defined in the existing launch restriction criteria. All aspects of this program are addressed, including planning, method, and results. A case study of the November 4, 1988 flight is also presented.
An automated system for pulmonary function testing
NASA Technical Reports Server (NTRS)
Mauldin, D. G.
1974-01-01
An experiment to quantitate pulmonary function was accepted for the space shuttle concept verification test. The single breath maneuver and the nitrogen washout are combined to reduce the test time. Parameters are defined from the forced vital capacity maneuvers. A spirometer measures the breath volume and a magnetic sector mass spectrometer provides definition of gas composition. Mass spectrometer and spirometer data are analyzed by a PDP-81 digital computer.
NASA Astrophysics Data System (ADS)
Ferretti, S.; Amadori, K.; Boccalatte, A.; Alessandrini, M.; Freddi, A.; Persiani, F.; Poli, G.
2002-01-01
The UNIBO team, composed of students and professors of the University of Bologna along with technicians and engineers from Alenia Space Division and Siad Italargon Division, took part in the 3rd Student Parabolic Flight Campaign of the European Space Agency in 2000. It won the student competition and went on to take part in the Professional Parabolic Flight Campaign of May 2001. The experiment focused on "dendritic growth in aluminium alloy weldings", and investigated topics related to the welding process of aluminium in microgravity. The purpose of the research is to optimise the process and to define the areas of interest that could be improved by new conceptual designs. The team performed accurate tests in microgravity to determine which phenomena have the greatest impact on the quality of the welds with respect to penetration, surface roughness and the microstructures formed during solidification. Various parameters were considered in the economic-technical optimisation, such as the type of electrode and its tip angle. Ground and space tests determined the optimum chemical composition of the electrodes to offer the longest life while maintaining the shape of the point. Additionally, the power consumption was optimised; this offers opportunities for promoting the product to the customer as well as being environmentally friendly. Tests performed on the Al-Li alloys showed a significant influence of physical phenomena such as the Marangoni effect and thermal diffusion; predictions have been made on the basis of observations of the thermal flux seen in the stereophotos. Space transportation today is a key element in the construction of space stations and future planetary bases, because the volumes available for launch to space are directly related to the payload capacity of rockets or the Space Shuttle. The research performed gives engineers the opportunity to consider completely new concepts for designing structures for space applications. In fact, once the optimised parameters are defined for welding in space, it could become possible to weld parts directly in orbit to obtain much larger sizes and volumes, for example for space tourism habitation modules. The second relevant aspect is technology transfer obtained by optimisation of the TIG process on aluminium, which is widely used in the automotive industry as well as in mass-production markets.
Model-based high-throughput design of ion exchange protein chromatography.
Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo
2016-08-12
This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high-throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine an operating parameter space that allows for satisfactory purification of the protein of interest at the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding unnecessary experimental work. We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, in order to demonstrate its ability to strongly accelerate the early phases of process development.
A variational approach to dynamics of flexible multibody systems
NASA Technical Reports Server (NTRS)
Wu, Shih-Chin; Haug, Edward J.; Kim, Sung-Soo
1989-01-01
This paper presents a variational formulation of constrained dynamics of flexible multibody systems, using a vector-variational calculus approach. Body reference frames are used to define the global position and orientation of individual bodies in the system, each located and oriented by the position of its origin and Euler parameters, respectively. Small strain linear elastic deformation of individual components, relative to their body reference frames, is defined by linear combinations of deformation modes that are induced by constraint reaction forces and normal modes of vibration. A library of kinematic couplings between flexible and/or rigid bodies is defined and analyzed. Variational equations of motion for multibody systems are obtained and reduced to mixed differential-algebraic equations of motion. A space structure that must deform during deployment is analyzed, to illustrate use of the methods developed.
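For concreteness, the orientation bookkeeping used by such formulations can be sketched as the standard conversion from Euler parameters (a unit quaternion) to the rotation matrix of a body reference frame (textbook kinematics, not the authors' code):

    import numpy as np

    def euler_params_to_dcm(e0, e1, e2, e3):
        # Rotation matrix of a body frame from its Euler parameters,
        # normalized first so the quaternion has unit length.
        q = np.array([e0, e1, e2, e3], dtype=float)
        e0, e1, e2, e3 = q / np.linalg.norm(q)
        return np.array([
            [1 - 2*(e2**2 + e3**2), 2*(e1*e2 - e0*e3),     2*(e1*e3 + e0*e2)],
            [2*(e1*e2 + e0*e3),     1 - 2*(e1**2 + e3**2), 2*(e2*e3 - e0*e1)],
            [2*(e1*e3 - e0*e2),     2*(e2*e3 + e0*e1),     1 - 2*(e1**2 + e2**2)],
        ])

    # 90-degree rotation about z: e0 = cos(45 deg), e3 = sin(45 deg)
    print(euler_params_to_dcm(np.cos(np.pi/4), 0, 0, np.sin(np.pi/4)).round(6))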
Alignment limit of the NMSSM Higgs sector
Carena, Marcela; Haber, Howard E.; Low, Ian; ...
2016-02-17
The Next-to-Minimal Supersymmetric extension of the Standard Model (NMSSM) with a Higgs boson of mass 125 GeV can be compatible with stop masses of order of the electroweak scale, thereby reducing the degree of fine-tuning necessary to achieve electroweak symmetry breaking. Moreover, in an attractive region of the NMSSM parameter space, corresponding to the "alignment limit" in which one of the neutral Higgs fields lies approximately in the same direction in field space as the doublet Higgs vacuum expectation value, the observed Higgs boson is predicted to have Standard-Model-like properties. We derive analytical expressions for the alignment conditions and show that they point toward a more natural region of parameter space for electroweak symmetry breaking, while allowing for perturbativity of the theory up to the Planck scale. Additionally, the alignment limit in the NMSSM leads to a well defined spectrum in the Higgs and Higgsino sectors, and yields a rich and interesting Higgs boson phenomenology that can be tested at the LHC. Here, we discuss the most promising channels for discovery and present several benchmark points for further study.
Space-based laser-driven MHD generator: Feasibility study
NASA Technical Reports Server (NTRS)
Choi, S. H.
1986-01-01
The feasibility of a laser-driven MHD generator, as a candidate receiver for a space-based laser power transmission system, was investigated. On the basis of reasonable parameters obtained in the literature, a model of the laser-driven MHD generator was developed under the assumptions of steady, turbulent, two-dimensional flow. These assumptions were based on the continuous and steady generation of plasmas by exposure to the continuous-wave laser beam, which induces a steady back pressure that enables the medium to flow steadily. The model considered here took the turbulent nature of plasmas into account in the two-dimensional geometry of the generator. For these conditions, with the plasma parameters defining the thermal conductivity, viscosity, and electrical conductivity of the plasma flow, a generator efficiency of 53.3% was calculated. If turbulent effects and nonequilibrium ionization are taken into account, the efficiency is 43.2%. The study shows that the laser-driven MHD system has potential as a laser power receiver for space applications because of its high energy conversion efficiency, high energy density, and relatively simple mechanism as compared to other energy conversion cycles.
A periodic table of effective field theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Clifford; Kampf, Karol; Novotny, Jiri
We systematically explore the space of scalar effective field theories (EFTs) consistent with a Lorentz invariant and local S-matrix. To do so we define an EFT classification based on four parameters characterizing 1) the number of derivatives per interaction, 2) the soft properties of amplitudes, 3) the leading valency of the interactions, and 4) the spacetime dimension. Carving out the allowed space of EFTs, we prove that exceptional EFTs like the non-linear sigma model, Dirac-Born-Infeld theory, and the special Galileon lie precisely on the boundary of allowed theory space. Using on-shell momentum shifts and recursion relations, we prove that EFTs with arbitrarily soft behavior are forbidden and EFTs with leading valency much greater than the spacetime dimension cannot have enhanced soft behavior. We then enumerate all single scalar EFTs in d < 6 and verify that they correspond to known theories in the literature. Finally, our results suggest that the exceptional theories are the natural EFT analogs of gauge theory and gravity because they are one-parameter theories whose interactions are strictly dictated by properties of the S-matrix.
Taipale-Kovalainen, Krista; Karttunen, Anssi-Pekka; Ketolainen, Jarkko; Korhonen, Ossi
2018-03-30
The objective of this study was to devise robust and stable continuous manufacturing process settings, by exploring the design space after an investigation of the lubrication-based parameters influencing the continuous direct compression tableting of high dose paracetamol tablets. Experimental design was used to generate a structured study plan which involved 19 runs. The formulation variables studied were the type of lubricant (magnesium stearate or stearic acid) and its concentration (0.5, 1.0 and 1.5%). Process variables were total production feed rate (5, 10.5 and 16 kg/h), mixer speed (500, 850 and 1200 rpm), and mixer inlet port for lubricant (A or B). The continuous direct compression tableting line consisted of loss-in-weight feeders, a continuous mixer and a tablet press. The Quality Target Product Profile (QTPP) was defined for the final product as flowability of the powder blend (2.5 s), tablet strength (147 N), dissolution in 2.5 min (90%) and ejection force (425 N). A design space was identified which fulfilled all the requirements of the QTPP. The type and concentration of lubricant exerted the greatest influence on the design space. For example, stearic acid increased the tablet strength. Interestingly, the studied process parameters had only a very minor effect on the quality of the final product and the design space. It is concluded that the continuous direct compression tableting process itself is insensitive to changes in lubrication and can cope with them, whereas formulation parameters exert a major influence on the end product quality.
Lago, Laura; Rilo, Benito; Fernández-Formoso, Noelia; DaSilva, Luis
2017-08-01
Rehabilitation with implants is a challenge. Having previous evaluation criteria is key to establishing the best treatment for the patient. In addition to clinical and radiological aspects, the prosthetic parameters must be taken into account in the initial workup, since they allow discrimination between fixed and removable rehabilitation. We present a study protocol that analyzes three basic prosthetic aspects. First, denture space defines the need to replace teeth, tissue, or both. Second, lip support focuses on whether or not to include a flange. Third, the smile line warns of potential risks in esthetic rehabilitation. Combining these parameters allows us to make a decision as to the most suitable type of prosthesis. The proposed protocol is useful for assessing the prosthetic parameters that influence decision making as to the best-suited type of restoration. From this point of view, we think it is appropriate for the initial approach to the patient. In any case, other study considerations may modify the proposal.
The Gamma-Ray Burst ToolSHED is Open for Business
NASA Astrophysics Data System (ADS)
Giblin, Timothy W.; Hakkila, Jon; Haglin, David J.; Roiger, Richard J.
2004-09-01
The GRB ToolSHED, a Gamma-Ray Burst SHell for Expeditions in Data-Mining, is now online and available via a web browser to all in the scientific community. The ToolSHED is an online web utility that contains pre-processed burst attributes of the BATSE catalog and a suite of induction-based machine learning and statistical tools for classification and cluster analysis. Users create their own login account and study burst properties within user-defined multi-dimensional parameter spaces. Although new GRB attributes are periodically added to the database for user selection, the ToolSHED has a feature that allows users to upload their own burst attributes (e.g. spectral parameters, etc.) so that additional parameter spaces can be explored. A data visualization feature using GNUplot and web-based IDL has also been implemented to provide interactive plotting of user-selected session output. In an era in which GRB observations and attributes are becoming increasingly more complex, a utility such as the GRB ToolSHED may play an important role in deciphering GRB classes and understanding intrinsic burst properties.
Cosmological constraints from galaxy clustering in the presence of massive neutrinos
NASA Astrophysics Data System (ADS)
Zennaro, M.; Bel, J.; Dossett, J.; Carbone, C.; Guzzo, L.
2018-06-01
The clustering ratio is defined as the ratio between the correlation function and the variance of the smoothed overdensity field. In Λ cold dark matter (ΛCDM) cosmologies without massive neutrinos, it has already been proven to be independent of bias and redshift space distortions on a range of linear scales. It therefore can provide us with a direct comparison of predictions (for matter in real space) against measurements (from galaxies in redshift space). In this paper we first extend the applicability of such properties to cosmologies that account for massive neutrinos, by performing tests against simulated data. We then investigate the constraining power of the clustering ratio on cosmological parameters such as the total neutrino mass and the equation of state of dark energy. We analyse the joint posterior distribution of the parameters that satisfy both measurements of the galaxy clustering ratio in the SDSS-DR12, and the angular power spectra of cosmic microwave background temperature and polarization anisotropies measured by the Planck satellite. We find the clustering ratio to be very sensitive to the CDM density parameter, but less sensitive to the total neutrino mass. We also forecast the constraining power the clustering ratio will achieve, predicting the amplitude of its errors with a Euclid-like galaxy survey. First we compute parameter forecasts using the Planck covariance matrix alone, then we add information from the clustering ratio. We find a significant improvement on the constraint of all considered parameters, and in particular an improvement of 40 per cent for the CDM density and 14 per cent for the total neutrino mass.
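In symbols, and assuming the usual notation (the abstract itself does not fix one), the clustering ratio on smoothing scale R and separation r is

    \[
      \eta_R(r) \;\equiv\; \frac{\xi_R(r)}{\sigma_R^{2}} ,
    \]

where \xi_R is the two-point correlation function of the overdensity field smoothed on scale R and \sigma_R^2 is its variance; on linear scales the bias and redshift-space-distortion factors common to numerator and denominator cancel, which is the property the paper extends to massive-neutrino cosmologies.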
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2005-01-01
A new method is presented for the simultaneous estimation of the attitude of a spacecraft and an N-vector of bias parameters. This method uses a probability distribution function defined on the Cartesian product of SO(3), the group of rotation matrices, and the Euclidean space R^N. The Fokker-Planck equation propagates the probability distribution function between measurements, and Bayes's formula incorporates measurement update information. This approach avoids all the issues of singular attitude representations or singular covariance matrices encountered in extended Kalman filters. In addition, the filter has a consistent initialization for a completely unknown initial attitude, owing to the fact that SO(3) is a compact space.
Probabilistic structural analysis of a truss typical for space station
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.
1990-01-01
A three-bay, space, cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties and respective sensitivities associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions, such as the Weibull, exponential, normal, and log-normal. The cumulative distribution functions (CDFs) for the response functions considered, and the sensitivities associated with the primitive variables for a given response, are investigated. These sensitivities help in determining the dominant primitive variables for that response.
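The workflow is easiest to picture with a toy Monte Carlo stand-in (hypothetical primitive variables and a one-line response function in place of the truss model; NESSUS itself uses more efficient probabilistic methods):

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical primitive variables: modulus, load, length; area fixed
    E = rng.lognormal(np.log(70e9), 0.05, n)   # Pa
    P = rng.normal(10e3, 1.5e3, n)             # N
    L = rng.normal(2.0, 0.01, n)               # m
    A = 4e-4                                   # m^2

    resp = P * L / (E * A)                     # toy response (axial stretch)

    # Empirical CDF of the response, plus rank-correlation sensitivities
    x = np.sort(resp)
    print("95th-percentile response:", x[int(0.95 * n)])
    for name, v in [("E", E), ("P", P), ("L", L)]:
        rho, _ = spearmanr(v, resp)
        print(name, round(rho, 3))             # P dominates in this toy case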
Thermodynamical transcription of density functional theory with minimum Fisher information
NASA Astrophysics Data System (ADS)
Nagy, Á.
2018-03-01
Ghosh, Berkowitz and Parr designed a thermodynamical transcription of the ground-state density functional theory and introduced a local temperature that varies from point to point. The theory, however, is not unique because the kinetic energy density is not uniquely defined. Here we derive the expression of the phase-space Fisher information in the GBP theory taking the inverse temperature as the Fisher parameter. It is proved that this Fisher information takes its minimum for the case of constant temperature. This result is consistent with the recently proven theorem that the phase-space Shannon information entropy attains its maximum at constant temperature.
Characterization and control of small-world networks.
Pandit, S A; Amritkar, R E
1999-08-01
Recently, Watts and Strogatz [Nature (London) 393, 440 (1998)] offered an interesting model of small-world networks. Here we concretize the concept of a "faraway" connection in a network by defining a far edge. Our definition is algorithmic and independent of any external parameters such as topology of the underlying space of the network. We show that it is possible to control the spread of an epidemic by using the knowledge of far edges. We also suggest a model for better product advertisement using the far edges. Our findings indicate that the number of far edges can be a good intrinsic parameter to characterize small-world phenomena.
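The authors' algorithmic definition of a far edge is not reproduced in the abstract, so the sketch below uses a stand-in criterion (an edge whose removal leaves only a long detour between its endpoints) on a Watts-Strogatz graph, purely to illustrate the idea:

    import networkx as nx

    # Watts-Strogatz small-world graph: n nodes, k nearest neighbours,
    # rewiring probability p
    G = nx.watts_strogatz_graph(n=200, k=4, p=0.05, seed=1)

    def detour_length(G, u, v):
        # Shortest u-v distance once the direct edge is removed.
        G.remove_edge(u, v)
        try:
            d = nx.shortest_path_length(G, u, v)
        except nx.NetworkXNoPath:
            d = float("inf")
        G.add_edge(u, v)
        return d

    # Proxy for "far" edges (threshold arbitrary, not the paper's definition)
    far_edges = [(u, v) for u, v in list(G.edges) if detour_length(G, u, v) > 10]
    print(len(far_edges), "far edges of", G.number_of_edges())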
The structure and dynamics of tornado-like vortices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nolan, D.S.; Farrell, B.F.
The structure and dynamics of axisymmetric tornado-like vortices are explored with a numerical model of axisymmetric incompressible flow based on recently developed numerical methods. The model is first shown to compare favorably with previous results and is then used to study the effects of varying the major parameters controlling the vortex: the strength of the convective forcing, the strength of the rotational forcing, and the magnitude of the model eddy viscosity. Dimensional analysis of the model problem indicates that the results must depend on only two dimensionless parameters. The natural choices for these two parameters are a convective Reynolds number (based on the velocity scale associated with the convective forcing) and a parameter analogous to the swirl ratio in laboratory models. However, by examining sets of simulations with different model parameters it is found that a dimensionless parameter known as the vortex Reynolds number, which is the ratio of the far-field circulation to the eddy viscosity, is more effective than the conventional swirl ratio for predicting the structure of the vortex. The parameter space defined by the choices for model parameters is further explored with large sets of numerical simulations. For much of this parameter space it is confirmed that the vortex structure and time-dependent behavior depend strongly on the vortex Reynolds number and only weakly on the convective Reynolds number. The authors also find that for higher convective Reynolds numbers, the maximum possible wind speed increases, and the rotational forcing necessary to achieve that wind speed decreases. Physical reasoning is used to explain this behavior, and implications for tornado dynamics are discussed.
Anatomical and morphological study of the subcoracoacromial canal.
Le Reun, O; Lebhar, J; Mateos, F; Voisin, J L; Thomazeau, H; Ropars, M
2016-12-01
Many clinical anatomy studies have looked into how variations in the acromion, coracoacromial ligament (CAL) and subacromial space are associated with rotator cuff injuries. However, no study up to now had defined anatomically the fibro-osseous canal that confines the supraspinatus muscle in the subcoracoacromial space. Through an anatomical study of the scapula, we defined the bone-related parameters of this canal and its anatomical variations. This study on dry bones involved 71 scapulas. With standardised photographs in two orthogonal views (superior and lateral), the surface area of the subcoracoacromial canal and the anatomical parameters making up this canal were defined and measured using image analysis software. The primary analysis evaluated the anatomical parameters of the canal as a function of three canal surface area groups; the secondary analysis looked into how variations in the canal surface area were related to the type of acromion according to the Bigliani classification. Relative to glenoid width, the group with a large canal surface area (L) had significantly less lateral overhang of the acromion than the group with a small canal surface area (S), with ratios of 0.41 ± 0.23 and 0.58 ± 0.3, respectively (P = 0.04). The mean length of the CAL was 46 ± 8 mm in the L group and 39 ± 9 mm in the S group (P = 0.003). The coracoacromial arch angle was 38° ± 11° in the L group and 34° ± 9° in the S group; the canal surface area was smaller in specimens with a smaller coracoacromial arch angle (P = 0.20). Apart from acromial morphology, there could be innate anatomical features of the scapula that predispose people to extrinsic lesions of the supraspinatus tendon (lateral overhang, coracoacromial arch angle) by reducing the subcoracoacromial canal's surface area. Anatomical descriptive study.
Short-term capture of the Earth-Moon system
NASA Astrophysics Data System (ADS)
Qi, Yi; de Ruiter, Anton
2018-06-01
In this paper, the short-term capture (STC) of an asteroid in the Earth-Moon system is proposed and investigated. First, the space condition of STC is analysed and five subsets of the feasible region are defined and discussed. Then, the time condition of STC is studied by parameter scanning in the Sun-Earth-Moon-asteroid restricted four-body problem. Numerical results indicate that there is a clear association between the distributions of the time probability of STC and the five subsets. Next, the influence of the Jacobi constant on STC is examined using the space and time probabilities of STC. Combining the space and time probabilities of STC, we propose a STC index to evaluate the probability of STC comprehensively. Finally, three potential STC asteroids are found and analysed.
Parametric study of laser photovoltaic energy converters
NASA Technical Reports Server (NTRS)
Walker, G. H.; Heinbockel, J. H.
1987-01-01
Photovoltaic converters are of interest for converting laser power to electrical power in a space-based laser power system. This paper describes a model for photovoltaic laser converters and the application of this model to a neodymium laser silicon photovoltaic converter system. A parametric study which defines the sensitivity of the photovoltaic parameters is described. An optimized silicon photovoltaic converter has an efficiency greater than 50 percent for 1000 W/sq cm of neodymium laser radiation.
Dynamically hot galaxies. I - Structural properties
NASA Technical Reports Server (NTRS)
Bender, Ralf; Burstein, David; Faber, S. M.
1992-01-01
Results are reported from an analysis of the structural properties of dynamically hot galaxies which combines central velocity dispersion, effective surface brightness, and effective radius into a new 3-space (k), in which the axes are parameters that are physically meaningful. Hot galaxies are found to divide into groups in k-space that closely parallel conventional morphological classifications, namely, luminous ellipticals, compacts, bulges, bright dwarfs, and dwarf spheroidals. A major sequence is defined by luminous ellipticals, bulges, and most compacts, which together constitute a smooth continuum in k-space. Several properties vary smoothly with mass along this continuum, including bulge-to-disk ratio, radio properties, rotation, degree of velocity anisotropy, and degree of 'unrelaxedness'. A second major sequence comprises dwarf ellipticals and dwarf spheroidals. It is suggested that mass loss is a major factor in hot dwarf galaxies, but the dwarf sequence cannot be simply a mass-loss sequence, as it has the wrong direction in k-space.
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
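A sketch of the two sampling schemes named above, for a hypothetical three-parameter space (the bounds are invented; GRAPE's actual tuning parameters differ):

    import numpy as np
    from scipy.stats import qmc

    lo = np.array([0.5, 10.0, 0.01])   # hypothetical lower bounds
    hi = np.array([2.0, 50.0, 0.10])   # hypothetical upper bounds

    # Stratified grid: 4 strata per axis -> 4**3 = 64 models
    grid = np.stack(np.meshgrid(*[np.linspace(l, h, 4) for l, h in zip(lo, hi)]),
                    axis=-1).reshape(-1, 3)

    # Latin hypercube: sample count chosen independently of dimension
    sampler = qmc.LatinHypercube(d=3, seed=0)
    lhs = qmc.scale(sampler.random(n=16), lo, hi)
    print(grid.shape, lhs.shape)       # (64, 3) (16, 3)

The contrast in model count is the advantage cited above: the grid grows exponentially with the number of parameters, while the Latin hypercube sample size does not have to.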
Fink, Reinhold F
2010-11-07
A rigorous perturbation theory is proposed, which has the same second order energy as the spin-component-scaled Møller-Plesset second order (SCS-MP2) method of Grimme [J. Chem. Phys. 118, 9095 (2003)]. This upgrades SCS-MP2 to a systematically improvable, true wave-function-based method. The perturbation theory is defined by an unperturbed Hamiltonian, Ĥ(0), that contains the ordinary Fock operator and spin operators Ŝ(2) that act either on the occupied or the virtual orbital spaces. Two choices for Ĥ(0) are discussed and the importance of a spin-pure Ĥ(0) is underlined. Like the SCS-MP2 approach, the theory contains two parameters (c(os) and c(ss)) that scale the opposite-spin and the same-spin contributions to the second order perturbation energy. It is shown that these parameters can be determined from theoretical considerations by a Feenberg scaling approach or a fit of the wave functions from the perturbation theory to the exact one from a full configuration interaction calculation. The parameters c(os)=1.15 and c(ss)=0.75 are found to be optimal for a reasonable test set of molecules. The meaning of these parameters and the consequences following from a well defined improved MP method are discussed.
Defining Exercise Performance Metrics for Flight Hardware Development
NASA Technical Reports Server (NTRS)
Beyene, Nahon M.
2004-01-01
The space industry has prevailed over numerous design challenges in the spirit of exploration. Manned space flight entails creating products for use by humans, and the Johnson Space Center has pioneered this effort as NASA's center for manned space flight. NASA astronauts use a suite of flight exercise hardware to maintain strength for extravehicular activities and to minimize losses in muscle mass and bone mineral density. With a cycle ergometer, treadmill, and the Resistive Exercise Device available on the International Space Station (ISS), the Space Medicine community aspires to reproduce physical loading schemes that match exercise performance in Earth's gravity. The resistive exercise device presents the greatest challenge with the duty of accommodating 20 different exercises and many variations on the core set of exercises. This paper presents a methodology for capturing engineering parameters that can quantify proper resistive exercise performance techniques. For each specified exercise, the method provides engineering parameters on hand spacing, foot spacing, and positions of the point of load application at the starting point, midpoint, and end point of the exercise. As humans vary in height and fitness levels, the methodology presents values as ranges. In addition, this method shows engineers the proper load application regions on the human body. The methodology applies to resistive exercise in general and is in use for the current development of a Resistive Exercise Device. Exercise hardware systems must remain available for use and conducive to proper exercise performance as a contributor to mission success. The astronauts depend on exercise hardware to support extended stays aboard the ISS. Future plans for the exploration of Mars and beyond acknowledge the necessity of exercise. Continuous improvement in technology and our understanding of human health maintenance in space will allow us to support the exploration of Mars and the future of space exploration.
NASA Astrophysics Data System (ADS)
Ghosh, Uttam; Banerjee, Joydip; Sarkar, Susmita; Das, Shantanu
2018-06-01
The Klein-Gordon equation is one of the basic steps towards relativistic quantum mechanics. In this paper, we have formulated a fractional Klein-Gordon equation via the Jumarie fractional derivative and found two types of solutions. The zero-mass solution satisfies the photon criteria, and the non-zero-mass solution satisfies the general theory of relativity. Further, we have developed a rest-mass condition which leads us to the concept of a hidden wave. The classical Klein-Gordon equation fails to explain a chargeless system as well as a single-particle system. Using the fractional Klein-Gordon equation, we can overcome this problem. The fractional Klein-Gordon equation also leads to the smoothness parameter, which is a measure of the bumpiness of space. Here, by using this smoothness parameter, we have defined and interpreted the various cases.
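For orientation, the classical equation and a hedged sketch of its fractional form are shown below (natural units assumed; the precise Jumarie-derivative expression used by the authors may differ):

    \[
      \bigl(\partial_t^{2} - \nabla^{2} + m^{2}\bigr)\psi = 0
      \quad\longrightarrow\quad
      \bigl(D_t^{2\alpha} - D_x^{2\alpha} + m^{2}\bigr)\psi = 0 ,
      \qquad 0 < \alpha \le 1 ,
    \]

with D^{\alpha} denoting the Jumarie fractional derivative and \alpha playing the role of the smoothness parameter; \alpha = 1 recovers the classical Klein-Gordon equation.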
Effective theories of universal theories
Wells, James D.; Zhang, Zhengkang
2016-01-20
It is well-known but sometimes overlooked that constraints on the oblique parameters (most notably S and T parameters) are generally speaking only applicable to a special class of new physics scenarios known as universal theories. The oblique parameters should not be associated with Wilson coefficients in a particular operator basis in the effective field theory (EFT) framework, unless restrictions have been imposed on the EFT so that it describes universal theories. Here, we work out these restrictions, and present a detailed EFT analysis of universal theories. We find that at the dimension-6 level, universal theories are completely characterized by 16 parameters. They are conveniently chosen to be: 5 oblique parameters that agree with the commonly-adopted ones, 4 anomalous triple-gauge couplings, 3 rescaling factors for the h^3, hff, hVV vertices, 3 parameters for hVV vertices absent in the Standard Model, and 1 four-fermion coupling of order y_f^2. Furthermore, all these parameters are defined in an unambiguous and basis-independent way, allowing for consistent constraints on the universal theories parameter space from precision electroweak and Higgs data.
ODEion--a software module for structural identification of ordinary differential equations.
Gennemark, Peter; Wedelin, Dag
2014-02-01
In the systems biology field, algorithms for structural identification of ordinary differential equations (ODEs) have mainly focused on fixed model spaces like S-systems and/or on methods that require sufficiently good data so that derivatives can be accurately estimated. There is therefore a lack of methods and software that can handle more general models and realistic data. We present ODEion, a software module for structural identification of ODEs. Main characteristic features of the software are:
• The model space is defined by arbitrary user-defined functions that can be nonlinear in both variables and parameters, such as for example chemical rate reactions.
• ODEion implements computationally efficient algorithms that have been shown to efficiently handle sparse and noisy data. It can run a range of realistic problems that previously required a supercomputer.
• ODEion is easy to use and provides SBML output.
We describe the mathematical problem, the ODEion system itself, and provide several examples of how the system can be used. Available at: http://www.odeidentification.org.
Aspects of the "Design Space" in high pressure liquid chromatography method development.
Molnár, I; Rieger, H-J; Monks, K E
2010-05-07
The present paper describes a multifactorial optimization of 4 critical HPLC method parameters, i.e. gradient time (t(G)), temperature (T), pH and ternary composition (B(1):B(2)), based on 36 experiments. The effect of these experimental variables on critical resolution and selectivity was investigated by systematically varying all four factors simultaneously. The basic element is a gradient time-temperature (t(G)-T) plane, which is repeated at three different pH's of eluent A and at three different ternary compositions of eluent B between methanol and acetonitrile. The so-defined volume enables the investigation of the critical resolution for a part of the Design Space of a given sample. Further improvement of the analysis time, with conservation of the previously optimized selectivity, was possible by reducing the gradient time and increasing the flow rate. Multidimensional robust regions were successfully defined and graphically depicted.
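Counting confirms the design: a 2 x 2 gradient-time/temperature plane repeated at 3 pH values and 3 ternary compositions gives 36 runs. A sketch of the enumeration (the factor levels are invented placeholders; the paper's actual settings are not given in the abstract):

    from itertools import product

    tG   = [20, 60]                     # gradient time, min (assumed levels)
    T    = [30, 60]                     # column temperature, deg C (assumed)
    pH   = [2.5, 3.0, 3.5]              # eluent A pH (assumed)
    B1B2 = ["100:0", "50:50", "0:100"]  # MeOH:MeCN in eluent B (assumed)

    runs = list(product(tG, T, pH, B1B2))
    print(len(runs))                    # 36 experiments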
Optimization of Composite Structures with Curved Fiber Trajectories
NASA Astrophysics Data System (ADS)
Lemaire, Etienne; Zein, Samih; Bruyneel, Michael
2014-06-01
This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach to generate equidistant fiber trajectories based on the Fast Marching Method. Starting with a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method allows determining the fiber orientations resulting from a uniform-thickness layup. The design variables are the parameters defining the position and the shape of the reference curve, which results in very few design variables. Thanks to this efficient parameterization, numerical applications to maximum-stiffness optimization are proposed. The shape of the design space is discussed with regard to local and global optimal solutions.
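A minimal sketch of the equidistant-trajectory idea using the scikit-fmm Fast Marching solver on a flat plate (the paper works on curved meshed surfaces; the reference-curve shape and tow width here are invented):

    import numpy as np
    import skfmm  # scikit-fmm: Fast Marching Method solver

    ny, nx = 200, 200
    y, x = np.meshgrid(np.linspace(0, 1, ny), np.linspace(0, 1, nx),
                       indexing="ij")

    # Reference fiber path as the zero level set of phi (hypothetical shape)
    phi = y - (0.5 + 0.15 * np.sin(2 * np.pi * x))

    # Signed distance to the reference curve via Fast Marching
    dist = skfmm.distance(phi, dx=1.0 / (nx - 1))

    # Equidistant fiber trajectories = level sets of the distance field,
    # spaced by the tow width (0.05 plate units, assumed)
    tow = 0.05
    levels = np.arange(dist.min(), dist.max(), tow)
    print(levels.size, "equidistant trajectories")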
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
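A toy output-error estimator in the spirit of that discussion (a first-order surrogate model with invented truth values; real stability-and-control work uses the full aircraft equations of motion):

    import numpy as np
    from scipy.optimize import minimize

    dt, N = 0.02, 500
    u = np.ones(N)                    # step input (e.g. control deflection)

    def simulate(a, b):
        # First-order surrogate x' = a*x + b*u, Euler-integrated.
        x = np.zeros(N)
        for k in range(N - 1):
            x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
        return x

    rng = np.random.default_rng(2)
    z = simulate(-1.5, 0.8) + 0.01 * rng.standard_normal(N)  # "flight data"

    # Output-error cost; with Gaussian noise its minimizer is the
    # maximum likelihood estimate
    cost = lambda p: np.sum((z - simulate(*p)) ** 2)
    est = minimize(cost, x0=[-1.0, 1.0], method="Nelder-Mead")
    print(est.x)                      # recovers approximately [-1.5, 0.8]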
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects is treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization, and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
Theory of Space Charge Limited Current in Fractional Dimensional Space
NASA Astrophysics Data System (ADS)
Zubair, Muhammad; Ang, L. K.
The concept of fractional dimensional space has been effectively applied in many areas of physics to describe fractional effects on physical systems. We present some recent developments of space charge limited (SCL) current in free space and in solids in the framework of fractional dimensional space, which may account for the effect of imperfectness or roughness of the electrode surface. For SCL current in free space, the governing law is known as the Child-Langmuir (CL) law. Its analog in a trap-free solid (or dielectric) is known as the Mott-Gurney (MG) law. This work extends the one-dimensional CL law and MG law to the case of a D-dimensional fractional space with 0 < D <= 1, where the parameter D defines the degree of roughness of the electrode surface. Such a fractional dimensional space generalization of SCL current theory can be used to characterize charge injection from an imperfect or rough surface in applications related to high current cathodes (CL law) and organic electronics (MG law). In terms of operating regime, the model includes quantum effects when the spacing between the electrodes is small.
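For reference, the two classical one-dimensional limits that any fractional-space version must reproduce at D = 1 (standard results; the D-dependent generalizations follow the authors' derivation and are not reproduced here):

    \[
      J_{\mathrm{CL}} = \frac{4\epsilon_0}{9}\sqrt{\frac{2e}{m}}\,
        \frac{V^{3/2}}{d^{2}} \quad\text{(Child-Langmuir, vacuum)} ,
      \qquad
      J_{\mathrm{MG}} = \frac{9}{8}\,\epsilon\mu\,\frac{V^{2}}{d^{3}}
        \quad\text{(Mott-Gurney, trap-free solid)} ,
    \]

for gap voltage V, gap spacing d, permittivity \epsilon and carrier mobility \mu.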
NASA Technical Reports Server (NTRS)
Dhas, Chris
2000-01-01
NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. CNS previously developed a report which applied the methodology to three space Internet-based communications scenarios for future missions. CNS conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. GRC selected for further analysis the scenario that involved unicast communications between a Low-Earth-Orbit (LEO) International Space Station (ISS) and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer. This report contains a tradeoff analysis of the selected scenario. The analysis examines the performance characteristics of the various protocols and architectures. The tradeoff analysis incorporates the results of a CNS-developed analytical model that examined performance parameters.
Multifaceted Schwinger effect in de Sitter space
NASA Astrophysics Data System (ADS)
Sharma, Ramkishor; Singh, Suprit
2017-07-01
We investigate particle production à la the Schwinger mechanism in an expanding, flat de Sitter patch as is relevant for the inflationary epoch of our Universe. Defining states and particle content in curved spacetime is certainly not a unique process. There being different prescriptions on how that can be done, we have used the Schrödinger formalism to define instantaneous particle content of the state, etc. This allows us to go past the adiabatic regime to which the effect has been restricted in the previous studies and bring out its multifaceted nature in different settings. Each of these settings gives rise to contrasting features and behavior as per the effect of the electric field and expansion rate on the instantaneous mean particle number. We also quantify the degree of classicality of the process during its evolution using a "classicality parameter" constructed out of parameters of the Wigner function to obtain information about the quantum to classical transition in this case.
NASA Technical Reports Server (NTRS)
Priem, Richard J.
1988-01-01
The purpose of this study is to define the requirements of commercially motivated microgravity combustion experiments and the optimal way for the space station to accommodate these requirements. Representatives of commercial organizations, universities and government agencies were contacted. Interest in and needs for microgravity combustion studies are identified for commercial/industrial groups involved in fire safety with terrestrial applications, fire safety with space applications, propulsion and power, industrial burners, or pollution control. From these interests and needs, experiments involving: (1) no flow with solid or liquid fuels; (2) homogeneous mixtures of fuel and air; (3) low flow with solid or liquid fuels; (4) low flow with gaseous fuel; (5) high pressure combustion; and (6) special burner systems are described, and space station resource requirements for each type of experiment are provided. Critical technologies involving the creation of a laboratory environment and methods for combining experimental needs into one experiment in order to make effective use of the space station are discussed. Diagnostic techniques for monitoring combustion process parameters are identified.
Actuators for a space manipulator
NASA Technical Reports Server (NTRS)
Chun, W.; Brunson, P.
1987-01-01
The robotic manipulator can be decomposed into distinct subsystems. One particular area of interest among the mechanical subsystems is electromechanical actuators (or drives). A drive is defined as a motor with an appropriate transmission. An overview is given of existing as well as state-of-the-art drive systems. The scope is limited to space applications. A design philosophy and adequate requirements are the initial steps in designing a space-qualified actuator. The focus is on the d-c motor in conjunction with several types of transmissions (harmonic, tendon, traction, and gear systems). The various transmissions are evaluated and key performance parameters are addressed in detail. Included in the assessment are a shuttle RMS joint and an MSFC drive of the Prototype Manipulator Arm. Compound joints are also investigated. Space imposes a set of requirements for designing a high-performance drive assembly. Its inaccessibility and cryogenic conditions warrant special considerations. Some guidelines concerning these conditions are presented. The goal is to gain a better understanding of designing a space actuator.
NASA Astrophysics Data System (ADS)
Lefebvre, Eric; Helleur, Christopher; Kashyap, Nathan
2008-03-01
Maritime surveillance of coastal regions requires operational staff to integrate a large amount of information from a variety of military and civilian sources. The diverse nature of the information sources makes complete automation difficult. The volume of vessels tracked and the number of sources make it difficult for the limited operations centre staff to fuse all the information manually within a reasonable timeframe. In this paper, a conceptual decision space is proposed to provide a framework for automating the process by which operators integrate the sources needed to maintain Maritime Domain Awareness. The decision space contains all potential pairs of ship tracks that are candidates for fusion. The location of the candidate pairs in this defined space depends on the values of the parameters used to make a decision. In the application presented, three independent parameters are used: the source detection efficiency, the geo-feasibility, and the track quality. One of three decisions is applied to each candidate track pair based on these three parameters: 1. to accept the fusion, in which case the tracks are fused into one track; 2. to reject the fusion, in which case the candidate track pair is removed from the list of potential fusions; and 3. to defer the fusion, in which case no fusion occurs but the candidate track pair remains in the list of potential fusions until sufficient information is provided. This paper demonstrates in an operational setting how the proposed conceptual space is used to optimize the different thresholds for automatic fusion decisions while minimizing the list of unresolved cases left to the operator.
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed. A database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric is computed for each element at each level of detail that is independent of the parameters defining the view space. The multiresolution database and associated strict error metrics are then processed in real time to produce real-time frame representations. View parameters for a view volume, comprising a view location and field of view, are selected. The strict error metric is converted, using the view parameters, to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation data set. First elements that are at least partially within the view volume are selected from the initial representation data set. The first elements are placed in a split queue ordered by the value of the view-dependent error metric. It is determined whether the number of first elements in the queue meets or exceeds a predetermined number of elements or whether the largest error metric is less than or equal to a selected upper error metric bound; if not, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting is continued until the determination is positive, forming a first multiresolution set of elements. The first multiresolution set of elements is then output as reduced-resolution view space data representing the terrain features.
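The split-queue loop described above is compact in code; a minimal sketch with hypothetical element and error types (the neighbour-consistency bookkeeping of a full implementation is omitted):

    import heapq, itertools

    counter = itertools.count()   # tie-breaker for elements of equal error

    def refine(initial_elements, view_error, max_elements, max_error, split):
        # Force split the largest-error element until the element budget is
        # met or the worst view-dependent error drops below the bound.
        queue = []
        for el in initial_elements:
            heapq.heappush(queue, (-view_error(el), next(counter), el))
        while queue:
            err, _, el = queue[0]
            if len(queue) >= max_elements or -err <= max_error:
                break                      # determination positive: stop
            heapq.heappop(queue)
            for child in split(el):        # children re-enter the queue
                heapq.heappush(queue, (-view_error(child), next(counter), child))
        return [el for _, _, el in queue]  # the multiresolution element set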
User-Friendly Interface Developed for a Web-Based Service for SpaceCAL Emulations
NASA Technical Reports Server (NTRS)
Liszka, Kathy J.; Holtz, Allen P.
2004-01-01
A team at the NASA Glenn Research Center is developing a Space Communications Architecture Laboratory (SpaceCAL) for protocol development activities for coordinated satellite missions. SpaceCAL will provide a multiuser, distributed system to emulate space-based Internet architectures, backbone networks, formation clusters, and constellations. As part of a new effort in 2003, building blocks are being defined for an open distributed system to make the satellite emulation test bed accessible through an Internet connection. The first step in creating a Web-based service to control the emulation remotely is providing a user-friendly interface for encoding the data into a well-formed and complete Extensible Markup Language (XML) document. XML provides coding that allows data to be transferred between dissimilar systems. Scenario specifications include control parameters, network routes, interface bandwidths, delay, and bit error rate. Specifications for all satellites, instruments, and ground stations in a given scenario are also included in the XML document. For the SpaceCAL emulation, the XML document can be created using XForms, a Web-based forms language for data collection. Contrary to older forms technology, the interactive user interface makes the science, not the data representation, prevalent. Required versus optional input fields, default values, automatic calculations, data validation, and reuse will help researchers quickly and accurately define missions. XForms can apply any XML schema defined for the test mission to validate data before forwarding it to the emulation facility. New instrument definitions, facilities, and mission types can be added to the existing schema. The first prototype user interface incorporates components for interactive input and form processing. Internet address, data rate, and the location of the facility are implemented with basic form controls, with default values provided for convenience and efficiency using basic XForms operations. Because different emulation scenarios will vary widely in their component structure, more complex operations are used to add and delete facilities.
Earth Orientation and Its Excitations by Atmosphere, Oceans, and Geomagnetic Jerks
NASA Astrophysics Data System (ADS)
Vondrák, J.; Ron, C.
2015-12-01
In addition to torques exerted by the Moon, Sun, and planets, changes of the Earth orientation parameters (EOP) are known to be caused also by excitations by the atmosphere and oceans. Studies have recently appeared hinting that geomagnetic jerks (GMJ, rapid changes of the geomagnetic field) might be associated with sudden changes of phase and amplitude of the EOP (Holme and de Viron 2005, 2013; Gibert and Le Mouël 2008; Malkin 2013). We (Ron et al. 2015) used additional excitations applied at the epochs of GMJ to derive their influence on the motion of the spin axis of the Earth in space (precession-nutation). We demonstrated that this effect, when combined with the influence of the atmosphere and oceans, substantially improves the agreement with celestial pole offsets observed by Very Long-Baseline Interferometry. Here we concentrate on the possible influence of GMJ on temporal changes of all five Earth orientation parameters defining the complete orientation of the Earth in space. Numerical integration of Brzeziński's broad-band Liouville equations (Brzeziński 1994) with atmospheric and oceanic excitations, combined with expected GMJ effects, is used to derive EOP and compare them with their observed values. We demonstrate that the agreement between all five Earth orientation parameters integrated by this method and those observed by space geodesy is improved substantially if the influence of additional excitations at GMJ epochs is added to excitations by the atmosphere and oceans.
Population Coding of Visual Space: Modeling
Lehky, Sidney R.; Sereno, Anne B.
2011-01-01
We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
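A minimal sketch of the modeling idea, in which relative stimulus positions are recovered from unlabeled firing rates; the RF counts, sizes, and dispersion are illustrative, and scikit-learn's MDS stands in for whatever scaling variant the authors used:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Hypothetical population: 200 Gaussian RFs whose centers are dispersed
# around the fovea; stimuli lie on a grid in 2D space.
n_cells, rf_sigma, dispersion = 200, 3.0, 5.0
centers = rng.normal(0.0, dispersion, size=(n_cells, 2))
stimuli = np.stack(np.meshgrid(np.linspace(-5, 5, 8),
                               np.linspace(-5, 5, 8)), -1).reshape(-1, 2)

# Unlabeled population responses: firing rates only, no RF locations attached.
d2 = ((stimuli[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
rates = np.exp(-d2 / (2 * rf_sigma ** 2))

# Intrinsic coding: multidimensional scaling on inter-stimulus distances in
# the population response space recovers relative positions (up to rotation,
# reflection, and scale).
embedding = MDS(n_components=2, dissimilarity="euclidean").fit_transform(rates)
```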
Guaranteed convergence of the Hough transform
NASA Astrophysics Data System (ADS)
Soffer, Menashe; Kiryati, Nahum
1995-01-01
The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into a problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
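A compact sketch of the transform under discussion, assuming points normalized to the unit disc and using simple nearest-bin voting rather than the paper's continuous voting kernel; the grid sizes are arbitrary placeholders for the quantization intervals the paper bounds:

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=1.0):
    """Straight-line Hough transform with the normal parameterization
    rho = x*cos(theta) + y*sin(theta); returns the (theta, rho) cell
    that collected the most votes."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    accumulator = np.zeros((n_theta, n_rho))
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        # Quantize rho in [-rho_max, rho_max]; bin pitch stands in for the
        # quantization interval whose safe size the paper derives.
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        accumulator[np.arange(n_theta)[ok], idx[ok]] += 1
    i, j = np.unravel_index(accumulator.argmax(), accumulator.shape)
    return thetas[i], (2 * rho_max) * j / (n_rho - 1) - rho_max
```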
NASA Astrophysics Data System (ADS)
Hellen, Edward H.; Volkov, Evgeny
2018-09-01
We study the dynamical regimes demonstrated by a pair of identical 3-element ring oscillators (a reduced version of the synthetic 3-gene genetic Repressilator) coupled using the design of the 'quorum sensing' (QS) process natural to interbacterial communication. In this work QS is implemented as an additional network incorporating elements of the ring as both the source and the activation target of the fast-diffusion QS signal. This version of indirect nonlinear coupling, in cooperation with a reasonable extension of the parameters which control the properties of the isolated oscillators, exhibits the formation of a very rich array of attractors. Using a parameter space defined by the individual oscillator amplitude and the coupling strength, we found an extended area of parameter space where the identical oscillators demonstrate quasiperiodicity, which evolves to chaos via period doubling of either resonant limit cycles or complex antiphase symmetric limit cycles with five winding numbers. The symmetric chaos extends over large parameter areas up to its loss of stability, followed by a system transition to an unexpected mode: an asymmetric limit cycle with a winding number of 1:2. In turn, after long evolution across the parameter space, this cycle demonstrates a period-doubling cascade which restores the symmetry of the dynamics by forming symmetric chaos, which nevertheless preserves the memory of the asymmetric limit cycles in the form of stochastic alternating "polarization" of the time series. All stable attractors coexist with some others, forming remarkable and complex multistability, including the coexistence of torus and limit cycles, chaos and regular attractors, and symmetric and asymmetric regimes. We traced the paths and bifurcations leading to all areas of chaos, and present a detailed map of all transformations of the dynamics.
NASA Technical Reports Server (NTRS)
1971-01-01
Preliminary design and analysis of purge system concepts and purge subsystem approaches are defined and evaluated. Acceptable purge subsystem approaches were combined into four predesign layouts which are presented for comparison and evaluation. Two predesigns were selected for further detailed design and evaluation for eventual selection of the best design for a full scale test configuration. An operation plan is included as an appendix for reference to shuttle-oriented operational parameters.
2006-04-01
dosages, seemed to improve the freeze–thaw durability of concrete. Phase II found what appears to be a maximum dosage after which freeze–thaw… durability becomes a concern. That is because cement hydration can only create a finite amount of space to absorb these chemicals. Thus, for freeze… protection, admixture dosages should be designed according to water content as specified in Phase I, while, for freeze–thaw durability, admixture dosages
NASA Technical Reports Server (NTRS)
Dulac, J.; Latour, J.
1991-01-01
The DSN (Deep Space Network) mission support requirements for Telecom 2-A (TC2A) are summarized. The Telecom 2-A will provide high-speed data link applications, telephone, and television service between France and overseas territories. The mission objectives are outlined and the DSN support requirements are defined through the presentation of tables and narratives describing the spacecraft flight profile; DSN support coverage; frequency assignments; support parameters for telemetry, command and support systems; and tracking support responsibility.
The magnetotelluric response over 2D media with resistivity frequency dispersion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauriello, P.; Patella, D.; Siniscalchi, A.
1996-09-01
The authors investigate the magnetotelluric response of two-dimensional bodies characterized by the presence of low-frequency dispersion phenomena of the electrical parameters. The Cole-Cole dispersion model is assumed to represent the frequency dependence of the impedivity complex function, defined as the inverse of Stoyer's admittivity complex parameter. To simulate real geological situations, they consider three structural models, representing a sedimentary basin, a geothermal system and a magma chamber, assumed to be partially or totally dispersive. From a detailed study of the frequency and space behaviors of the magnetotelluric parameters, taking known non-dispersive results as reference, they outline the main peculiarities of the local distortion effects caused by the presence of dispersion in the target media. Finally, they discuss the interpretive errors which can be made by neglecting the dispersion phenomena. The apparent dispersion function, which was defined in a previous paper to describe similar effects in the one-dimensional case, is again used as a reliable indicator of the location, shape and spatial extent of the dispersive bodies. The general result of this study is a marked improvement in the resolution power of the magnetotelluric method.
A New Approach to Galaxy Morphology. I. Analysis of the Sloan Digital Sky Survey Early Data Release
NASA Astrophysics Data System (ADS)
Abraham, Roberto G.; van den Bergh, Sidney; Nair, Preethi
2003-05-01
In this paper we present a new statistic for quantifying galaxy morphology based on measurements of the Gini coefficient of galaxy light distributions. This statistic is easy to measure and is commonly used in econometrics to measure how wealth is distributed in human populations. When applied to galaxy images, the Gini coefficient provides a quantitative measure of the inequality with which a galaxy's light is distributed among its constituent pixels. We measure the Gini coefficient of local galaxies in the Early Data Release of the Sloan Digital Sky Survey and demonstrate that this quantity is closely correlated with measurements of central concentration, but with significant scatter. This scatter is almost entirely due to variations in the mean surface brightness of galaxies. By exploring the distribution of galaxies in the three-dimensional parameter space defined by the Gini coefficient, central concentration, and mean surface brightness, we show that all nearby galaxies lie on a well-defined two-dimensional surface (a slightly warped plane) embedded within a three-dimensional parameter space. By associating each galaxy sample with the equation of this plane, we can encode the morphological composition of the entire SDSS g*-band sample using the following three numbers: {22.451, 5.366, 7.010}. The i*-band sample is encoded as {22.149, 5.373, 7.627}.
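A short sketch of one common discrete estimator of the Gini coefficient applied to pixel fluxes (the paper's exact estimator and background handling are not reproduced here):

```python
import numpy as np

def gini(pixels):
    """Gini coefficient of a galaxy's pixel fluxes: near 0 if the light is
    spread evenly over the pixels, approaching 1 if it is concentrated in a
    few pixels.  Uses the standard mean-difference form from econometrics."""
    x = np.sort(np.asarray(pixels, dtype=float).ravel())
    n = x.size
    return ((2 * np.arange(1, n + 1) - n - 1) * x).sum() / (n * x.sum())
```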
Process characterization and Design Space definition.
Hakemeyer, Christian; McKnight, Nathan; St John, Rick; Meier, Steven; Trexler-Schmidt, Melody; Kelley, Brian; Zettl, Frank; Puskeiler, Robert; Kleinjans, Annika; Lim, Fred; Wurth, Christine
2016-09-01
Quality by design (QbD) is a global regulatory initiative with the goal of enhancing pharmaceutical development through the proactive design of pharmaceutical manufacturing processes and controls to consistently deliver the intended performance of the product. The principles of pharmaceutical development relevant to QbD are described in the ICH guidance documents (ICH Q8-11). An integrated set of risk assessments and their related elements developed at Roche/Genentech were designed to provide an overview of product and process knowledge for the production of a recombinant monoclonal antibody (MAb). This chapter describes the tools used for the characterization and validation of the MAb manufacturing process under the QbD paradigm. This comprises risk assessments for the identification of potential Critical Process Parameters (pCPPs), statistically designed experimental studies, as well as studies assessing the linkage of the unit operations. The outcome of the studies is the classification of process parameters according to their criticality and the definition of appropriate acceptable ranges of operation. The process and product knowledge gained in these studies can lead to the approval of a Design Space. Additionally, the information gained in these studies is used to define the 'impact' which the manufacturing process can have on the variability of the CQAs, which in turn is used to define the testing and monitoring strategy. Copyright © 2016 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.
Robust Multivariable Optimization and Performance Simulation for ASIC Design
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application-specific-integrated-circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in the development cycle of a system due to the time required for testing and qualification severely limiting opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and to analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation.
Vavoulis, Dimitrios V.; Straub, Volko A.; Aston, John A. D.; Feng, Jianfeng
2012-01-01
Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent) in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply on compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models. PMID:22396632
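A heavily simplified sketch of the self-organizing state-space idea: a bootstrap particle filter with the unknown parameter appended to the state and given small artificial dynamics. The transition f, observation map h, and all noise levels are illustrative assumptions, not the conductance-based models used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(y, f, h, x0, theta0, n_particles=1000,
                    sigma_x=0.1, sigma_y=0.2, sigma_theta=0.01):
    """Joint state/parameter estimation sketch: theta is carried alongside x
    in every particle and perturbed slightly each step (the 'self-organizing'
    device), so resampling concentrates particles on parameter values that
    explain the data."""
    x = rng.normal(x0, 1.0, n_particles)
    theta = rng.normal(theta0, 0.5, n_particles)
    for yt in y:
        x = f(x, theta) + rng.normal(0, sigma_x, n_particles)
        theta = theta + rng.normal(0, sigma_theta, n_particles)
        w = np.exp(-0.5 * ((yt - h(x)) / sigma_y) ** 2) + 1e-300
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        x, theta = x[idx], theta[idx]
    return x.mean(), theta.mean()  # posterior means of state and parameter
```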
Digital Beamforming Synthetic Aperture Radar Developments at NASA Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Rincon, Rafael; Fatoyinbo, Temilola; Osmanoglu, Batuhan; Lee, Seung Kuk; Du Toit, Cornelis F.; Perrine, Martin; Ranson, K. Jon; Sun, Guoqing; Deshpande, Manohar; Beck, Jaclyn;
2016-01-01
Advanced Digital Beamforming (DBF) Synthetic Aperture Radar (SAR) technology is an area of research and development pursued at the NASA Goddard Space Flight Center (GSFC). Advanced SAR architectures enhance radar performance and open a new set of capabilities in radar remote sensing. DBSAR-2 and EcoSAR are two state-of-the-art radar systems recently developed and tested. These new instruments employ multiple input-multiple output (MIMO) architectures characterized by multi-mode operation, software-defined waveform generation, digital beamforming, and configurable radar parameters. The instruments have been developed to support several disciplines in Earth and planetary sciences. This paper describes the radars' advanced features and reports on the latest SAR processing and calibration efforts.
Model reductions using a projection formulation
NASA Technical Reports Server (NTRS)
De Villemagne, Christian; Skelton, Robert E.
1987-01-01
A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space, and the orthogonal complement of whose null space, are chosen among the ranges of generalized controllability and observability matrices. The reduced-order models match various combinations (chosen by the designer) of four types of parameters of the full-order system associated with (1) low-frequency response, (2) high-frequency response, (3) low-frequency power spectral density, and (4) high-frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, has extreme flexibility to embrace combinations of existing methods, and offers some new features.
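A sketch of the projection idea, restricted to the single-input, single-output case for brevity; plain Krylov blocks are used here (matching high-frequency Markov parameters), whereas the paper's generalized controllability and observability matrices allow the other combinations listed above:

```python
import numpy as np

def reduce_model(A, B, C, q):
    """Petrov-Galerkin sketch of an oblique-projection reduction: the range
    of V and the range of W (orthogonal to the projector's null space) are
    built from controllability/observability directions, assuming W^T V is
    invertible.  B is a column vector and C a row vector (SISO)."""
    V = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(q)])
    W = np.hstack([np.linalg.matrix_power(A.T, k) @ C.T for k in range(q)])
    V, _ = np.linalg.qr(V)
    W, _ = np.linalg.qr(W)
    E = W.T @ V  # oblique projector V (W^T V)^{-1} W^T, applied implicitly
    Ar = np.linalg.solve(E, W.T @ A @ V)
    Br = np.linalg.solve(E, W.T @ B)
    Cr = C @ V
    return Ar, Br, Cr
```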
Estimates of the atmospheric parameters of M-type stars: a machine-learning perspective
NASA Astrophysics Data System (ADS)
Sarro, L. M.; Ordieres-Meré, J.; Bello-García, A.; González-Marcos, A.; Solano, E.
2018-05-01
Estimating the atmospheric parameters of M-type stars has been a difficult task due to the lack of simple diagnostics in the stellar spectra. We aim at uncovering good sets of predictive features of stellar atmospheric parameters (Teff, log (g), [M/H]) in spectra of M-type stars. We define two types of potential features (equivalent widths and integrated flux ratios) able to explain the atmospheric physical parameters. We search the space of feature sets using a genetic algorithm that evaluates solutions by their prediction performance in the framework of the BT-Settl library of stellar spectra. Thereafter, we construct eight regression models using different machine-learning techniques and compare their performances with those obtained using the classical χ2 approach and independent component analysis (ICA) coefficients. Finally, we validate the various alternatives using two sets of real spectra from the NASA Infrared Telescope Facility (IRTF) and Dwarf Archives collections. We find that the cross-validation errors are poor measures of the performance of regression models in the context of physical parameter prediction in M-type stars. For R ∼ 2000 spectra with signal-to-noise ratios typical of the IRTF and Dwarf Archives, feature selection with genetic algorithms or alternative techniques produces only marginal advantages with respect to representation spaces that are unconstrained in wavelength (full spectrum or ICA). We make available the atmospheric parameters for the two collections of observed spectra as online material.
NASA Astrophysics Data System (ADS)
Medina, H.; Romano, N.; Chirico, G. B.
2014-07-01
This study presents a dual Kalman filter (DSUKF - dual standard-unscented Kalman filter) for retrieving states and parameters controlling the soil water dynamics in a homogeneous soil column, by assimilating near-surface state observations. The DSUKF couples a standard Kalman filter for retrieving the states of a linear solver of the Richards equation, and an unscented Kalman filter for retrieving the parameters of the soil hydraulic functions, which are defined according to the van Genuchten-Mualem closed-form model. The accuracy and the computational expense of the DSUKF are compared with those of the dual ensemble Kalman filter (DEnKF) implemented with a nonlinear solver of the Richards equation. Both the DSUKF and the DEnKF are applied with two alternative state-space formulations of the Richards equation, differentiated by the type of variable employed for representing the states: either the soil water content (θ) or the soil water matric pressure head (h). The comparison analyses are conducted with reference to synthetic time series of the true states, noise-corrupted observations, and synthetic time series of the meteorological forcing. The performance of the retrieval algorithms is examined accounting for the effects exerted on the output by the input parameters, the observation depth and assimilation frequency, as well as by the relationship between retrieved states and assimilated variables. The uncertainty of the states retrieved with the DSUKF is considerably reduced, for any initial wrong parameterization, with similar accuracy but less computational effort than the DEnKF, when the latter is implemented with ensembles of 25 members. For ensemble sizes of the same order as those involved in the DSUKF, the DEnKF fails to provide reliable posterior estimates of states and parameters. The retrieval performance for the soil hydraulic parameters is strongly affected by several factors, such as the initial guess of the unknown parameters, the wet or dry range of the retrieved states, the boundary conditions, and the form (h-based or θ-based) of the state-space formulation. Several analyses are reported to show that the identifiability of the saturated hydraulic conductivity is hindered by its strong correlation with other parameters of the soil hydraulic functions defined according to the van Genuchten-Mualem closed-form model.
Xia, J.; Xu, Y.; Miller, R.D.; Chen, C.
2006-01-01
A Gibson half-space model (a non-layered Earth model) has the shear modulus varying linearly with depth in an inhomogeneous elastic half-space. In a half-space of sedimentary granular soil under a geostatic state of initial stress, the density and the Poisson's ratio do not vary considerably with depth. In such an Earth body, the dynamic shear modulus is the parameter that mainly affects the dispersion of propagating waves. We have estimated shear-wave velocities in the compressible Gibson half-space by inverting Rayleigh-wave phase velocities. An analytical dispersion law of Rayleigh-type waves in a compressible Gibson half-space is given in an algebraic form, which makes our inversion process extremely simple and fast. The convergence of the weighted damping solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Calculation efficiency is achieved by reconstructing a weighted damping solution using singular value decomposition techniques. The main advantage of this algorithm is that only three parameters define the compressible Gibson half-space model. Theoretically, to determine the model by the inversion, only three Rayleigh-wave phase velocities at different frequencies are required. This is useful in practice where Rayleigh-wave energy is only developed in a limited frequency range or at certain frequencies, as with data acquired at man-made structures such as dams and levees. Two real examples are presented and verified by borehole S-wave velocity measurements. The results of these real examples are also compared with the results of the layered-Earth model. © Springer 2006.
NASA Technical Reports Server (NTRS)
Wissinger, A.; Scott, R. M.; Peters, W.; Augustyn, W., Jr.; Arnold, R.; Offner, A.; Damast, M.; Boyce, B.; Kinnaird, R.; Mangus, J. D.
1971-01-01
A means is presented whereby the effect of various changes in the most important parameters of a three-meter-aperture space astronomy telescope can be evaluated to determine design trends and to optimize the optical design configuration. Methods are defined for evaluating the theoretical optical performance of axisymmetric, centrally obscured telescopes based upon the intended astronomy research usage. A series of design parameter variations is presented to determine the optimum telescope configuration. The design optimum requires very fast primary mirrors, so the study also examines the current state of the art in fabricating large, fast primary mirrors. The conclusion is that a 3-meter primary mirror having a focal ratio as low as f/2 is feasible using currently established techniques.
Phenotypic models of evolution and development: geometry as destiny.
François, Paul; Siggia, Eric D
2012-12-01
Quantitative models of development that consider all relevant genes typically are difficult to fit to embryonic data alone and have many redundant parameters. Computational evolution supplies models of phenotype with relatively few variables and parameters, which allows the patterning dynamics to be reduced to a geometrical picture of how the state of a cell moves. The clock and wavefront model, which defines the phenotype of somitogenesis, can be represented as a sequence of two discrete dynamical transitions (bifurcations). The expression-time to space map for Hox genes and the posterior dominance rule are phenotypes that naturally follow from computational evolution without considering the genetics of Hox regulation. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Suit, William T.; Schiess, James R.
1988-01-01
The Discovery vehicle was found to have longitudinal and lateral aerodynamic characteristics similar to those of the Columbia and Challenger vehicles. The values of the lateral and longitudinal parameters are compared with the preflight data book. The lateral parameters showed the same trends as the data book. With the exception of C_lβ for Mach numbers greater than 15, and C_nδr for Mach numbers greater than 2 and for Mach numbers less than 1.5, where the variation boundaries were not well defined, ninety percent of the extracted values of the lateral parameters fell within the predicted variations. The longitudinal parameters showed more scatter, but scattered about the preflight predictions. With the exception of the Mach 1.5 to 0.5 region of the flight envelope, the preflight predictions seem a reasonable representation of the Shuttle aerodynamics. The models determined accounted for ninety percent of the actual flight time histories.
Identification of the Parameters of the Menétrey-Willam Failure Surface of Calcium Silicate Units
NASA Astrophysics Data System (ADS)
Radosław, Jasiński
2017-10-01
The identification of the parameters of the Menétrey-Willam surface for solid materials such as concrete, masonry or autoclaved aerated concrete is not complicated. It is much more difficult to identify the failure parameters of masonry units with cavities. This paper describes the concept of identifying the parameters of the Menétrey-Willam failure surface (M-W-3) with reference to masonry units with vertical cavities. The M-W-3 surface is defined by the uniaxial compressive strength fc, the uniaxial tensile strength ft and the eccentricity of the elliptical function e. A test stand was built to identify the surface parameters. It was used to test the behaviour of masonry units under triaxial stress and to conduct tests on whole masonry units in the uniaxial state. Results from tests on tens of silicate masonry units are presented in the Haigh-Westergaard (H-W) space. Statistical analyses were used to identify the shape of the surface meridian, and then to determine the eccentricity of the elliptical function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barszez, Anne-Marie; Camelbeeck, Thierry; Plumier, Andre
Northwest Europe is a region in which damaging earthquakes occur. Assessing the risk of damage is useful, but it is not easy work based on exact science. In this paper, we propose a general tool for a first-level assessment of seismic risks (rapid diagnosis). General methodological aspects are presented. For a given building, the risk is represented by a volume in a multi-dimensional space. This space is defined by axes representing the main parameters that have an influence on the risk. We notably express the importance of including a parameter to consider the specific value of cultural heritage. We then apply the proposed tool to analyze and compare methods of seismic risk assessment used in Belgium, which differ by the spatial scale of the studied area. Study cases for the whole Belgian territory and for parts of the cities of Liege and Mons (BE) also aim to give some sense of the overall risk in Belgium.
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi
2011-08-01
The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes the communication cost and improves the fault-tolerance property, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the set of most promising solutions in a large design space, with low communication costs that are optimum in some cases. Another evaluated parameter, the vulnerability index, is then considered as a means of estimating the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping which considers the trade-off between these two parameters, a linear function is defined and introduced; a sketch is given below. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
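A minimal sketch of the kind of linear trade-off function described; the weight, the normalization assumption, and the example values are all illustrative:

```python
def mapping_score(comm_cost, vulnerability, alpha=0.5):
    """Linear trade-off between the two evaluated parameters; both inputs are
    assumed normalized to [0, 1] across the produced mappings, and alpha sets
    the designer's priority between cost and fault tolerance."""
    return alpha * comm_cost + (1.0 - alpha) * vulnerability

# Pick the mapping minimizing the combined score (hypothetical values).
mappings = [("m1", 0.30, 0.60), ("m2", 0.45, 0.20), ("m3", 0.35, 0.40)]
best = min(mappings, key=lambda m: mapping_score(m[1], m[2], alpha=0.7))
```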
Regional Differences in Tropical Lightning Distributions.
NASA Astrophysics Data System (ADS)
Boccippio, Dennis J.; Goodman, Steven J.; Heckman, Stan
2000-12-01
Observations from the National Aeronautics and Space Administration Optical Transient Detector (OTD) and Tropical Rainfall Measuring Mission (TRMM)-based Lightning Imaging Sensor (LIS) are analyzed for variability between land and ocean, various geographic regions, and different (objectively defined) convective 'regimes.' The bulk of the order-of-magnitude differences between land and ocean regional flash rates are accounted for by differences in storm spacing (density) and/or frequency of occurrence, rather than differences in storm instantaneous flash rates, which only vary by a factor of 2 on average. Regional variability in cell density and cell flash rates closely tracks differences in 85-GHz microwave brightness temperatures. Monotonic relationships are found with the gross moist stability of the tropical atmosphere, a large-scale 'adjusted state' parameter. This result strongly suggests that it will be possible, using TRMM observations, to objectively test numerical or theoretical predictions of how mesoscale convective organization interacts with the larger-scale environment. Further parameters are suggested for a complete objective definition of tropical convective regimes.
Renormalization group approach to symmetry protected topological phases
NASA Astrophysics Data System (ADS)
van Nieuwenburg, Evert P. L.; Schnyder, Andreas P.; Chen, Wei
2018-04-01
A defining feature of a symmetry protected topological phase (SPT) in one dimension is the degeneracy of the Schmidt values for any given bipartition. For the system to go through a topological phase transition separating two SPTs, the Schmidt values must either split or cross at the critical point in order to change their degeneracies. A renormalization group (RG) approach based on this splitting or crossing is proposed, through which we obtain an RG flow that identifies the topological phase transitions in the parameter space. Our approach can be implemented numerically in an efficient manner, for example, using the matrix product state formalism, since only the largest first few Schmidt values need to be calculated with sufficient accuracy. Using several concrete models, we demonstrate that the critical points and fixed points of the RG flow coincide with the maxima and minima of the entanglement entropy, respectively, and the method can serve as a numerically efficient tool to analyze interacting SPTs in the parameter space.
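A short sketch of the quantity the RG tracks: the Schmidt values of a bipartition, obtained by reshaping the state vector and taking its singular values (the matrix product state machinery is omitted for brevity):

```python
import numpy as np

def schmidt_values(psi, dim_left, dim_right):
    """Schmidt values of a pure state for a left/right bipartition: reshape
    the amplitude vector into a dim_left x dim_right matrix and take its
    singular values.  The RG rule above tracks how the degeneracies of the
    few largest of these values split or cross as parameters vary."""
    m = np.asarray(psi).reshape(dim_left, dim_right)
    return np.linalg.svd(m, compute_uv=False)

# Example: the two Schmidt values of a two-qubit singlet are degenerate.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
print(schmidt_values(singlet, 2, 2))  # -> [0.707..., 0.707...]
```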
NASA Astrophysics Data System (ADS)
Naseralavi, S. S.; Salajegheh, E.; Fadaee, M. J.; Salajegheh, J.
2014-06-01
This paper presents a technique for damage detection in structures under unknown periodic excitations using the transient displacement response. The method is capable of identifying the damage parameters without finding the input excitations. We first define the concept of displacement space as a linear space in which each point represents displacements of structure under an excitation and initial condition. Roughly speaking, the method is based on the fact that structural displacements under free and forced vibrations are associated with two parallel subspaces in the displacement space. Considering this novel geometrical viewpoint, an equation called kernel parallelization equation (KPE) is derived for damage detection under unknown periodic excitations and a sensitivity-based algorithm for solving KPE is proposed accordingly. The method is evaluated via three case studies under periodic excitations, which confirm the efficiency of the proposed method.
Multi-Objective Reinforcement Learning-based Deep Neural Networks for Cognitive Space Communications
NASA Technical Reports Server (NTRS)
Ferreria, Paulo; Paffenroth, Randy; Wyglinski, Alexander M.; Hackett, Timothy; Bilen, Sven; Reinhart, Richard; Mortensen, Dale
2017-01-01
Future communication subsystems of space exploration missions can potentially benefit from software-defined radios (SDRs) controlled by machine learning algorithms. In this paper, we propose a novel hybrid radio resource allocation management control algorithm that integrates multi-objective reinforcement learning and deep artificial neural networks. The objective is to efficiently manage communications system resources by monitoring performance functions with common dependent variables that result in conflicting goals. The uncertainty in the performance of thousands of different possible combinations of radio parameters makes the trade-off between exploration and exploitation in reinforcement learning (RL) much more challenging for future critical space-based missions. Thus, the system should spend as little time as possible on exploring actions, and whenever it explores an action, it should perform at acceptable levels most of the time. The proposed approach enables on-line learning by interactions with the environment and restricts poor resource allocation performance through virtual environment exploration. Improvements in the multiobjective performance can be achieved via transmitter parameter adaptation on a packet-basis, with poorly predicted performance promptly resulting in rejected decisions. Simulations presented in this work considered the DVB-S2 standard adaptive transmitter parameters and additional ones expected to be present in future adaptive radio systems. Performance results are provided by analysis of the proposed hybrid algorithm when operating across a satellite communication channel from Earth to GEO orbit during clear sky conditions. The proposed approach constitutes part of the core cognitive engine proof-of-concept to be delivered to the NASA Glenn Research Center SCaN Testbed located onboard the International Space Station.
NASA Astrophysics Data System (ADS)
Zhuk, Alexander; Chopovsky, Alexey; Fakhr, Seyed Hossein; Shulga, Valerii; Wei, Han
2017-11-01
In a multidimensional Kaluza-Klein model with Ricci-flat internal space, we study the gravitational field in the weak-field limit. This field is created by two coupled sources. First, there is a point-like massive body which has a dust-like equation of state in the external space and an arbitrary parameter Ω of the equation of state in the internal space. The second source is a static spherically symmetric massive scalar field centered at the origin where the point-like massive body is. The perturbed metric coefficients found are used to calculate the parameterized post-Newtonian (PPN) parameter γ. We define under which conditions γ can be very close to unity, in accordance with the relativistic gravitational tests in the solar system. This can take place for both massive and massless scalar fields. For example, to have γ ≈ 1 in the solar system, the mass of the scalar field should be μ ≳ 5.05 × 10⁻⁴⁹ g ∼ 2.83 × 10⁻¹⁶ eV. In all cases, we arrive at the same conclusion: to be in agreement with the relativistic gravitational tests, the gravitating mass should have tension, Ω = −1/2.
A global solution to the Schrödinger equation: From Henstock to Feynman
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nathanson, Ekaterina S., E-mail: enathanson@ggc.edu; Jørgensen, Palle E. T., E-mail: palle-jorgensen@uiowa.edu
2015-09-15
One of the key elements of Feynman's formulation of non-relativistic quantum mechanics is the so-called Feynman path integral. It plays an important role in the theory, but it appears as a postulate based on intuition rather than a well-defined object. All previous attempts to supply Feynman's theory with rigorous mathematical underpinning, based on the physical requirements, have not been satisfactory. The difficulty comes from the need to define a measure on the infinite-dimensional space of paths and to create an integral that would possess all of the properties requested by Feynman. In the present paper, we consider a new approach to defining the Feynman path integral, based on the theory developed by Muldowney [A Modern Theory of Random Variable: With Applications in Stochastic Calculus, Financial Mathematics, and Feynman Integration (John Wiley & Sons, Inc., New Jersey, 2012)]. Muldowney uses the Henstock integration technique and deals with the non-absolute integrability of the Fresnel integrals in order to obtain a representation of the Feynman path integral as a functional. This approach offers a mathematically rigorous definition supporting Feynman's intuitive derivations. But in his work, Muldowney gives only local in space-time solutions. A physical solution to the non-relativistic Schrödinger equation must be global, and it must be given in the form of a unitary one-parameter group in L²(ℝⁿ). The purpose of this paper is to show that a system of one-dimensional local Muldowney's solutions may be extended to yield a global solution. Moreover, the global extension can be represented by a unitary one-parameter group acting in L²(ℝⁿ).
Population Synthesis of Radio & Gamma-Ray Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Frederick, Sara; Gonthier, P. L.; Harding, A. K.
2014-01-01
In recent years, the number of known gamma-ray millisecond pulsars (MSPs) in the Galactic disk has risen substantially thanks to confirmed detections by Fermi Gamma-ray Space Telescope (Fermi). We have developed a new population synthesis of gamma-ray and radio MSPs in the galaxy which uses Markov Chain Monte Carlo techniques to explore the large and small worlds of the model parameter space and allows for comparisons of the simulated and detected MSP distributions. The simulation employs empirical radio and gamma-ray luminosity models that are dependent upon the pulsar period and period derivative with freely varying exponents. Parameters associated with the birth distributions are also free to vary. The computer code adjusts the magnitudes of the model luminosities to reproduce the number of MSPs detected by a group of ten radio surveys, thus normalizing the simulation and predicting the MSP birth rates in the Galaxy. Computing many Markov chains leads to preferred sets of model parameters that are further explored through two statistical methods. Marginalized plots define confidence regions in the model parameter space using maximum likelihood methods. A secondary set of confidence regions is determined in parallel using Kuiper statistics calculated from comparisons of cumulative distributions. These two techniques provide feedback to affirm the results and to check for consistency. Radio flux and dispersion measure constraints have been imposed on the simulated gamma-ray distributions in order to reproduce realistic detection conditions. The simulated and detected distributions agree well for both sets of radio and gamma-ray pulsar characteristics, as evidenced by our various comparisons.
Correlation and agreement of a digital and conventional method to measure arch parameters.
Nawi, Nes; Mohamed, Alizae Marny; Marizan Nor, Murshida; Ashar, Nor Atika
2018-01-01
The aim of the present study was to determine the overall reliability and validity of arch parameters measured digitally compared to conventional measurement. A sample of 111 plaster study models of Down syndrome (DS) patients was digitized using a blue-light three-dimensional (3D) scanner. Digital and manual measurements of the defined parameters were performed using Geomagic analysis software (Geomagic Studio 2014 software, 3D Systems, Rock Hill, SC, USA) on the digital models and with a digital calliper (Tuten, Germany) on the plaster study models. Both measurements were repeated twice to validate the intraexaminer reliability based on intraclass correlation coefficients (ICCs), using the independent t test and Pearson's correlation, respectively. The Bland-Altman method of analysis was used to evaluate the agreement of the measurements between the digital and plaster models. No statistically significant differences (p > 0.05) were found between the manual and digital methods when measuring the arch width, arch length, and space analysis. In addition, all parameters showed a significant correlation coefficient (r ≥ 0.972; p < 0.01) between all digital and manual measurements. Furthermore, a positive agreement between digital and manual measurements of the arch width (90-96%) and of the arch length and space analysis (95-99%) was also demonstrated using the Bland-Altman method. These results demonstrate that 3D blue-light scanning and measurement software are able to precisely produce a 3D digital model and measure arch width, arch length, and space analysis. The 3D digital model is therefore valid for use in various clinical applications.
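A minimal sketch of the Bland-Altman computation referred to above (bias and 95% limits of agreement); it illustrates the method generically rather than reproducing the study's analysis:

```python
import numpy as np

def bland_altman(digital, manual):
    """Bland-Altman agreement: the bias is the mean difference between the
    two methods, and the 95% limits of agreement are bias +/- 1.96 SD of
    the differences."""
    digital, manual = np.asarray(digital, float), np.asarray(manual, float)
    diff = digital - manual
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```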
Local random configuration-tree theory for string repetition and facilitated dynamics of glass
NASA Astrophysics Data System (ADS)
Lam, Chi-Hang
2018-02-01
We derive a microscopic theory of glassy dynamics based on the transport of voids by micro-string motions, each of which involves particles arranged in a line hopping simultaneously displacing one another. Disorder is modeled by a random energy landscape quenched in the configuration space of distinguishable particles, but transient in the physical space as expected for glassy fluids. We study the evolution of local regions with m coupled voids. At a low temperature, energetically accessible local particle configurations can be organized into a random tree with nodes and edges denoting configurations and micro-string propagations respectively. Such trees defined in the configuration space naturally describe systems defined in two- or three-dimensional physical space. A micro-string propagation initiated by a void can facilitate similar motions by other voids via perturbing the random energy landscape, realizing path interactions between voids or equivalently string interactions. We obtain explicit expressions of the particle diffusion coefficient and a particle return probability. Under our approximation, as temperature decreases, random trees of energetically accessible configurations exhibit a sequence of percolation transitions in the configuration space, with local regions containing fewer coupled voids entering the non-percolating immobile phase first. Dynamics is dominated by coupled voids of an optimal group size, which increases as temperature decreases. Comparison with a distinguishable-particle lattice model (DPLM) of glass shows very good quantitative agreements using only two adjustable parameters related to typical energy fluctuations and the interaction range of the micro-strings.
NASA Astrophysics Data System (ADS)
Alterman, B. L.; Klein, K. G.; Verscharen, D.; Stevens, M. L.; Kasper, J. C.
2017-12-01
Long-duration, in situ data sets enable large-scale statistical analysis of free-energy-driven instabilities in the solar wind. The plasma beta and temperature anisotropy plane provides a well-defined parameter space in which a single-fluid plasma's stability can be represented. Because this reduced parameter space can only represent instability thresholds due to the free energy of one ion species - typically the bulk protons - the true impact of instabilities on the solar wind is underestimated. Nyquist's instability criterion allows us to systematically account for other sources of free energy, including beams, drifts, and additional temperature anisotropies. Utilizing over 20 years of Wind Faraday cup and magnetic field observations, we have resolved the bulk parameters for three ion populations: the bulk protons, beam protons, and alpha particles. Applying Nyquist's criterion, we calculate the number of linearly growing modes supported by each spectrum and provide a more nuanced consideration of solar wind stability. Using collisional age measurements, we predict the stability of the solar wind close to the Sun. Accounting for the free energy from the three most common ion populations in the solar wind, our approach provides a more complete characterization of solar wind stability.
Impact of orbit modeling on DORIS station position and Earth rotation estimates
NASA Astrophysics Data System (ADS)
Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav
2014-04-01
The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments, the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in the cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting the SRP scaling parameter or fixing it to pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, results for most of the monitored station parameters in accuracy comparable to the dynamical model that employs precise non-conservative force modeling. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for the x-pole and 12% for the y-pole. The experiments show that adjusting atmospheric drag scaling parameters every 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of the cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data it was, however, not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series, nor its mitigation by fixing the SRP parameters to pre-defined values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaikuad, Apirat, E-mail: apirat.chaikuad@sgc.ox.ac.uk; Knapp, Stefan; Johann Wolfgang Goethe-University, Building N240 Room 3.03, Max-von-Laue-Strasse 9, 60438 Frankfurt am Main
An alternative strategy for PEG sampling is suggested through the use of four newly defined PEG smears to enhance chemical space in reduced screens, with a benefit towards protein crystallization. The quest for an optimal limited set of effective crystallization conditions remains a challenge in macromolecular crystallography, an issue that is complicated by the large number of chemicals which have been deemed to be suitable for promoting crystal growth. The lack of rational approaches towards the selection of successful chemical space and representative combinations has led to significant overlapping conditions, which are currently present in a multitude of commercially available crystallization screens. Here, an alternative approach to the sampling of widely used PEG precipitants is suggested through the use of PEG smears, which are mixtures of different PEGs with a requirement of either neutral or cooperatively positive effects of each component on crystal growth. Four newly defined smears were classified by molecular-weight groups and enabled the preservation of specific properties related to different polymer sizes. These smears not only allowed a wide coverage of the properties of these polymers, but also reduced PEG variables, enabling greater sampling of other parameters such as buffers and additives. The efficiency of the smear-based screens was evaluated on more than 220 diverse recombinant human proteins, which overall revealed a good initial crystallization success rate of nearly 50%. In addition, in several cases successful crystallizations were only obtained using PEG smears, while various commercial screens failed to yield crystals. The defined smears therefore offer an alternative approach towards PEG sampling, which will benefit the design of crystallization screens sampling a wide chemical space of this key precipitant.
Combined Global Navigation Satellite Systems in the Space Service Volume
NASA Technical Reports Server (NTRS)
Force, Dale A.; Miller, James J.
2015-01-01
Besides providing position, navigation, and timing (PNT) services to traditional terrestrial and airborne users, GPS is also being increasingly used as a tool to enable precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis attitude control of Earth orbiting satellites. With additional Global Navigation Satellite System (GNSS) constellations being replenished and coming into service (GLONASS, Beidou, and Galileo), it will become possible to benefit from greater signal availability and robustness by using evolving multi-constellation receivers. The paper, "GPS in the Space Service Volume," presented at the ION GNSS 19th International Technical Meeting in 2006 (Ref. 1), defined the Space Service Volume, and analyzed the performance of GPS out to seventy thousand kilometers. This paper will report a similar analysis of the signal coverage of GPS in the space domain; however, the analyses will also consider signal coverage from each of the additional GNSS constellations noted earlier to specifically demonstrate the expected benefits to be derived from using GPS in conjunction with other foreign systems. The Space Service Volume is formally defined as the volume of space between three thousand kilometers altitude and geosynchronous altitude circa 36,000 km, as compared with the Terrestrial Service Volume between 3,000 km and the surface of the Earth. In the Terrestrial Service Volume, GNSS performance is the same as on or near the Earth's surface due to satellite vehicle availability and geometry similarities. The core GPS system has thereby established signal requirements for the Space Service Volume as part of technical Capability Development Documentation (CDD) that specifies system performance. Besides the technical discussion, we also present diplomatic efforts to extend the GPS Space Service Volume concept to other PNT service providers in an effort to assure that all space users will benefit from the enhanced interoperability of GNSS services in the space domain. A separate paper presented at the conference covers the individual GNSS performance parameters for respective Space Service Volumes.
PDF investigations of turbulent non-premixed jet flames with thin reaction zones
NASA Astrophysics Data System (ADS)
Wang, Haifeng; Pope, Stephen
2012-11-01
PDF (probability density function) modeling studies are carried out for the Sydney piloted jet flames. These Sydney flames feature much thinner reaction zones in mixture fraction space than the well-studied Sandia piloted jet flames. The performance of different turbulent combustion models in the Sydney flames with thin reaction zones has not been examined extensively before, and this work aims at evaluating the capability of the PDF method to represent the thin turbulent flame structures in the Sydney piloted flames. Parametric and sensitivity PDF studies are performed with respect to the different models and model parameters. A global error parameter is defined to quantify the departure of the simulation results from the experimental data, and is used to assess the performance of the different sets of models and model parameters.
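The abstract does not give the exact form of the global error parameter; a common construction for such a scalar misfit, shown here purely as an assumed illustration, is the RMS of normalized discrepancies over all measured quantities:

```python
import numpy as np

def global_error(sim, obs, scale):
    """RMS of normalised discrepancies over all measured quantities.
    sim, obs: dicts mapping quantity name -> arrays on common points;
    scale: dict of normalisation constants (e.g. the peak measured value).
    A plausible stand-in for the paper's global error parameter, not its
    exact definition."""
    terms = [np.mean(((sim[q] - obs[q]) / scale[q]) ** 2) for q in obs]
    return float(np.sqrt(np.mean(terms)))
```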
User interfaces in space science instrumentation
NASA Astrophysics Data System (ADS)
McCalden, Alec John
This thesis examines user interaction with instrumentation in the specific context of space science. It gathers together existing practice in machine interfaces with a look at potential future usage and recommends a new approach to space science projects with the intention of maximising their science return. It first takes a historical perspective on user interfaces and ways of defining and measuring the science return of a space instrument. Choices of research methodology are considered. Implementation details such as the concepts of usability, mental models, affordance and presentation of information are described, and examples of existing interfaces in space science are given. A set of parameters for use in analysing and synthesizing a user interface is derived by using a set of case studies of diverse failures and from previous work. A general space science user analysis is made by looking at typical practice, and an interview plus persona technique is used to group users with interface designs. An examination is made of designs in the field of astronomical instrumentation interfaces, showing the evolution of current concepts and including ideas capable of sustaining progress in the future. The parameters developed earlier are then tested against several established interfaces in the space science context to give a degree of confidence in their use. The concept of a simulator that is used to guide the development of an instrument over the whole lifecycle is described, and the idea is proposed that better instrumentation would result from more efficient use of the resources available. The previous ideas in this thesis are then brought together to describe a proposed new approach to a typical development programme, with an emphasis on user interaction. The conclusion shows that there is significant room for improvement in the science return from space instrumentation by attention to the user interface.
NASA Astrophysics Data System (ADS)
Xu, Z.; Mace, G. G.; Posselt, D. J.
2017-12-01
As we begin to contemplate the next generation of atmospheric observing systems, it will be critically important that we are able to make informed decisions regarding the trade space between scientific capability and the need to keep complexity and cost within definable limits. To explore this trade space as it pertains to understanding key cloud and precipitation processes, we are developing a Markov Chain Monte Carlo (MCMC) algorithm suite that allows us to arbitrarily define the specifications of candidate observing systems and then explore how the uncertainties in key retrieved geophysical parameters respond to that observing system. MCMC algorithms produce a more complete posterior solution space and allow for an objective examination of the information contained in measurements. In our initial implementation, MCMC experiments are performed to retrieve vertical profiles of cloud and precipitation properties from a spectrum of active and passive measurements collected by aircraft during the ACE Radiation Definition Experiments (RADEX). Focusing on shallow cumulus clouds observed during the Integrated Precipitation and Hydrology EXperiment (IPHEX), the observing systems we consider in this study include W- and Ka-band radar reflectivity, path-integrated attenuation at those frequencies, 31 and 94 GHz brightness temperatures, and visible and near-infrared reflectance. By varying the sensitivity and uncertainty of these measurements, we quantify the capacity of various combinations of observations to characterize the physical properties of clouds and precipitation.
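As a concrete illustration of the retrieval machinery described above, here is a minimal random-walk Metropolis sampler, the simplest member of the MCMC family; the study's actual algorithm suite is more elaborate, and the forward model, prior, and step size below are user-supplied assumptions.

```python
import numpy as np

def metropolis_retrieval(forward, y_obs, noise_cov, x0, prior_logpdf,
                         n_iter=20000, step=0.1, seed=0):
    """Random-walk Metropolis sampling of p(x | y) for a retrieval problem.
    forward(x): forward model mapping state x to predicted observations;
    noise_cov: measurement-error covariance.  All names are illustrative."""
    rng = np.random.default_rng(seed)
    icov = np.linalg.inv(noise_cov)
    def log_post(x):
        r = forward(x) - y_obs
        return prior_logpdf(x) - 0.5 * r @ icov @ r
    x = np.array(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_iter):
        xp = x + step * rng.standard_normal(x.size)
        lpp = log_post(xp)
        if np.log(rng.uniform()) < lpp - lp:   # Metropolis accept/reject
            x, lp = xp, lpp
        chain.append(x.copy())
    return np.array(chain)   # histogram of chain approximates the posterior
```

The spread of the resulting posterior samples is exactly the uncertainty measure that can be compared across candidate observing systems.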
Loop quantum cosmology with self-dual variables
NASA Astrophysics Data System (ADS)
Wilson-Ewing, Edward
2015-12-01
Using the complex-valued self-dual connection variables, the loop quantum cosmology of a closed Friedmann space-time coupled to a massless scalar field is studied. It is shown how the reality conditions can be imposed in the quantum theory by choosing a particular inner product for the kinematical Hilbert space. While holonomies of the self-dual Ashtekar connection are not well defined in the kinematical Hilbert space, it is possible to introduce a family of generalized holonomylike operators of which some are well defined; these operators in turn are used in the definition of the Hamiltonian constraint operator where the scalar field can be used as a relational clock. The resulting quantum theory is closely related, although not identical, to standard loop quantum cosmology constructed from the Ashtekar-Barbero variables with a real Immirzi parameter. Effective Friedmann equations are derived which provide a good approximation to the full quantum dynamics for sharply peaked states whose volume remains much larger than the Planck volume, and they show that for these states quantum gravity effects resolve the big-bang and big-crunch singularities and replace them by a nonsingular bounce. Finally, the loop quantization in self-dual variables of a flat Friedmann space-time is recovered in the limit of zero spatial curvature and is identical to the standard loop quantization in terms of the real-valued Ashtekar-Barbero variables.
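For reference, in the flat-FRW limit mentioned at the end of the abstract, the standard effective Friedmann equation of loop quantum cosmology takes the well-known form (the closed-model corrections studied in the paper are not reproduced here):

```latex
% Flat-FRW limit: standard LQC effective Friedmann equation
H^{2} = \left(\frac{\dot a}{a}\right)^{2}
      = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_{c}}\right),
\qquad \rho_{c} \sim \rho_{\text{Planck}},
```

so that H = 0 when ρ reaches ρ_c, which is the nonsingular bounce replacing the big-bang singularity.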
A potential hyperspectral remote sensing imager for water quality measurements
NASA Astrophysics Data System (ADS)
Zur, Yoav; Braun, Ofer; Stavitsky, David; Blasberger, Avigdor
2003-04-01
Utilization of panchromatic and multispectral remote sensing imagery is widespread and has become an established business for commercial suppliers of such imagery, such as ISI and others. Some emerging technologies are being used to generate hyperspectral imagery (HSI) from aircraft as well as other platforms. The commercialization of this technology for remote sensing from space is still questionable and depends upon several parameters, including maturity, cost, market reception and many others. HSI can be used in a variety of applications in agriculture, urban mapping, geology and others. One outstanding potential use of HSI, studied in this paper, is water quality monitoring. Water quality monitoring is becoming a major area of interest in HSI due to the increase in water demand around the globe. The ability to monitor water quality in real time with both spatial and temporal resolution is one of the advantages of remote sensing. This ability is not limited to measurements of oceans and inland water, but can be applied to drinking and irrigation water reservoirs as well. HSI in the UV-VNIR can measure a wide range of constituents that define water quality. Among the constituents that can be measured are the pigment concentrations of various algae, chlorophyll a and c, carotenoids and phycocyanin, thus enabling identification of the algal phyla. Other parameters that can be measured are TSS (total suspended solids), turbidity, BOD (biological oxygen demand) and hydrocarbons. The study specifies the properties of such a space-borne device that follow from the spectral signatures and the absorption bands of the constituents in question. Other parameters considered are the repetition of measurements, the spatial aspects of the sensor and the SNR of the sensor in question.
On-off intermittency and intermingledlike basins in a granular medium.
Schmick, Malte; Goles, Eric; Markus, Mario
2002-12-01
Molecular dynamics simulations of a medium consisting of disks in a periodically tilted box yield two dynamic modes differing considerably in the total potential and kinetic energies of the disks. Depending on parameters, these modes display the following features: (i) hysteresis (coexistence of the two modes in phase space); (ii) intermingledlike basins of attraction (uncertainty exponent indistinguishable from zero); (iii) two-state on-off intermittency; and (iv) bimodal velocity distributions. Bifurcations are defined by a cross-shaped phase diagram.
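The uncertainty exponent quoted in item (ii) is defined by the scaling f(ε) ~ ε^α of the fraction of ε-uncertain initial conditions; α indistinguishable from zero is the signature of intermingled-like basins. A generic estimation sketch follows, with the basin map and initial-condition sampler as problem-specific placeholders:

```python
import numpy as np

def uncertainty_exponent(basin_of, sample_ic, eps_list, n=2000, seed=1):
    """Estimate the uncertainty exponent alpha from f(eps) ~ eps**alpha,
    where f is the fraction of eps-uncertain initial conditions.
    basin_of(x): final-state label of an initial condition x;
    sample_ic(rng, n): draws n random initial conditions.
    Both are placeholders to be supplied by the dynamical model."""
    rng = np.random.default_rng(seed)
    fracs = []
    for eps in eps_list:
        x = sample_ic(rng, n)
        xp = x + eps * rng.choice([-1.0, 1.0], size=x.shape)
        flips = [basin_of(a) != basin_of(b) for a, b in zip(x, xp)]
        fracs.append(max(np.mean(flips), 1.0 / n))   # avoid log(0)
    # alpha is the slope of log f(eps) versus log eps
    return np.polyfit(np.log(eps_list), np.log(fracs), 1)[0]
```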
Ultra-Wideband Radar: Research and Development Considerations
1989-06-05
ballistic missile; IEEE: Institute of Electrical and Electronics Engineers; ISRD: Institutional Supporting Research and Development; LAMPF: Los Alamos Meson Physics Facility ... where c is the propagation velocity of light in free space. The parameter β is the effective bandwidth of the signal, defined by β² = −R″(0)/R(0) (2.2), where R(τ) is the autocorrelation function of the signal ... the dimensions of a traveling-wave antenna are likely to be much greater than the distance light travels in the rise time of the antenna current; 10-100 ps ≈ 3-30 mm
Determination of burning area and port volume in complex burning regions of a solid rocket motor
NASA Technical Reports Server (NTRS)
Kingsbury, J. A.
1977-01-01
An analysis of the geometry of burning in both star-cylindrical port interface regions and regions of partially inhibited slots is presented. Some characteristic parameters are defined and illustrated. Methods are proposed for calculating burning areas which depend functionally only on the total distance burned. According to this method, several points are defined where abrupt changes in geometry occur, and these are tracked throughout the burn. Equations are developed for computing port perimeter and port area at pre-established longitudinal positions. Some common formulas and some newly developed formulas are then used to compute burning surface area and port volume. Specific results are presented for the solid rocket motor committed to the space shuttle project.
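For the simplest burning region, a plain cylindrical port, the dependence on total distance burned reduces to elementary geometry, as in the hedged sketch below; the star-cylinder and slot regions analyzed in the report additionally require tracking the corner points where the geometry changes abruptly.

```python
import numpy as np

def circular_port_geometry(b, r0, R, L):
    """Port perimeter, port area and burning surface area as functions of
    the total distance burned b, for a plain cylindrical port of initial
    radius r0 inside a grain of outer radius R and length L.  A minimal
    stand-in for the report's more complex burning regions."""
    r = min(r0 + b, R)                  # burned-back port radius
    perimeter = 2.0 * np.pi * r
    port_area = np.pi * r ** 2
    burn_area = perimeter * L if r < R else 0.0   # burnout at web = R - r0
    return perimeter, port_area, burn_area
```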
The Scattering Properties of Natural Terrestrial Snows versus Icy Satellite Surfaces
NASA Technical Reports Server (NTRS)
Domingue, Deborah; Hartman, Beth; Verbiscer, Anne
1997-01-01
Our comparisons of the single particle scattering behavior of terrestrial snows and icy satellite regoliths to the laboratory particle scattering measurements of McGuire and Hapke demonstrate that the differences between icy satellite regoliths and their terrestrial counterparts are due to particle structures and textures. Terrestrial snow particle structures define a region in the single particle scattering function parameter space separate from the regions defined by the McGuire and Hapke artificial laboratory particles. The particle structures and textures of the grains composing icy satellite regoliths are not simple or uniform but consist of a variety of particle structure and texture types, some of which may be a combination of the particle types investigated by McGuire and Hapke.
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated by the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating the modal parameters. Many methods are used for parameter identification. Some operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. Furthermore, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding the issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
Changes of Space Debris Orbits After LDR Operation
NASA Astrophysics Data System (ADS)
Wnuk, E.; Golebiewska, J.; Jacquelard, C.; Haag, H.
2013-09-01
A number of technical studies are currently developing concepts for the active removal of space debris to protect space assets from on-orbit collision. For small objects, such concepts include the use of ground-based lasers to remove or reduce the momentum of the objects, thereby lowering their orbit in order to facilitate their decay by re-entry into the Earth's atmosphere. The concept of the Laser Debris Removal (LDR) system is the main subject of the CLEANSPACE project. One of the CLEANSPACE objectives is to define a global architecture (including surveillance, identification and tracking) for an innovative ground-based laser solution which can remove hazardous medium-sized debris around selected space assets. The CLEANSPACE project is realized by a European consortium in the frame of the European Commission Seventh Framework Programme (FP7), Space topic. The use of a sequence of laser operations to remove space debris requires very precise predictions of future space debris orbital positions, at a level even better than 1 meter. Orbit determination, tracking (radar, optical and laser) and orbit prediction have to be performed with much higher accuracy than achieved so far. For that, the applied prediction tools have to take into account all perturbation factors that influence the object's orbit. The expected effect of an LDR operation on the object's trajectory is a lowering of its perigee. To prevent the debris on this new trajectory from colliding with another object, a precise trajectory prediction after the LDR sequence is therefore the main task, allowing also the estimation of re-entry parameters. The LDR laser pulses change the debris object's velocity v. The future orbit and re-entry parameters of the space debris after the LDR engagement can be calculated if the resulting Δv vector is known with sufficient accuracy. The value of the Δv may be estimated from the parameters of the LDR station and from the characteristics of the orbital debris. However, usually due to the poor knowledge of the debris object's size, mass, spin and chemical composition, the value and the direction of the vector Δv cannot be estimated with high accuracy. Therefore, highly precise tracking of the debris will be necessary immediately before the engagement of the LDR and also during this engagement. By extending this tracking and ranging for a few seconds after engagement, the necessary data to evaluate the orbital modification can be produced in the same way as is done for catalogue generation. In our paper we discuss the object's orbit changes due to LDR operation for different locations of the LDR station and different parameters of the laser energy and telescope diameter. We estimate the future orbit and re-entry parameters taking into account the influence of all important perturbation factors on the space debris orbital motion after LDR.
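The mapping from an imparted Δv to the lowered perigee can be illustrated with a two-body sketch for the idealized case of a purely retrograde impulse on a circular orbit (actual LDR engagements impart a Δv of poorly known direction, as the abstract stresses):

```python
import numpy as np

MU = 398600.4418      # km^3 s^-2, Earth's gravitational parameter
R_E = 6378.137        # km, Earth equatorial radius

def perigee_after_retro_burn(alt_km, dv_kms):
    """Perigee altitude after a purely retrograde Delta-v on an initially
    circular orbit -- an idealised stand-in for one LDR engagement."""
    r = R_E + alt_km
    v = np.sqrt(MU / r) - dv_kms        # tangential speed after the impulse
    a = 1.0 / (2.0 / r - v ** 2 / MU)   # vis-viva: new semi-major axis
    e = np.sqrt(max(0.0, 1.0 - (r * v) ** 2 / (MU * a)))  # burn point is apogee
    return a * (1.0 - e) - R_E          # new perigee altitude, km

# e.g. perigee_after_retro_burn(800.0, 0.1) -> roughly 420 km
```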
General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes
NASA Astrophysics Data System (ADS)
Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.
1997-04-01
To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
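The generalization described above replaces the simple Metropolis accept/reject rule with the full Metropolis-Hastings acceptance probability for an arbitrary proposal density q (the standard textbook form, stated here for reference):

```latex
% Metropolis-Hastings acceptance probability for a proposed move x -> x'
\alpha(x \to x') = \min\!\left(1,\;
  \frac{\pi(x' \mid \text{data})\, q(x \mid x')}
       {\pi(x \mid \text{data})\, q(x' \mid x)}\right)
```

The earlier jump strategy is recovered when proposals are drawn from the prior: the prior factors cancel and acceptance depends only on the likelihood ratio.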
Defining process design space for monoclonal antibody cell culture.
Abu-Absi, Susan Fugett; Yang, LiYing; Thompson, Patrick; Jiang, Canping; Kandula, Sunitha; Schilling, Bernhard; Shukla, Abhinav A
2010-08-15
The concept of design space has been taking root as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. During mapping of the process design space, the multidimensional combination of operational variables is studied to quantify the impact on process performance in terms of productivity and product quality. An efficient methodology to map the design space for a monoclonal antibody cell culture process is described. A failure modes and effects analysis (FMEA) was used as the basis for the process characterization exercise. This was followed by an integrated study of the inoculum stage of the process, which includes progressive shake flask and seed bioreactor steps. The operating conditions for the seed bioreactor were studied in an integrated fashion with the production bioreactor using a two-stage design of experiments (DOE) methodology to enable optimization of operating conditions. A two-level Resolution IV design was followed by a central composite design (CCD). These experiments enabled identification of the edge of failure and classification of the operational parameters as non-key, key or critical. In addition, the models generated from the data provide further insight into balancing productivity of the cell culture process with product quality considerations. Finally, process and product-related impurity clearance was evaluated by studies linking the upstream process with downstream purification. Production bioreactor parameters that directly influence antibody charge variants and glycosylation in CHO systems were identified.
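To make the two-stage DOE concrete, the sketch below generates a coded-unit central composite design of the kind that would follow the Resolution IV screening step; the factor count and centre-point number are illustrative, not the study's.

```python
import numpy as np
from itertools import product

def central_composite(k, alpha=None, n_center=4):
    """Coded-unit central composite design for k factors: 2**k factorial
    points, 2k axial (star) points at +/- alpha, and centre replicates.
    alpha defaults to the rotatable choice (2**k)**0.25."""
    if alpha is None:
        alpha = (2.0 ** k) ** 0.25
    factorial = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
    return np.vstack([factorial, axial, np.zeros((n_center, k))])

# central_composite(2): 4 factorial + 4 axial + 4 centre points = 12 runs
```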
NASA Technical Reports Server (NTRS)
Papadopoulos, Michael; Tolson, Robert H.
1993-01-01
The Modal Identification Experiment (MIE) is a proposed experiment to define the dynamic characteristics of Space Station Freedom. Previous studies emphasized free-decay modal identification. The feasibility of using a forced response method, Observer/Kalman Filter Identification (OKID), is addressed. The interest in using OKID is to determine the input mode shape matrix, which can be used for controller design or control-structure interaction analysis, and to investigate whether forced response methods may aid in separating closely spaced modes. A model of the SC-7 configuration of Space Station Freedom was excited using simulated control system thrusters to obtain acceleration output. It is shown that an 'optimum' number of outputs exists for OKID. To recover global mode shapes, a modified method called Global-Local OKID was developed. This study shows that using data from a long forced response followed by free decay leads to the 'best' modal identification. Twelve of the thirteen target modes were identified with this approach.
Primary and secondary electrical space power based on advanced PEM systems
NASA Technical Reports Server (NTRS)
Vanderborgh, N. E.; Hedstrom, J. C.; Stroh, K. R.; Huff, J. R.
1993-01-01
For new space ventures, power continues to be a pacing function for mission planning and experiment endurance. Although electrochemical power is a well demonstrated space power technology, current hardware limitations impact future mission viability. In order to document and augment electrochemical technology, a series of experiments for the National Aeronautics and Space Administration Lewis Research Center (NASA LeRC) is underway at the Los Alamos National Laboratory to define operational parameters of contemporary proton exchange membrane (PEM) hardware operating with hydrogen and oxygen reactants. Because of the high efficiency possible for water electrolysis, this hardware is also considered part of a secondary battery design built around stored reactants - the so-called regenerative fuel cell. An overview of stack testing at Los Alamos and of analyses related to regenerative fuel cell systems is provided in this paper. Finally, this paper describes work looking at innovative concepts that remove complexity from stack hardware with the specific intent of higher system reliability. This new concept offers the potential for unprecedented electrochemical power system energy densities.
Group theoretical formulation of free fall and projectile motion
NASA Astrophysics Data System (ADS)
Düztaş, Koray
2018-07-01
In this work we formulate the group theoretical description of free fall and projectile motion. We show that the kinematic equations for constant acceleration form a one-parameter group acting on a phase space. We define the group elements ϕ_t by their action on the points in the phase space. We also generalize this approach to projectile motion. We evaluate the group orbits regarding their relations to the physical orbits of particles and unphysical solutions. We note that the group theoretical formulation does not apply to more general cases involving a time-dependent acceleration. This method improves our understanding of the constant acceleration problem with its global approach. It is especially beneficial for students who want to pursue a career in theoretical physics.
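The one-parameter group structure is easy to verify numerically: the flow ϕ_t acts on phase-space points (x, v), and composing flows adds their parameters. A minimal check, with g = -9.81 m/s² assumed for free fall:

```python
G = -9.81  # assumed free-fall acceleration, m/s^2

def phi(t, a=G):
    """Group element phi_t: constant-acceleration flow on phase space (x, v)."""
    def act(state):
        x, v = state
        return (x + v * t + 0.5 * a * t * t, v + a * t)
    return act

# One-parameter group law: phi_s composed with phi_t equals phi_{s+t}
s, t, state = 0.7, 1.3, (10.0, 2.0)
lhs = phi(s)(phi(t)(state))
rhs = phi(s + t)(state)
assert all(abs(u - w) < 1e-12 for u, w in zip(lhs, rhs))
# The law fails for a time-dependent a(t), matching the paper's caveat.
```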
Problems experienced and envisioned for dynamical physical systems
NASA Technical Reports Server (NTRS)
Ryan, R. S.
1985-01-01
The use of high performance systems, which is the trend in future space systems, naturally leads to lower margins and a higher sensitivity to parameter variations and, therefore, to more problems in dynamical physical systems. To circumvent dynamic problems in these systems, appropriate design, verification analysis, and tests must be planned and conducted. The basic design goal is to define the problem before it occurs. The primary approach for meeting this goal is a thorough understanding and review of the problems experienced in the past, interpreted in terms of the system under design. This paper reviews many of the dynamic problems experienced in space systems design and operation, categorizes them as to causes, and envisions future program implications, developing recommendations for analysis and test approaches.
An arena for model building in the Cohen-Glashow very special relativity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheikh-Jabbari, M. M., E-mail: jabbari@theory.ipm.ac.i; Tureanu, A., E-mail: anca.tureanu@helsinki.f
2010-02-15
The Cohen-Glashow Very Special Relativity (VSR) algebra is defined as the part of the Lorentz algebra which, upon addition of CP or T invariance, enhances to the full Lorentz group, plus the space-time translations. We show that noncommutative space-time, in particular the noncommutative Moyal plane with light-like noncommutativity, provides a robust mathematical setting for quantum field theories which are VSR invariant, and hence sets the stage for building VSR-invariant particle physics models. In our setting the VSR-invariant theories are specified by a single deformation parameter, the noncommutativity scale Λ_NC. Preliminary analysis with the available data leads to Λ_NC ≥ 1-10 TeV.
Cornering and wear behavior of the Space Shuttle Orbiter main gear tire
NASA Technical Reports Server (NTRS)
Daugherty, Robert H.; Stubbs, Sandy M.
1987-01-01
One of the factors needed to describe the handling characteristics of the Space Shuttle Orbiter during the landing rollout is the response of the vehicle's tires to variations in load and yaw angle. An experimental investigation of the cornering characteristics of the Orbiter main gear tires was conducted at the NASA Langley Research Center Aircraft Landing Dynamics Facility. This investigation complements earlier work done to define the Orbiter nose tire cornering characteristics. In the investigation, the effects of load and yaw angle were evaluated by measuring parameters such as side load and drag load, and by obtaining measurements of aligning torque. Because the tire must operate on an extremely rough runway at the Shuttle Landing Facility at Kennedy Space Center (KSC), tests were also conducted to describe the wear behavior of the tire under various conditions on a simulated KSC runway surface. Mathematical models for both the cornering and the wear behavior are discussed.
Urzhumtseva, Ludmila; Lunina, Natalia; Fokine, Andrei; Samama, Jean Pierre; Lunin, Vladimir Y; Urzhumtsev, Alexandre
2004-09-01
The connectivity-based phasing method has been demonstrated to be capable of finding molecular packing and envelopes even for difficult cases of structure determination, as well as of identifying, in favorable cases, secondary-structure elements of protein molecules in the crystal. This method uses a single set of structure factor magnitudes and general topological features of a crystallographic image of the macromolecule under study. This information is expressed through a number of parameters. Most of these parameters are easy to estimate, and the results of phasing are practically independent of these parameters when they are chosen within reasonable limits. By contrast, the correct choice for such parameters as the expected number of connected regions in the unit cell is sometimes ambiguous. To study these dependencies, numerous tests were performed with simulated data, experimental data and mixed data sets, where several reflections missed in the experiment were completed by computed data. This paper demonstrates that the procedure is able to control this choice automatically and helps in difficult cases to identify the correct number of molecules in the asymmetric unit. In addition, the procedure behaves abnormally if the space group is defined incorrectly and therefore may distinguish between the rotation and screw axes even when high-resolution data are not available.
NASA Astrophysics Data System (ADS)
Kamimoto, Shingo; Kawai, Takahiro; Koike, Tatsuya
2016-12-01
Inspired by the symbol calculus of linear differential operators of infinite order applied to the Borel transformed WKB solutions of simple-pole type equation [Kamimoto et al. (RIMS Kôkyûroku Bessatsu B 52:127-146, 2014)], which is summarized in Section 1, we introduce in Section 2 the space of simple resurgent functions depending on a parameter with an infra-exponential type growth order, and then we define the assigning operator A which acts on the space and produces resurgent functions with essential singularities. In Section 3, we apply the operator A to the Borel transforms of the Voros coefficient and its exponentiation for the Whittaker equation with a large parameter so that we may find the Borel transforms of the Voros coefficient and its exponentiation for the boosted Whittaker equation with a large parameter. In Section 4, we use these results to find the explicit form of the alien derivatives of the Borel transformed WKB solutions of the boosted Whittaker equation with a large parameter. The results in this paper manifest the importance of resurgent functions with essential singularities in developing the exact WKB analysis, the WKB analysis based on the resurgent function theory. It is also worth emphasizing that the concrete form of essential singularities we encounter is expressed by the linear differential operators of infinite order.
Major Design Drivers for LEO Space Surveillance in Europe and Solution Concepts
NASA Astrophysics Data System (ADS)
Krag, Holger; Flohrer, Tim; Klinkrad, Heiner
Europe is preparing for the development of an autonomous system for space situational awareness. One important segment of this new system will be dedicated to surveillance and tracking of space objects in Earth orbits. First concept and capability analysis studies have led to a draft system proposal. This proposal foresees, in a first deployment step, a ground-based system consisting of radar sensors and a network of optical telescopes. These sensors will be designed to have the capability of building up and maintaining a catalogue of space objects. A number of related services will be provided, including collision avoidance and the prediction of uncontrolled re-entry events. Currently, the user requirements are being consolidated, defining the different services and the related accuracy and timeliness of the derived products. In this consolidation process, parameters like the lower diameter limit above which catalogue coverage is to be achieved, the degree of population coverage in various orbital regions and the accuracy of the orbit data maintained in the catalogue are important design drivers for the selection of the number and location of the sensors, and for the definition of the required sensor performance. Further, the required minimum time for the detection of a manoeuvre, a newly launched object or a fragmentation event significantly determines the required surveillance performance. In the requirement consolidation process, the performance to be specified has to be based on a careful analysis which takes into account accuracy constraints of the services to be provided, the technical feasibility, complexity and costs. User requirements can thus not be defined without understanding the consequences they would pose on the system design. This paper will outline the design definition process for the surveillance and tracking segment of the European space situational awareness system. The paper will focus on the low-Earth orbits (LEO). It will present the core user requirements and the definition of the derived services. The desired performance parameters will be explained together with their rationale and justification. This will be followed by an identification of the resulting major design drivers. The influence of these drivers on the system design will be analysed, including limiting object diameter, population coverage, orbit maintenance accuracy, and the minimum time to detect events like manoeuvres or breakups. The underlying simulation and verification concept will be explained. Finally, a first compilation of performance parameters for the surveillance and tracking segment will be presented and discussed.
Space-Based Reconfigurable Software Defined Radio Test Bed Aboard International Space Station
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Lux, James P.
2014-01-01
The National Aeronautics and Space Administration (NASA) recently launched a new software defined radio research test bed to the International Space Station. The test bed, sponsored by the Space Communications and Navigation (SCaN) Office within NASA, is referred to as the SCaN Testbed. The SCaN Testbed is a highly capable communications system, composed of three software defined radios, integrated into a flight system, and mounted to the truss of the International Space Station. Software defined radios offer the future promise of in-flight reconfigurability, autonomy, and eventually cognitive operation. The adoption of software defined radios offers space missions a new way to develop and operate space transceivers for communications and navigation. Reconfigurable or software defined radios with communications and navigation functions implemented in software or VHDL (VHSIC Hardware Description Language) provide the capability to change the functionality of the radio during development or after launch. The ability to change the operating characteristics of a radio through software once deployed to space offers the flexibility to adapt to new science opportunities, recover from anomalies within the science payload or communication system, and potentially reduce development cost and risk by adapting generic space platforms to meet specific mission requirements. The software defined radios on the SCaN Testbed are each compliant to NASA's Space Telecommunications Radio System (STRS) Architecture. The STRS Architecture is an open, non-proprietary architecture that defines interfaces for the connections between radio components. It provides an operating environment to abstract the communication waveform application from the underlying platform-specific hardware such as digital-to-analog converters, analog-to-digital converters, oscillators, RF attenuators, automatic gain control circuits, FPGAs, general-purpose processors, etc., and the interconnections among different radio components.
Navigating the Decision Space: Shared Medical Decision Making as Distributed Cognition.
Lippa, Katherine D; Feufel, Markus A; Robinson, F Eric; Shalin, Valerie L
2017-06-01
Despite increasing prominence, little is known about the cognitive processes underlying shared decision making. To investigate these processes, we conceptualize shared decision making as a form of distributed cognition. We introduce a Decision Space Model to identify physical and social influences on decision making. Using field observations and interviews, we demonstrate that patients and physicians in both acute and chronic care consider these influences when identifying the need for a decision, searching for decision parameters, and making actionable decisions. Based on the distribution of access to information and actions, we then identify four related patterns: physician-dominated; physician-defined, patient-made; patient-defined, physician-made; and patient-dominated decisions. Results suggest that (a) decision making is necessarily distributed between physicians and patients, (b) differential access to information and action over time requires participants to transform a distributed task into a shared decision, and (c) adverse outcomes may result from failures to integrate physician and patient reasoning. Our analysis unifies disparate findings in the medical decision-making literature and has implications for improving care and medical training.
Swarm formation control utilizing elliptical surfaces and limiting functions.
Barnes, Laura E; Fields, Mary Anne; Valavanis, Kimon P
2009-12-01
In this paper, we present a strategy for organizing swarms of unmanned vehicles into a formation by utilizing artificial potential fields that were generated from normal and sigmoid functions. These functions construct the surface on which swarm members travel, controlling the overall swarm geometry and the individual member spacing. Nonlinear limiting functions are defined to provide tighter swarm control by modifying and adjusting a set of control variables that force the swarm to behave according to set constraints, formation, and member spacing. The artificial potential functions and limiting functions are combined to control swarm formation, orientation, and swarm movement as a whole. Parameters are chosen based on desired formation and user-defined constraints. This approach is computationally efficient and scales well to different swarm sizes, to heterogeneous systems, and to both centralized and decentralized swarm models. Simulation results are presented for a swarm of 10 and 40 robots that follow circle, ellipse, and wedge formations. Experimental results are included to demonstrate the applicability of the approach on a swarm of four custom-built unmanned ground vehicles (UGVs).
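A hedged sketch of the idea: a potential surface assembled from a normal (Gaussian) term and a sigmoid term whose minimum traces a ring, with a nonlinear limiting function capping each member's commanded speed. The specific functions and gains are illustrative, not those of the paper.

```python
import numpy as np

def surface(p, center, r_ring, k=4.0):
    """Potential built from a Gaussian core and a sigmoid ring: the minima
    lie near radius r_ring around `center`, shaping the swarm into an
    annular formation.  Illustrative, not the authors' exact functions."""
    d = np.linalg.norm(p - center)
    gauss = np.exp(-d ** 2 / (0.5 * r_ring ** 2))        # repels from centre
    sigmoid = 1.0 / (1.0 + np.exp(-k * (d - r_ring)))    # rises outside ring
    return gauss + sigmoid

def velocity_command(p, center, r_ring, v_max=1.0, h=1e-4):
    """Steepest-descent step with a nonlinear limiting function (speed cap)."""
    g = np.array([(surface(p + h * e, center, r_ring) -
                   surface(p - h * e, center, r_ring)) / (2 * h)
                  for e in np.eye(2)])
    v = -g
    n = np.linalg.norm(v)
    return v if n <= v_max else v * (v_max / n)   # limiting function
```

Each swarm member descends the same surface, so formation geometry and inter-member spacing are controlled by the surface parameters rather than by per-robot trajectories.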
The Planetary and Space Simulation Facilities at DLR Cologne
NASA Astrophysics Data System (ADS)
Rabbow, Elke; Parpart, André; Reitz, Günther
2016-06-01
Astrobiology strives to increase our knowledge of the origin, evolution and distribution of life, on Earth and beyond. In the past centuries, life has been found on Earth in environments with extreme conditions that were expected to be uninhabitable. Scientific investigations of the underlying metabolic mechanisms and strategies that lead to the high adaptability of these extremophile organisms increase our understanding of the evolution and distribution of life on Earth. Life as we know it depends on the availability of liquid water. Exposure of organisms to defined and complex extreme environmental conditions, in particular those that limit water availability, allows the investigation of survival mechanisms as well as an estimation of the possibility of the distribution to, and survivability on, other celestial bodies for selected organisms. Space missions in low Earth orbit (LEO) give experiments access to complex environmental conditions not available on Earth, but studies on the molecular and cellular mechanisms of adaptation to these hostile conditions and on the limits of life cannot be performed exclusively in space experiments. Experimental space is limited and allows only the investigation of selected endpoints. An additional intensive ground-based program is required, with easy-to-access facilities capable of simulating space and planetary environments, in particular with focus on temperature, pressure, atmospheric composition and short-wavelength solar ultraviolet radiation (UV). DLR Cologne operates a number of Planetary and Space Simulation facilities (PSI) where microorganisms from extreme terrestrial environments, or known for their high adaptability, are exposed for mechanistic studies. Space or planetary parameters are simulated individually or in combination in temperature-controlled vacuum facilities equipped with a variety of defined and calibrated irradiation sources. The PSI support basic research and have been used recurrently in pre-flight test programs for several astrobiological space missions. Parallel experiments on the ground provided essential complementary data supporting the scientific interpretation of the data received from the space missions.
Autoresonant excitation of Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Batalov, S. V.; Shagalov, A. G.; Friedland, L.
2018-03-01
Controlling the state of a Bose-Einstein condensate driven by a chirped frequency perturbation in a one-dimensional anharmonic trapping potential is discussed. By identifying four characteristic time scales in this chirped-driven problem, three dimensionless parameters P1,2,3 are defined, describing the driving strength, the anharmonicity of the trapping potential, and the strength of the particle interaction, respectively. As the driving frequency passes the linear resonance in the problem, and depending on the location in the P1,2,3 parameter space, the system may exhibit two very different evolutions, i.e., the quantum energy ladder climbing (LC) and the classical autoresonance (AR). These regimes are analyzed both in theory and simulations with the emphasis on the effect of the interaction parameter P3. In particular, the transition thresholds on the driving parameter P1 and their width in P1 in both the AR and LC regimes are discussed. Different driving protocols are also illustrated, showing efficient control of excitation and deexcitation of the condensate.
Error Modeling of Multibaseline Optical Truss: Part 1: Modeling of System Level Performance
NASA Technical Reports Server (NTRS)
Milman, Mark H.; Korechoff, R. E.; Zhang, L. D.
2004-01-01
Global astrometry is the measurement of stellar positions and motions. These are typically characterized by five parameters: two position parameters, two proper motion parameters, and parallax. The Space Interferometry Mission (SIM) will derive these parameters for a grid of approximately 1300 stars covering the celestial sphere to an accuracy of approximately 4 μas, representing a two-orders-of-magnitude improvement over the most precise current star catalogues. Narrow-angle astrometry will be performed to a 1 μas accuracy. A wealth of scientific information will be obtained from these accurate measurements, encompassing many aspects of both galactic and extragalactic science. SIM will be subject to a number of instrument errors that can potentially degrade performance. Many of these errors are systematic in that they are relatively static and repeatable with respect to the time frame and direction of the observation. This paper and its companion define the modeling of the contributing factors to these errors and the analysis of how they impact SIM's ability to perform astrometric science.
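The five-parameter astrometric model underlying such grid solutions is linear in the parameters once per-epoch parallax factors are given, so a minimal fit is ordinary least squares. The sketch below assumes small angles and externally supplied parallax factors:

```python
import numpy as np

def fit_five_parameters(t, ra_obs, dec_obs, f_ra, f_dec):
    """Linear least squares for the five astrometric parameters
    (ra0, dec0, pm_ra, pm_dec, parallax) from positions at epochs t
    (in years).  f_ra, f_dec are per-epoch parallax factors, assumed
    supplied by a solar-system ephemeris.  Small-angle model:
        ra(t)  = ra0  + pm_ra  * t + parallax * f_ra(t)
        dec(t) = dec0 + pm_dec * t + parallax * f_dec(t)"""
    one, zero = np.ones_like(t), np.zeros_like(t)
    A = np.vstack([np.column_stack([one, zero, t, zero, f_ra]),
                   np.column_stack([zero, one, zero, t, f_dec])])
    y = np.concatenate([ra_obs, dec_obs])
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return params
```

Propagating the instrument errors discussed in the paper through this linear system is what determines how systematic effects map into the five recovered parameters.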
Acoustic interference and recognition space within a complex assemblage of dendrobatid frogs
Amézquita, Adolfo; Flechas, Sandra Victoria; Lima, Albertina Pimentel; Gasser, Herbert; Hödl, Walter
2011-01-01
In species-rich assemblages of acoustically communicating animals, heterospecific sounds may constrain not only the evolution of signal traits but also the much less-studied signal-processing mechanisms that define the recognition space of a signal. To test the hypothesis that the recognition space is optimally designed, i.e., that it is narrower toward the species that represent the higher potential for acoustic interference, we studied an acoustic assemblage of 10 diurnally active frog species. We characterized their calls, estimated pairwise correlations in calling activity, and, to model the recognition spaces of five species, conducted playback experiments with 577 synthetic signals on 531 males. Acoustic co-occurrence was not related to multivariate distance in call parameters, suggesting a minor role for spectral or temporal segregation among species uttering similar calls. In most cases, the recognition space overlapped but was greater than the signal space, indicating that signal-processing traits do not act as strictly matched filters against sounds other than homospecific calls. Indeed, the range of the recognition space was strongly predicted by the acoustic distance to neighboring species in the signal space. Thus, our data provide compelling evidence of a role of heterospecific calls in evolutionarily shaping the frogs' recognition space within a complex acoustic assemblage without obvious concomitant effects on the signal. PMID:21969562
Construction of CASCI-type wave functions for very large active spaces.
Boguslawski, Katharina; Marti, Konrad H; Reiher, Markus
2011-06-14
We present a procedure to construct a configuration-interaction expansion containing arbitrary excitations from an underlying full-configuration-interaction-type wave function defined for a very large active space. Our procedure is based on the density-matrix renormalization group (DMRG) algorithm that provides the necessary information in terms of the eigenstates of the reduced density matrices to calculate the coefficient of any basis state in the many-particle Hilbert space. Since the dimension of the Hilbert space scales binomially with the size of the active space, a sophisticated Monte Carlo sampling routine is employed. This sampling algorithm can also construct such configuration-interaction-type wave functions from any other type of tensor network states. The configuration-interaction information obtained serves several purposes. It yields a qualitatively correct description of the molecule's electronic structure, it allows us to analyze DMRG wave functions converged for the same molecular system but with different parameter sets (e.g., different numbers of active-system (block) states), and it can be considered a balanced reference for the application of a subsequent standard multi-reference configuration-interaction method.
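The quantity the sampling routine evaluates is the CI coefficient of an individual occupation-number basis state, which for a matrix-product (DMRG-type) wave function is just a product of site matrices. A minimal sketch, assuming dense numpy arrays and open boundary dimensions of 1:

```python
import numpy as np

def ci_coefficient(mps, occ):
    """Coefficient of one occupation-number basis state in a matrix-product
    state.  mps: list of arrays, one per site, each with shape
    (n_local_states, D_left, D_right) and boundary dimensions D = 1;
    occ: tuple of local-state indices, one per site."""
    m = mps[0][occ[0]]
    for site, n in zip(mps[1:], occ[1:]):
        m = m @ site[n]
    return m.item()          # (1, 1) matrix -> scalar CI coefficient
```

Because the number of basis states grows binomially with the active space, such coefficients are evaluated only for Monte Carlo-sampled determinants rather than by enumeration, which is the role of the sampling routine described above.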
NASA Technical Reports Server (NTRS)
Gasiewski, A. J.; Skofronick, G. M.
1992-01-01
Progress by investigators at Georgia Tech in defining the requirements for large space antennas for passive microwave Earth imaging systems is reviewed. In order to determine the antenna constraints (e.g., the aperture size, illumination taper, and gain uncertainty limits) necessary for the retrieval of geophysical parameters (e.g., rain rate) with adequate spatial resolution and accuracy, a numerical simulation of the passive microwave observation and retrieval process is being developed. Due to the small spatial scale of precipitation and the nonlinear relationships between precipitation parameters (e.g., rain rate, water density profile) and observed brightness temperatures, the retrieval of precipitation parameters is of primary interest in the simulation studies. Major components of the simulation are described, as well as progress and plans for completion. The overall goal of providing quantitative assessments of the accuracy of candidate geosynchronous and low-Earth orbiting imaging systems will continue under a separate grant.
Flight Operations Analysis Tool
NASA Technical Reports Server (NTRS)
Easter, Robert; Herrell, Linda; Pomphrey, Richard; Chase, James; Wertz Chen, Julie; Smith, Jeffrey; Carter, Rebecca
2006-01-01
Flight Operations Analysis Tool (FLOAT) is a computer program that partly automates the process of assessing the benefits of planning spacecraft missions to incorporate various combinations of launch vehicles and payloads. Designed primarily for use by an experienced systems engineer, FLOAT makes it possible to perform a preliminary analysis of trade-offs and costs of a proposed mission in days, whereas previously, such an analysis typically lasted months. FLOAT surveys a variety of prior missions by querying data from authoritative NASA sources pertaining to 20 to 30 mission and interface parameters that define space missions. FLOAT provides automated, flexible means for comparing the parameters to determine compatibility or the lack thereof among payloads, spacecraft, and launch vehicles, and for displaying the results of such comparisons. Sparseness, typical of the data available for analysis, does not confound this software. FLOAT effects an iterative process that identifies modifications of parameters that could render compatible an otherwise incompatible mission set.
Reducing Design Risk Using Robust Design Methods: A Dual Response Surface Approach
NASA Technical Reports Server (NTRS)
Unal, Resit; Yeniay, Ozgur; Lepsch, Roger A. (Technical Monitor)
2003-01-01
Space transportation system conceptual design is a multidisciplinary process containing a considerable element of risk. Risk here is defined as the variability in the estimated (output) performance characteristic of interest resulting from the uncertainties in the values of several disciplinary design and/or operational parameters. Uncertainties from one discipline (and/or subsystem) may propagate to another through linking parameters, and the final system output may have a significant accumulation of risk. This variability can result in significant deviations from the expected performance. Therefore, an estimate of variability (which is called design risk in this study) together with the expected performance characteristic value (e.g., mean empty weight) is necessary for multidisciplinary optimization for a robust design. Robust design in this study is defined as a solution that minimizes variability subject to a constraint on mean performance characteristics. Even though multidisciplinary design optimization has gained wide attention and application, the treatment of uncertainties to quantify and analyze design risk has received little attention. This research effort explores the dual response surface approach to quantify variability (risk) in critical performance characteristics (such as weight) during conceptual design.
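A minimal sketch of the dual response surface idea: fit separate quadratic surfaces to the replicate mean and standard deviation of the performance characteristic, then minimize the predicted variability subject to a constraint on the predicted mean. The factor count, bounds, and quadratic form are illustrative assumptions, not the study's setup.

```python
import numpy as np
from scipy.optimize import minimize

def quad_features(x):
    """Full quadratic response-surface features in two factors."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def fit_dual_surfaces(X, replicates):
    """Fit separate surfaces to the per-point mean and standard deviation
    of replicated runs (rows of `replicates` correspond to rows of X)."""
    F = np.array([quad_features(x) for x in X])
    b_mean, *_ = np.linalg.lstsq(F, replicates.mean(axis=1), rcond=None)
    b_sd, *_ = np.linalg.lstsq(F, replicates.std(axis=1, ddof=1), rcond=None)
    return b_mean, b_sd

def robust_optimum(b_mean, b_sd, mean_target):
    """Minimise predicted variability subject to a mean-performance target."""
    cons = {"type": "eq",
            "fun": lambda x: quad_features(x) @ b_mean - mean_target}
    res = minimize(lambda x: quad_features(x) @ b_sd, x0=np.zeros(2),
                   bounds=[(-2.0, 2.0)] * 2, constraints=[cons])
    return res.x
```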
Spatial Rack Drives Pitch Configurations: Essence and Content
NASA Astrophysics Data System (ADS)
Abadjieva, Emilia; Abadjiev, Valentin; Naganawa, Akihiro
2018-03-01
The practical realization of all types of mechanical motion converters is preceded by the task of their kinematic synthesis, through which the optimal values of the constant geometrical parameters of the chosen structure of the mechanical system are determined. The result guarantees the predefined kinematic characteristics of the synthesized transmission and, above all, the law of motion transformation. The kinematic synthesis of mechanical transmissions is based on adequate mathematical modelling of the process of motion transformation and of the object realizing this transformation. The basic primitives of the mathematical models for synthesis upon a pitch contact point are geometric and kinematic pitch configurations. Their dimensions and mutual position in space are the input parameters for the design and elaboration of the synthesized mechanical device. The study presented here is a brief review of the theory of pitch configurations, an independent scientific branch of the theory of spatial gearing (the theory of hyperboloid gears). On this basis, the essence and content of the corresponding primitives, applicable to the synthesis of spatial rack drives, are defined.
Evolution simulation of lightning discharge based on a magnetohydrodynamics method
NASA Astrophysics Data System (ADS)
Fusheng, WANG; Xiangteng, MA; Han, CHEN; Yao, ZHANG
2018-07-01
In order to address the load problem of aircraft lightning strikes, the evolution of the lightning channel is simulated for the key physical parameters of aircraft lightning current component C. A numerical model of the discharge channel is established based on magnetohydrodynamics (MHD) and implemented in the FLUENT software. With the aid of user-defined functions and a user-defined scalar, the Lorentz force, Joule heating and the material parameters of the air thermal plasma are added. A three-dimensional lightning arc channel is simulated and the evolution of the arc in space is obtained. The results show that the temperature distribution of the lightning channel is symmetrical and that the hottest region occurs at the center of the channel. The distributions of potential and current density are obtained, showing that the difference in electric potential, and hence in energy, between two points tends to make the arc channel develop downwards. The arc channel expands on the anode surface due to stagnation of the thermal plasma, and impinges on the copper plate when it comes into contact with the anode.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
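Each constrained subspace is sampled by an ordinary one-dimensional Wang-Landau walk over energy. For concreteness, the sketch below runs such a walk for a small periodic Ising chain; the flatness threshold and modification-factor schedule are the conventional choices, not necessarily the authors'.

```python
import numpy as np

def wang_landau_ising_chain(L=12, lnf_final=1e-6, flatness=0.8, seed=0):
    """Wang-Landau estimate of ln g(E) for a periodic 1-D Ising chain,
    E = -sum_i s_i s_{i+1}.  Allowed energies are -L, -L+4, ..., L."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=L)
    energy = -int(np.sum(s * np.roll(s, 1)))
    levels = list(range(-L, L + 1, 4))
    idx = {E: i for i, E in enumerate(levels)}
    lng = np.zeros(len(levels))      # running estimate of ln g(E)
    hist = np.zeros(len(levels))     # visit histogram for the flatness test
    lnf = 1.0                        # ln of the modification factor f
    while lnf > lnf_final:
        for _ in range(10000):
            i = int(rng.integers(L))
            dE = 2 * int(s[i]) * int(s[i - 1] + s[(i + 1) % L])
            new = energy + dE
            # accept with probability min(1, g(E_old)/g(E_new))
            if np.log(rng.uniform()) < lng[idx[energy]] - lng[idx[new]]:
                s[i] = -s[i]
                energy = new
            lng[idx[energy]] += lnf
            hist[idx[energy]] += 1
        visited = hist[hist > 0]
        if visited.min() > flatness * visited.mean():
            hist[:] = 0.0
            lnf /= 2.0               # f -> sqrt(f)
    return levels, lng - lng.max()
```

In the macroscopically constrained scheme, one such walk is run per fixed set of order-parameter values, and the resulting pieces are assembled into the multivariable density of states.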
Behavioral Health and Performance Laboratory Standard Measures (BHP-SM)
NASA Technical Reports Server (NTRS)
Williams, Thomas J.; Cromwell, Ronita
2017-01-01
The Spaceflight Standard Measures is a NASA Johnson Space Center Human Research Program (HRP) project that proposes to collect a set of core measurements, representative of many of the human spaceflight risks, from astronauts before, during and after long-duration International Space Station (ISS) missions. The term "standard measures" is defined as a set of core measurements, including physiological, biochemical, psychosocial, cognitive, and functional, that are reliable, valid, and accepted in terrestrial science, are associated with a specific and measurable outcome known to occur as a consequence of spaceflight, and that will be collected in a standardized fashion from all (or most) crewmembers. While such measures might be used to define standards of health and performance or readiness for flight, the prime intent in their collection is to allow longitudinal analysis of multiple parameters in order to answer a variety of operational, occupational, and research-based questions. These questions are generally at a high level, and the approach for this project is to populate the standard measures database with the smallest set of data necessary to indicate whether further detailed research is required. Also included as standard measures are parameters that are not outcome-based in and of themselves, but provide ancillary information that supports interpretation of the outcome measures, e.g., nutritional assessment, vehicle environmental parameters, crew debriefs, etc. The project's main aim is to ensure that an optimized minimal set of measures is consistently captured from all ISS crewmembers until the end of Station in order to characterize the human in space. This allows the HRP to identify, establish, and evaluate a common set of measures for use in spaceflight and analog research to develop baselines, systematically characterize risk likelihood and consequences, and assess the effectiveness of countermeasures for behavioral health and performance risk factors. By standardizing the battery of measures on all crewmembers, the HRP can evaluate countermeasures that work for one physiological system and ensure another system is not negatively affected. These measures, named "Standard Measures," will serve as a data repository and be available to other studies under data sharing agreements.
Quantitative analysis of eyes and other optical systems in linear optics.
Harris, William F; Evans, Tanya; van Gool, Radboud D
2017-05-01
To show that 14-dimensional spaces of augmented point P and angle Q characteristics, matrices obtained from the ray transference, are suitable for quantitative analysis, although only the latter define an inner-product space and only on it can one define distances and angles. The paper examines the nature of the spaces and their relationships to other spaces, including symmetric dioptric power space. The paper makes use of linear optics, a three-dimensional generalization of Gaussian optics. Symmetric 2 × 2 dioptric power matrices F define a three-dimensional inner-product space which provides a sound basis for quantitative analysis (calculation of changes, arithmetic means, etc.) of refractive errors and thin systems. For general systems, the optical character is defined by the dimensionally heterogeneous 4 × 4 symplectic matrix S, the transference, or, if explicit allowance is made for heterocentricity, the 5 × 5 augmented symplectic matrix T. Ordinary quantitative analysis cannot be performed on them because matrices of neither of these types constitute vector spaces. Suitable transformations have been proposed, but because the transforms are dimensionally heterogeneous the spaces are not naturally inner-product spaces. The paper obtains 14-dimensional spaces of augmented point P and angle Q characteristics. The 14-dimensional space defined by the augmented angle characteristics Q is dimensionally homogeneous and an inner-product space. A 10-dimensional subspace of the space of augmented point characteristics P is also an inner-product space. The spaces are suitable for quantitative analysis of the optical character of eyes and many other systems. Distances and angles can be defined in the inner-product spaces. The optical systems may have multiple separated astigmatic and decentred refracting elements.
NASA Astrophysics Data System (ADS)
Murray, J. R.
2017-12-01
Earth surface displacements measured at Global Navigation Satellite System (GNSS) sites record crustal deformation due, for example, to slip on faults underground. A primary objective in designing geodetic networks to study crustal deformation is to maximize the ability to recover parameters of interest like fault slip. Given Green's functions (GFs) relating observed displacement to motion on buried dislocations representing a fault, one can use various methods to estimate spatially variable slip. However, assumptions embodied in the GFs, e.g., use of a simplified elastic structure, introduce spatially correlated model prediction errors (MPE) not reflected in measurement uncertainties (Duputel et al., 2014). In theory, selection algorithms should incorporate inter-site correlations to identify measurement locations that give unique information. I assess the impact of MPE on site selection by expanding existing methods (Klein et al., 2017; Reeves and Zhe, 1999) to incorporate this effect. Reeves and Zhe's algorithm sequentially adds or removes a predetermined number of data according to a criterion that minimizes the sum of squared errors (SSE) on parameter estimates. Adapting this method to GNSS network design, Klein et al. select new sites that maximize model resolution, using trade-off curves to determine when additional resolution gain is small. Their analysis uses uncorrelated data errors and GFs for a uniform elastic half space. I compare results using GFs for spatially variable strike slip on a discretized dislocation in a uniform elastic half space, a layered elastic half space, and a layered half space with inclusion of MPE. I define an objective criterion to terminate the algorithm once the next site removal would increase SSE more than the expected incremental SSE increase if all sites had equal impact. Using a grid of candidate sites with 8 km spacing, I find the relative value of the selected sites (defined by the percent increase in SSE that further removal of each site would cause) is more uniform when MPE is included. However, the number and distribution of selected sites depends primarily on site location relative to the fault. For this test case, inclusion of MPE has minimal practical impact; I will investigate whether these findings hold for more densely spaced candidate grids and dipping faults.
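A minimal sketch of the sequential removal criterion described above, assuming a linear forward model with Green's function matrix G and a data covariance C_d whose off-diagonal terms carry the correlated model prediction errors; the SSE proxy, array shapes, and function names are illustrative, not Murray's code:

```python
import numpy as np

def greedy_site_removal(G, C_d, n_remove):
    """Sequentially drop the observation whose removal least increases the
    sum of squared errors (SSE) on the estimated slip parameters.

    G   : (n_obs, n_slip) Green's functions relating slip to displacement
    C_d : (n_obs, n_obs) data covariance; correlated model prediction
          error (MPE) enters through the off-diagonal terms
    """
    def sse(idx):
        Gs = G[idx]
        W = np.linalg.inv(C_d[np.ix_(idx, idx)])
        # trace of the posterior parameter covariance as the SSE measure
        return np.trace(np.linalg.inv(Gs.T @ W @ Gs))

    active = list(range(G.shape[0]))
    removed = []
    for _ in range(n_remove):
        base = sse(active)
        cost = {i: sse([j for j in active if j != i]) - base for i in active}
        drop = min(cost, key=cost.get)   # cheapest removal
        active.remove(drop)
        removed.append(drop)
    return active, removed
```

The abstract's stopping rule would replace the fixed n_remove with a test on whether the next removal's SSE increase exceeds the expected increase under equal site impact.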
Renormalization group evolution of the universal theories EFT
Wells, James D.; Zhang, Zhengkang
2016-06-21
The conventional oblique parameters analyses of precision electroweak data can be consistently cast in the modern framework of the Standard Model effective field theory (SMEFT) when restrictions are imposed on the SMEFT parameter space so that it describes universal theories. However, the usefulness of such analyses is challenged by the fact that universal theories at the scale of new physics, where they are matched onto the SMEFT, can flow to nonuniversal theories with renormalization group (RG) evolution down to the electroweak scale, where precision observables are measured. The departure from universal theories at the electroweak scale is not arbitrary, but dictated by the universal parameters at the matching scale. To define oblique parameters, and more generally universal parameters at the electroweak scale that directly map onto observables, additional prescriptions are needed for the treatment of RG-induced nonuniversal effects. Finally, we perform an RG analysis of the SMEFT description of universal theories, and discuss the impact of RG on simplified, universal-theories-motivated approaches to fitting precision electroweak and Higgs data.
Fidelity deviation in quantum teleportation
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir
2018-04-01
We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states, and the fidelity deviation is their standard deviation, which can be read as a measure of fluctuation, or universality. In the analysis, we find the condition that optimizes both measures under a noisy quantum channel; we here consider the so-called Werner channel. To characterize our results, we introduce a two-dimensional space defined by the aforementioned measures, in which the performance of the teleportation is represented as a point parametrized by the channel noise. Through further analysis, we specify some regions drawn for different channel conditions, establishing the connection to the dissimilar contributions of the entanglement to the teleportation and to the Bell inequality violation.
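The input-independence of the fidelity for this channel can be checked numerically. The sketch below relies on the known result that teleportation through a Werner resource acts as a depolarizing map on the input state; the Monte Carlo scheme and the value of the noise parameter p are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)

def random_pure_qubit():
    # Haar-random pure state from a normalized complex Gaussian vector
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def teleport_fidelity_stats(p, n=5000):
    """Teleportation through a Werner resource acts as the depolarizing map
    rho -> p*rho + (1-p)*I/2; sample fidelities over random input states."""
    fids = []
    for _ in range(n):
        psi = random_pure_qubit()
        rho_out = p * np.outer(psi, psi.conj()) + (1 - p) * I2 / 2
        fids.append(np.real(psi.conj() @ rho_out @ psi))
    f = np.array(fids)
    return f.mean(), f.std()   # average fidelity and fidelity deviation

print(teleport_fidelity_stats(0.8))
# ~(0.9, 0.0): fidelity (1+p)/2 is the same for every input state here
```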
Suppression of friction by mechanical vibrations.
Capozza, Rosario; Vanossi, Andrea; Vezzani, Alessandro; Zapperi, Stefano
2009-08-21
Mechanical vibrations are known to affect frictional sliding and the associated stick-slip patterns, sometimes causing a drastic reduction of the friction force. This issue is relevant for applications in nanotribology and for understanding earthquake triggering by small dynamic perturbations. We study the dynamics of repulsive particles confined between a horizontally driven top plate and a vertically oscillating bottom plate. Our numerical results show a suppression of the highly dissipative stick-slip regime in a well-defined range of frequencies that depends on the vibration amplitude, the normal applied load, the system inertia and the damping constant. We propose a theoretical explanation of the numerical results and derive a phase diagram indicating the region of parameter space where friction is suppressed. Our results allow us to define better strategies for the mechanical control of friction.
Horobin, R W; Stockert, J C; Rashid-Doubell, F
2015-05-01
We discuss a variety of biological targets including generic biomembranes and the membranes of the endoplasmic reticulum, endosomes/lysosomes, Golgi body, mitochondria (outer and inner membranes) and the plasma membrane of usual fluidity. For each target, we discuss the access of probes to the target membrane, probe uptake into the membrane and the mechanism of selectivity of the probe uptake. A statement of the QSAR decision rule that describes the required physicochemical features of probes enabling selective staining is also provided, followed by comments on exceptions and limits. Examples of probes typically used to demonstrate each target structure are noted, and decision rule tabulations are provided for probes that localize in particular targets; these tabulations show the distribution of probes in the conceptual space defined by the relevant structure parameters ("parameter space"). Some general implications and limitations of the QSAR models for probe targeting are discussed, including the roles of certain cell and protocol factors that play significant roles in lipid staining. A case example illustrates the predictive ability of QSAR models. Key limiting values of the head group hydrophilicity parameter associated with membrane-probe interactions are discussed in an appendix.
Order parameter aided efficient phase space exploration under extreme conditions
NASA Astrophysics Data System (ADS)
Samanta, Amit
Physical processes in nature exhibit disparate time scales: the time scales associated with processes such as phase transitions, various manifestations of creep, and sintering of particles are often much longer than the time the system spends in the metastable states. The transition times associated with such events are also orders of magnitude longer than the time scales associated with the vibration of atoms. Thus, an atomistic simulation of such transition events is a challenging task, and efficient exploration of configuration space and identification of metastable structures in condensed phase systems is correspondingly difficult. In this talk I will illustrate how we can define a set of coarse-grained variables or order parameters and use these to systematically and efficiently steer a system containing thousands or millions of atoms over different parts of the configuration space. This order parameter aided sampling can be used to identify metastable states and transition pathways and to understand the mechanistic details of complex transition processes. I will illustrate how this sampling scheme can be used to study phase transition pathways and phase boundaries in prototypical materials, like SiO2 and Cu, under high-pressure conditions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Optimum design of structures subject to general periodic loads
NASA Technical Reports Server (NTRS)
Reiss, Robert; Qian, B.
1989-01-01
A simplified version of Icerman's problem regarding the design of structures subject to a single harmonic load is discussed. The nature of the restrictive conditions that must be placed on the design space in order to ensure an analytic optimum is discussed in detail. Icerman's problem is then extended to include multiple forcing functions with different driving frequencies, and the conditions that must now be placed upon the design space to ensure an analytic optimum are again discussed. An important finding is that all solutions to the optimality condition (analytic stationary designs) are local optima, but the global optimum may well be non-analytic. The more general problem of distributing the fixed mass of a linear elastic structure subject to general periodic loads in order to minimize some measure of the steady-state deflection is also considered. This response is explicitly expressed in terms of Green's functional and the abstract operators defining the structure. The optimality criterion is derived by differentiating the response with respect to the design parameters. The theory is applicable to finite element as well as distributed parameter models.
NASA Technical Reports Server (NTRS)
Tatnall, Christopher R.
1998-01-01
The counter-rotating pair of wake vortices shed by flying aircraft can pose a threat to ensuing aircraft, particularly on landing approach. To allow adequate time for the vortices to disperse or decay, landing aircraft are required to maintain certain fixed separation distances. The Aircraft Vortex Spacing System (AVOSS), under development at NASA, is designed to prescribe safe aircraft landing approach separation distances appropriate to the ambient weather conditions. A key component of the AVOSS is a ground sensor to ensure safety by making wake observations that verify predicted behavior. This task requires knowledge of a flowfield strength metric which gauges the severity of disturbance an encountering aircraft could potentially experience. Several proposed strength metric concepts are defined and evaluated for various combinations of metric parameters and sensor line-of-sight elevation angles. Representative populations of generating and following aircraft types are selected, and their associated wake flowfields are modeled using various wake geometry definitions. Strength metric candidates are then rated and compared based on the correspondence of their computed values to associated aircraft response values, using basic statistical analyses.
NASA Astrophysics Data System (ADS)
Daya Sagar, B. S.
2005-01-01
Spatio-temporal patterns of small water bodies (SWBs) under the influence of temporally varied stream flow discharge are simulated in discrete space by employing geomorphologically realistic expansion and contraction transformations. Cascades of expansion-contraction are systematically performed by synchronizing them with stream flow discharge simulated via the logistic map. Templates with definite characteristic information are defined from stream flow discharge pattern as the basis to model the spatio-temporal organization of randomly situated surface water bodies of various sizes and shapes. These spatio-temporal patterns under varied parameters (λs) controlling stream flow discharge patterns are characterized by estimating their fractal dimensions. At various λs, nonlinear control parameters, we show the union of boundaries of water bodies that traverse the water body and non-water body spaces as geomorphic attractors. The computed fractal dimensions of these attractors are 1.58, 1.53, 1.78, 1.76, 1.84, and 1.90, respectively, at λs of 1, 2, 3, 3.46, 3.57, and 3.99. These values are in line with general visual observations.
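The stream flow discharge that drives the expansion-contraction cascades follows the logistic map; a brief sketch at the control parameter values quoted above (the initial condition and series length are arbitrary illustrative choices):

```python
import numpy as np

def logistic_discharge(lam, x0=0.4, n=200):
    """Stream flow discharge pattern simulated via the logistic map
    x_{t+1} = lam * x_t * (1 - x_t), with lam in (0, 4]."""
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = lam * x[t] * (1 - x[t])
    return x

# the lambda values used in the abstract span fixed-point, periodic,
# and chaotic discharge regimes
for lam in (1.0, 2.0, 3.0, 3.46, 3.57, 3.99):
    print(lam, logistic_discharge(lam)[-5:])
```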
Precision calculations for h → WW/ZZ → 4 fermions in the Two-Higgs-Doublet Model with Prophecy4f
NASA Astrophysics Data System (ADS)
Altenkamp, Lukas; Dittmaier, Stefan; Rzehak, Heidi
2018-03-01
We have calculated the next-to-leading-order electroweak and QCD corrections to the decay processes h → WW/ZZ → 4 fermions of the light CP-even Higgs boson h in various types of Two-Higgs-Doublet Models (Types I and II, "lepton-specific" and "flipped" models). The input parameters are defined in four different renormalization schemes, where parameters that are not directly accessible by experiments are defined in the \overline{MS} scheme. Numerical results are presented for the corrections to partial decay widths for various benchmark scenarios previously motivated in the literature, where we investigate in detail the dependence on the \overline{MS} renormalization scale and on the choice of renormalization scheme. We find that it is crucial to treat these issues precisely in parameter analyses, since parameter conversions between different schemes can involve sizeable or large corrections, especially in scenarios that are close to experimental exclusion limits or theoretical bounds. It even turns out that some renormalization schemes are not applicable in specific regions of parameter space. Our investigation of differential distributions shows that corrections beyond the Standard Model are mostly constant offsets induced by the mixing between the light and heavy CP-even Higgs bosons, so that differential analyses of h → 4f decay observables do not help to identify Two-Higgs-Doublet Models. Moreover, the decay widths do not significantly depend on the specific type of those models. The calculations are implemented in the public Monte Carlo generator Prophecy4f and are ready for application.
Cantrell, Keri B; Martin, Jerry H
2012-02-01
The concept of a designer biochar that targets the improvement of a specific soil property imposes the need for production processes to generate biochars with both high consistency and quality. These important production parameters can be affected by variations in process temperature that must be taken into account when controlling the pyrolysis of agricultural residues such as manures and other feedstocks. A novel stochastic state-space temperature regulator was developed to accurately match biochar batch production to a defined temperature input schedule. This was accomplished by describing the system's state-space with five temperature variables: four directly measured and one change in temperature. Relationships were derived between the observed state and the desired, controlled state. When testing the unit at two different temperatures, the actual pyrolytic temperature was within 3 °C of the control with no overshoot. This state-space regulator simultaneously controlled the indirect heat source and sample temperature by employing difficult-to-measure variables, such as temperature stability, in the description of the pyrolysis system's state-space. These attributes make a state-space controller an optimum control scheme for the production of a predictable, repeatable designer biochar. Published 2011 by John Wiley & Sons, Ltd.
Health Issues and Space Weather
NASA Astrophysics Data System (ADS)
Crosby, N.
2009-04-01
The possibility that solar activity and variations in the Earth's magnetic field may affect human health has been debated for many decades but is still a "scientific topic" in its infancy. Learning whether, and if so how much, the Earth's space weather can influence the daily health of people will be of practical importance. Knowing whether human genetics includes regulating factors that take into account fluctuations of the Earth's magnetic field and solar disturbances will also benefit future interplanetary space travelers. Because the atmospheres of other planets, as well as their interaction with the space environment, differ from ours, one may ask whether we are equipped with the genetics necessary to cope with this variability. The goal of this presentation is to define what is meant by space weather as a health risk and to identify the long-term socio-economic effects that such health risks would have on society. Identifying the physical links between space weather sources and different effects on human health, as well as the parameters (direct and indirect) to be monitored, will make such a cross-disciplinary study invaluable for scientists and medical doctors, as well as for engineers.
Ensemble Kalman filter inference of spatially-varying Manning's n coefficients in the coastal ocean
NASA Astrophysics Data System (ADS)
Siripatana, Adil; Mayo, Talea; Knio, Omar; Dawson, Clint; Maître, Olivier Le; Hoteit, Ibrahim
2018-07-01
Ensemble Kalman filtering (EnKF) is an established framework for large-scale state estimation problems. EnKFs can also be used for state-parameter estimation, using the so-called "Joint-EnKF" approach. The idea is simply to augment the state vector with the parameters to be estimated and assign invariant dynamics for the time evolution of the parameters. In this contribution, we investigate the efficiency of the Joint-EnKF for estimating spatially-varying Manning's n coefficients used to define the bottom roughness in the Shallow Water Equations (SWEs) of a coastal ocean model. Observing System Simulation Experiments (OSSEs) are conducted using the ADvanced CIRCulation (ADCIRC) model, which solves a modified form of the Shallow Water Equations. A deterministic EnKF, the Singular Evolutive Interpolated Kalman (SEIK) filter, is used to estimate a vector of Manning's n coefficients defined at the model nodal points by assimilating synthetic water elevation data. It is found that with a reasonable ensemble size (O(10)), the filter's estimate converges to the reference Manning's field. To enhance performance, we further reduced the dimension of the parameter search space through a Karhunen-Loève (KL) expansion. We also iterated on the filter update step to better account for the nonlinearity of the parameter estimation problem. We study the sensitivity of the system to the ensemble size, localization scale, dimension of retained KL modes, and number of iterations. The performance of the proposed framework in terms of estimation accuracy suggests that a well-tuned Joint-EnKF provides a promising robust approach to infer spatially varying seabed roughness parameters in the context of coastal ocean modeling.
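A minimal sketch of the Joint-EnKF state augmentation, using a stochastic (perturbed-observation) EnKF update for simplicity rather than the deterministic SEIK filter used in the study; all array names and shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def joint_enkf_update(X, P, H, y, R):
    """One stochastic-EnKF analysis step on the augmented ensemble.

    X : (n_state, N) ensemble of model states (e.g. water elevations)
    P : (n_param, N) ensemble of parameters (e.g. Manning's n per node);
        parameters keep invariant dynamics between updates
    H : (n_obs, n_state) observation operator; y : observations; R : obs cov
    """
    Z = np.vstack([X, P])                        # augmented state
    A = Z - Z.mean(axis=1, keepdims=True)        # ensemble anomalies
    HZ = H @ X
    HA = HZ - HZ.mean(axis=1, keepdims=True)
    N = Z.shape[1]
    S = HA @ HA.T / (N - 1) + R                  # innovation covariance
    K = (A @ HA.T / (N - 1)) @ np.linalg.inv(S)  # Kalman gain
    # perturbed observations give each member its own innovation
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    Za = Z + K @ (Y - HZ)
    return Za[:X.shape[0]], Za[X.shape[0]:]      # updated states, parameters
```

In the study's setup the parameter block would additionally be expressed in a truncated KL basis, shrinking the search space before the update.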
Self-quartic interaction for a scalar field in an extended DFR noncommutative space-time
NASA Astrophysics Data System (ADS)
Abreu, Everton M. C.; Neves, M. J.
2014-07-01
The framework of Doplicher-Fredenhagen-Roberts (DFR) for a noncommutative (NC) space-time is considered as an alternative approach to study the NC space-time of the early Universe. In this formalism, the NC constant parameter, θ, is promoted to a coordinate of the space-time, and consequently we can describe a field theory in a space-time with extra dimensions. We will see that there is a canonical momentum associated with this new coordinate, through which the effects of new physics can emerge in the propagation of the fields along the extra dimensions. The Fourier space of this framework is automatically extended by the addition of the new momentum components. The main concept that we would like to emphasize from the outset is that the formalism demonstrated here will not be constructed by introducing a NC parameter into the system, as usual; it is generated naturally from an already NC space. We will review that when the components of the new momentum are zero, the (extended) DFR approach reduces to the usual (canonical) NC case, in which θ is an antisymmetric constant matrix. In this work we study a scalar field action with self-quartic interaction ϕ⁴⋆ defined in the DFR NC space-time. We obtain the Feynman rules in Fourier space for the scalar propagator and vertex of the model. With these rules we are able to build the radiative corrections to one-loop order for the model propagator. The consequences of the NC scale, as well as the propagation of the field in extra dimensions, are analyzed in the ultraviolet divergences scenario. We investigate the possibility that this conjugate momentum kμν has the property of healing the combination of IR/UV divergences that emerges in this new NC space-time quantum field theory.
Performance Analysis of Sensor Systems for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Choi, Eun-Jung; Cho, Sungki; Jo, Jung Hyun; Park, Jang-Hyun; Chung, Taejin; Park, Jaewoo; Jeon, Hocheol; Yun, Ami; Lee, Yonghui
2017-12-01
With increased human activity in space, the risk of re-entry and collision between space objects is constantly increasing. Hence, the need for space situational awareness (SSA) programs has been acknowledged by many experienced space agencies. Optical and radar sensors, which enable the surveillance and tracking of space objects, are the most important technical components of SSA systems. In particular, combinations of radar systems and optical sensor networks play an outstanding role in SSA programs. At present, Korea operates the optical wide field patrol network (OWL-Net), its only optical system for tracking space objects. However, due to their dependence on weather conditions and observation time, it is not reasonable to use optical systems alone for SSA initiatives, as they have limited operational availability. Therefore, strategies for developing radar systems should be considered for an efficient SSA system using currently available technology. The purpose of this paper is to analyze the performance of a radar system in detecting and tracking space objects. For the radar system investigated, the minimum sensitivity is defined as detection of a 1 m² radar cross section (RCS) target at an altitude of 2,000 km, with operating frequencies in the L, S, C, X or Ku band. The results of a power budget analysis showed that the maximum detection range of 2,000 km, which covers the low Earth orbit (LEO) environment, can be achieved with a transmission power of 900 kW, transmit and receive antenna gains of 40 dB and 43 dB, respectively, a pulse width of 2 ms, and a signal processing gain of 13.3 dB, at a frequency of 1.3 GHz. We defined the key parameters of the radar following a performance analysis of the system. This research can thus provide guidelines for the conceptual design of radar systems for national SSA initiatives.
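The power budget can be reproduced approximately with the standard monostatic radar range equation. In the sketch below, the transmit power, antenna gains, pulse width, processing gain, and frequency are the values quoted in the abstract, while the system noise temperature, losses, and required SNR are assumptions the abstract does not state; the computed range will therefore differ from the quoted 2,000 km unless those assumptions are tuned:

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant [J/K]

def max_detection_range(Pt, Gt_dB, Gr_dB, f_Hz, rcs, tau, Gp_dB,
                        T_sys=500.0, loss_dB=3.0, snr_min_dB=11.0):
    """Monostatic radar range equation solved for the maximum range [m].

    Pt [W], gains [dB], frequency [Hz], RCS [m^2], pulse width tau [s],
    signal-processing gain [dB]. T_sys, losses, and required SNR are
    assumed values, not taken from the paper.
    """
    lam = 3e8 / f_Hz
    B = 1.0 / tau                       # matched-filter bandwidth approximation
    lin = lambda dB: 10 ** (dB / 10)
    num = Pt * lin(Gt_dB) * lin(Gr_dB) * lam**2 * rcs * lin(Gp_dB)
    den = (4 * np.pi) ** 3 * k_B * T_sys * B * lin(loss_dB) * lin(snr_min_dB)
    return (num / den) ** 0.25

# parameters quoted in the abstract: 900 kW, 40/43 dB, 1.3 GHz, 2 ms, 13.3 dB
print(max_detection_range(900e3, 40, 43, 1.3e9, 1.0, 2e-3, 13.3) / 1e3, "km")
```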
NASA Astrophysics Data System (ADS)
Parsons, Mark; Grindrod, Peter
2012-06-01
We introduce a model for a pair of nonlinear evolving networks, defined over a common set of vertices, subject to edgewise competition. Each network may grow new edges spontaneously or through triad closure. Both networks inhibit the other's growth and encourage the other's demise. These nonlinear stochastic competition equations yield to a mean field analysis resulting in a nonlinear deterministic system. There may be multiple equilibria; and bifurcations of different types are shown to occur within a reduced parameter space. This situation models competitive communication networks such as BlackBerry Messenger displacing SMS; or instant messaging displacing emails.
AFE ion mass spectrometer design study
NASA Technical Reports Server (NTRS)
Wright, Willie
1989-01-01
This final technical report covers the activities engaged in by the University of Texas at Dallas, Center for Space Sciences, in conjunction with the NASA Langley Research Center, Systems Engineering Division, in design studies directed towards defining a suitable ion mass spectrometer to determine the plasma parameters around the Aeroassisted Flight Experiment vehicle during passage through the Earth's upper atmosphere. Additional studies relate to the use of a Langmuir probe to measure windward ion/electron concentrations and temperatures. Selected instrument inlet subsystems were tested in the NASA Ames Arc-Jet Facility.
NASA Technical Reports Server (NTRS)
Wahba, G.
1982-01-01
Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.
Advanced Numerical Methods for Simulating Nonlinear Multirate Lumped Parameter Models
1991-05-01
Nonlinear wave particle interaction in the Earth's foreshock
NASA Technical Reports Server (NTRS)
Mazelle, C.; LeQueau, D.; Meziane, K.; Lin, R. P.; Parks, G.; Reme, H.; Sanderson, T.; Lepping, R. P.
1997-01-01
The possibility that ion beams could provide a free energy source for driving an ion/ion instability responsible for the ULF wave occurrence is investigated. First, the wave dispersion relation is solved with the observed parameters. Second, it is shown that the ring-like distributions could be produced by a coherent nonlinear wave-particle interaction, which tends to trap the ions into narrow cells in velocity space centered around a well-defined pitch angle, directly related to the saturation wave amplitude in the analytical theory. The theoretical predictions are compared with the observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai Ronggen; Cao Liming; Pang Dawei
Recently Gibbons et al. in [G. W. Gibbons et al., Class. Quant. Grav. 22, 1503 (2005)] defined a set of conserved quantities for Kerr-AdS black holes with the maximal number of rotation parameters in arbitrary dimension. This set of conserved quantities is defined with respect to a frame which is nonrotating at infinity. On the other hand, there is another set of conserved quantities for Kerr-AdS black holes, defined by Hawking et al. in [Hawking et al., Phys. Rev. D 59, 064005 (1999)], which is measured relative to a frame rotating at infinity. Gibbons et al. explicitly showed that the quantities defined by them satisfy the first law of black hole thermodynamics, while those defined by Hawking et al. do not obey the first law. In this paper we discuss the thermodynamics of CFTs dual to the Kerr-AdS black holes by mapping the bulk thermodynamic quantities to the boundary of the AdS space. We find that the thermodynamic quantities of the dual CFTs satisfy the first law of thermodynamics and the Cardy-Verlinde formula only when these quantities result from the set of bulk quantities given by Hawking et al. We discuss the implications of our results.
NASA Technical Reports Server (NTRS)
Metcalf, David
1995-01-01
Multimedia Information eXchange (MIX) is a multimedia information system that accommodates multiple data types and provides consistency across platforms. Information from all over the world can be accessed quickly and efficiently with the Internet-based system. I-NET's MIX uses the World Wide Web and Mosaic graphical user interface. Mosaic is available on all platforms used at I-NET's Kennedy Space Center (KSC) facilities. Key information system design concepts and benefits are reviewed. The MIX system also defines specific configuration and helper application parameters to ensure consistent operations across the entire organization. Guidelines and procedures for other areas of importance in information systems design are also addressed. Areas include: code of ethics, content, copyright, security, system administration, and support.
Definition of technology development missions for early space stations: Large space structures
NASA Technical Reports Server (NTRS)
1983-01-01
The testbed role of an early (1990-95) manned space station in large space structures technology development is defined and conceptual designs for large space structures development missions to be conducted at the space station are developed. Emphasis is placed on defining requirements and benefits of development testing on a space station in concert with ground and shuttle tests.
Voxel inversion of airborne electromagnetic data
NASA Astrophysics Data System (ADS)
Auken, E.; Fiandaca, G.; Kirkegaard, C.; Vest Christiansen, A.
2013-12-01
Inversion of electromagnetic data usually refers to a model space linked to the actual observation points, and for airborne surveys the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid that is not correlated to the geophysical model space. This means that incorporating the geophysical data into the geological and/or hydrological modelling grids involves a spatial relocation of the models, which in itself is a subtle process where valuable information is easily lost. The integration of prior information, e.g. from boreholes, is also difficult when the observation points do not coincide with the position of the prior information, as is the joint inversion of airborne and ground-based surveys. We developed a geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which then allows geological/hydrogeological models to be informed directly, prior information to be incorporated more easily, and different data types to be integrated straightforwardly in joint inversion. The new voxel model space defines the soil properties (like resistivity) on a set of nodes, and the distribution of the properties is computed everywhere by means of an interpolation function f (e.g. inverse distance or kriging). The position of the nodes is fixed during the inversion and is chosen to sample the soil taking into account topography and inversion resolution. Given this definition of the voxel model space, both 1D and 2D/3D forward responses can be computed. The 1D forward responses are computed as follows: A) a 1D model subdivision, in terms of model thicknesses and direction of the "virtual" horizontal stratification, is defined for each 1D data set; for EM soundings the "virtual" horizontal stratification is set up parallel to the topography at the sounding position; B) the "virtual" 1D models are constructed by interpolating the soil properties at the midpoint of each "virtual" layer. For 2D/3D forward responses the algorithm operates similarly, simply filling the 2D/3D meshes of the forward responses by computing the interpolation values in the centres of the mesh cells. This definition of the voxel model space allows the geophysical information to be incorporated straightforwardly into geological and/or hydrological models, simply by using a voxel (hydro)geological grid to define the geophysical model space. This also simplifies the propagation of the uncertainty of geophysical parameters into the (hydro)geological models. Furthermore, prior information from boreholes, like resistivity logs, can be applied directly to the voxel model space, even if the borehole positions do not coincide with the actual observation points; the prior information is constrained to the model parameters through the interpolation function at the borehole locations. The presented algorithm is a further development of the AarhusInv program package developed at Aarhus University (formerly em1dinv), which manages both large-scale AEM surveys and ground-based data. This work has been carried out as part of the HyGEM project, supported by the Danish Council of Strategic Research under grant number DSF 11-116763.
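A minimal sketch of the interpolation step that links the fixed voxel nodes to the forward-response meshes, using the inverse-distance choice for the interpolation function f mentioned above; the array shapes and the power parameter are illustrative:

```python
import numpy as np

def idw(nodes, values, query, power=2.0, eps=1e-12):
    """Inverse-distance weighting: fill forward-model meshes from fixed nodes.

    nodes  : (n, 3) voxel node coordinates
    values : (n,) node properties, e.g. log-resistivity
    query  : (m, 3) evaluation points, e.g. midpoints of the 'virtual'
             1D layers or centres of 2D/3D mesh cells
    """
    d = np.linalg.norm(query[:, None, :] - nodes[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # weights fall off with distance
    return (w @ values) / w.sum(axis=1)   # weighted average per query point
```

The same function evaluated at borehole locations is what ties resistivity-log priors to the node parameters during inversion.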
Examining a Thermodynamic Order Parameter of Protein Folding.
Chong, Song-Ho; Ham, Sihyun
2018-05-08
Dimensionality reduction with a suitable choice of order parameters or reaction coordinates is commonly used for analyzing high-dimensional time-series data generated by atomistic biomolecular simulations. So far, geometric order parameters, such as the root mean square deviation, the fraction of native amino acid contacts, and collective coordinates that best characterize rare or large conformational transitions, have prevailed in protein folding studies. Here, we show that the solvent-averaged effective energy, which is a thermodynamic quantity but unambiguously defined for individual protein conformations, serves as a good order parameter of protein folding. This is illustrated through application to a folding-unfolding simulation trajectory of the villin headpiece subdomain. We rationalize the suitability of the effective energy as an order parameter by the funneledness of the underlying protein free energy landscape. We also demonstrate that an improved conformational space discretization is achieved by incorporating the effective energy. The most distinctive feature of this thermodynamic order parameter is that it points to near-native folded structures even when knowledge of the native structure is lacking, and the use of the effective energy will also find applications in combination with methods of protein structure prediction.
Parameter Estimation as a Problem in Statistical Thermodynamics.
Earle, Keith A; Schneider, David J
2011-03-14
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and that the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
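A small sketch of this thermodynamic reading of parameter fitting, assuming a least-squares misfit chi² evaluated on a grid of fit parameters; treating chi²/2 as an energy gives a partition function, a distribution over parameter space, and a covariance that can play the role of the geometry mentioned above (the grid-based normalization is an illustrative simplification, not the paper's formalism):

```python
import numpy as np

def boltzmann_fit(chi2, thetas):
    """Boltzmann-weighted view of parameter fitting.

    chi2   : (m,) misfit values evaluated on the grid
    thetas : (m, k) grid of k fit parameters (generalized coordinates)
    """
    E = 0.5 * (chi2 - chi2.min())      # 'energy', shifted for stability
    w = np.exp(-E)
    Z = w.sum()                        # partition function over the grid
    p = w / Z                          # distribution on parameter space
    mean = p @ thetas                  # maximum-entropy point estimate
    dev = thetas - mean
    cov = (p[:, None] * dev).T @ dev   # metric-like covariance on thetas
    return mean, cov
```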
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Kacpura, Thomas J.; Johnson, Sandra K.; Lux, James P.
2010-01-01
NASA is developing an experimental flight payload (referred to as the Space Communication and Navigation (SCAN) Test Bed) to investigate software defined radio (SDR), networking, and navigation technologies operationally in the space environment. The payload consists of three software defined radios, each compliant with NASA's Space Telecommunications Radio System Architecture, a common software interface description standard for software defined radios. The software defined radios are new technology developments underway by NASA and industry partners. Planned for launch in early 2012, the payload will be externally mounted to the International Space Station truss and will conduct experiments representative of future mission capability.
Estimation and Optimization of the Parameters Preserving the Lustre of the Fabrics
NASA Astrophysics Data System (ADS)
Prodanova, Krasimira
2009-11-01
The paper discusses the optimization of the duration of the damp-heating process of a steam iron press machine while preserving the lustre of the fabrics. In order to obtain high-quality damp-heating processing, it is necessary to monitor parameters such as temperature, dampness, and pressure during the process. The purpose of the present paper is to construct a mathematical model that adequately describes the technological process using multivariate data analysis. It was established that the full factorial design of type 2³ is not adequate. The research then proceeded with a central rotatable design of experiment. The obtained model adequately describes the technological process of damp-heating treatment in the defined factor space. The present investigation is helpful for technological improvement and modernization in sewing companies.
Disordered λφ⁴ + ρφ⁶ Landau-Ginzburg model
NASA Astrophysics Data System (ADS)
Diaz, R. Acosta; Svaiter, N. F.; Krein, G.; Zarro, C. A. D.
2018-03-01
We discuss a disordered λφ⁴ + ρφ⁶ Landau-Ginzburg model defined in a d-dimensional space. First we adopt the standard procedure of averaging the disorder-dependent free energy of the model. The dominant contribution to this quantity is represented by a series of the replica partition functions of the system. Next, using the replica-symmetry ansatz in the saddle-point equations, we prove that the average free energy represents a system with multiple ground states with different order parameters. For low temperatures we show the presence of metastable equilibrium states for some replica fields for a range of values of the physical parameters. Finally, going beyond the mean-field approximation, the one-loop renormalization of this model is performed in the leading-order replica partition function.
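The disorder average invoked above is the standard replica construction; schematically,

```latex
\overline{F} = -\frac{1}{\beta}\,\overline{\ln Z}, \qquad
\overline{\ln Z} = \lim_{n \to 0} \frac{\overline{Z^{n}} - 1}{n},
```

so the dominant contribution to the averaged free energy is organized as a series in the replica partition functions \overline{Z^n}, here evaluated under the replica-symmetry ansatz.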
Dilution jet configurations in a reverse flow combustor. M.S. Thesis Final Report
NASA Technical Reports Server (NTRS)
Zizelman, J.
1985-01-01
Results of measurements of both temperature and velocity fields within a reverse flow combustor are presented. Flow within the combustor is acted upon by perpendicularly injected cooling jets introduced at three different locations along the inner and outer walls of the combustor. Each experiment is typified by a group of parameters: density ratio, momentum ratio, spacing ratio, and confinement parameter. Measurements of both temperature and velocity are presented in terms of normalized profiles at azimuthal positions through the turn section of the combustion chamber. Jet trajectories defined by minimum temperature and maximum velocity give a qualitative indication of the location of the jet within the cross flow. Results of a model from a previous temperature study are presented in some of the plots of data from this work.
Positive signs in massive gravity
NASA Astrophysics Data System (ADS)
Cheung, Clifford; Remmen, Grant N.
2016-04-01
We derive new constraints on massive gravity from unitarity and analyticity of scattering amplitudes. Our results apply to a general effective theory defined by Einstein gravity plus the leading soft diffeomorphism-breaking corrections. We calculate scattering amplitudes for all combinations of tensor, vector, and scalar polarizations. The high-energy behavior of these amplitudes prescribes a specific choice of couplings that ameliorates the ultraviolet cutoff, in agreement with existing literature. We then derive consistency conditions from analytic dispersion relations, which dictate positivity of certain combinations of parameters appearing in the forward scattering amplitudes. These constraints exclude all but a small island in the parameter space of ghost-free massive gravity. While the theory of the "Galileon" scalar mode alone is known to be inconsistent with positivity constraints, this is remedied in the full massive gravity theory.
Modular space station Phase B extension preliminary performance specification. Volume 2: Project
NASA Technical Reports Server (NTRS)
1971-01-01
The four systems of the modular space station project are described, and the interfaces between this project and the shuttle project, the tracking and data relay satellite project, and an arbitrarily defined experiment project are defined. The experiment project was synthesized from internal experiments, detached research and application modules, and attached research and application modules to derive a set of interface requirements which will support multiple combinations of these elements expected during the modular space station mission. The modular space station project element defines a 6-man orbital program capable of growth to a 12-man orbital program capability. The modular space station project element specification defines the modular space station system, the premission operations support system, the mission operations support system, and the cargo module system and their interfaces.
NASA Astrophysics Data System (ADS)
Kuznetsova, Maria
The Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) was established at the dawn of the new millennium as a long-term flexible solution to the problem of transitioning progress in space environment modeling to operational space weather forecasting. CCMC hosts an expanding collection of state-of-the-art space weather models developed by the international space science community. Over the years the CCMC has acquired unique experience in preparing complex models and model chains for operational environments and in developing and maintaining custom displays and powerful web-based systems and tools ready to be used by researchers, space weather service providers, and decision makers. In support of the space weather needs of NASA users, CCMC is developing highly tailored applications and services that target specific orbits or locations in space and is partnering with NASA mission specialists on linking CCMC space environment modeling with impacts on biological and technological systems in space. Confidence assessment of model predictions is an essential element of space environment modeling. CCMC facilitates interaction between model owners and users in defining physical parameters and metrics formats relevant to specific applications and leads community efforts to quantify models' ability to simulate and predict space environment events. Interactive on-line model validation systems developed at CCMC make validation a seamless part of the model development cycle. The talk will showcase innovative solutions for space weather research, validation, anomaly analysis, and forecasting and will review ongoing community-wide model validation initiatives enabled by CCMC applications.
A design space exploration for control of Critical Quality Attributes of mAb.
Bhatia, Hemlata; Read, Erik; Agarabi, Cyrus; Brorson, Kurt; Lute, Scott; Yoon, Seongkyu
2016-10-15
A unique "design space (DSp) exploration strategy," defined as a function of four key scenarios, was successfully integrated and validated to enhance the DSp building exercise, by increasing the accuracy of analyses and interpretation of processed data. The four key scenarios, defining the strategy, were based on cumulative analyses of individual models developed for the Critical Quality Attributes (23 Glycan Profiles) considered for the study. The analyses of the CQA estimates and model performances were interpreted as (1) Inside Specification/Significant Model (2) Inside Specification/Non-significant Model (3) Outside Specification/Significant Model (4) Outside Specification/Non-significant Model. Each scenario was defined and illustrated through individual models of CQA aligning the description. The R(2), Q(2), Model Validity and Model Reproducibility estimates of G2, G2FaGbGN, G0 and G2FaG2, respectively, signified the four scenarios stated above. Through further optimizations, including the estimation of Edge of Failure and Set Point Analysis, wider and accurate DSps were created for each scenario, establishing critical functional relationship between Critical Process Parameters (CPPs) and Critical Quality Attributes (CQAs). A DSp provides the optimal region for systematic evaluation, mechanistic understanding and refining of a QbD approach. DSp exploration strategy will aid the critical process of consistently and reproducibly achieving predefined quality of a product throughout its lifecycle. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Grootes, M. W.; Tuffs, R. J.; Popescu, C. C.; Robotham, A. S. G.; Seibert, M.; Kelvin, L. S.
2014-02-01
We present a non-parametric cell-based method of selecting highly pure and largely complete samples of spiral galaxies using photometric and structural parameters as provided by standard photometric pipelines and simple shape fitting algorithms. The performance of the method is quantified for different parameter combinations, using purely human-based classifications as a benchmark. The discretization of the parameter space allows a markedly superior selection than commonly used proxies relying on a fixed curve or surface of separation. Moreover, we find structural parameters derived using passbands longwards of the g band and linked to older stellar populations, especially the stellar mass surface density μ* and the r-band effective radius re, to perform at least equally well as parameters more traditionally linked to the identification of spirals by means of their young stellar populations, e.g. UV/optical colours. In particular, the distinct bimodality in the parameter μ*, consistent with expectations of different evolutionary paths for spirals and ellipticals, represents an often overlooked yet powerful parameter in differentiating between spiral and non-spiral/elliptical galaxies. We use the cell-based method for the optical parameter set including re in combination with the Sérsic index n and the i-band magnitude to investigate the intrinsic specific star formation rate-stellar mass relation (ψ*-M*) for a morphologically defined volume-limited sample of local Universe spiral galaxies. The relation is found to be well described by ψ* ∝ M*^(-0.5) over the range 10^9.5 ≤ M* ≤ 10^11 M⊙ with a mean interquartile range of 0.4 dex. This is somewhat steeper than previous determinations based on colour-selected samples of star-forming galaxies, primarily due to the inclusion in the sample of red quiescent discs.
NASA Astrophysics Data System (ADS)
Romano, M.; Mays, M. L.; Taktakishvili, A.; MacNeice, P. J.; Zheng, Y.; Pulkkinen, A. A.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Modeling coronal mass ejections (CMEs) is of great interest to the space weather research and forecasting communities. We present recent validation work on real-time CME arrival time predictions at different satellites using the WSA-ENLIL+Cone three-dimensional MHD heliospheric model available at the Community Coordinated Modeling Center (CCMC), performed by the Space Weather Research Center (SWRC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. The quality of model operation is evaluated by comparing its output to a measurable parameter of interest, such as the CME arrival time and geomagnetic storm strength. The Kp index is calculated from the relation given in Newell et al. (2007), using solar wind parameters predicted by the WSA-ENLIL+Cone model at Earth. The CME arrival time error is defined as the difference between the predicted arrival time and the observed in-situ CME shock arrival time at the ACE, STEREO A, or STEREO B spacecraft. This study includes all real-time WSA-ENLIL+Cone model simulations performed between June 2011 and 2013 (over 400 runs) at the CCMC/SWRC. We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For hits we show the average absolute CME arrival time error and the dependence of this error on CME input parameters such as speed, width, and direction. We also present the error in predicted geomagnetic storm strength (using the Kp index) for Earth-directed CMEs.
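A sketch of the two verification quantities named above. The Newell et al. (2007) coupling function is standard; the final mapping from dΦ/dt to Kp uses fit coefficients from that paper which are not reproduced here, so the sketch stops at the coupling function and the arrival-time error definition:

```python
import numpy as np

def newell_coupling(v, by, bz):
    """Newell et al. (2007) solar wind-magnetosphere coupling function:
    dPhi_MP/dt = v^(4/3) * B_T^(2/3) * |sin(theta_c / 2)|^(8/3),
    with B_T = sqrt(By^2 + Bz^2) and clock angle theta_c = atan2(By, Bz).
    Inputs are the modeled solar wind at Earth (v in km/s, B in nT);
    Kp then follows from a fit to this quantity (coefficients omitted)."""
    bt = np.hypot(by, bz)
    theta = np.arctan2(by, bz)
    return v ** (4 / 3) * bt ** (2 / 3) * np.abs(np.sin(theta / 2)) ** (8 / 3)

def arrival_time_error(t_predicted, t_observed):
    """CME arrival-time error in hours, as defined in the abstract;
    both arguments are datetime.datetime objects."""
    return (t_predicted - t_observed).total_seconds() / 3600.0
```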
NASA Astrophysics Data System (ADS)
Sheikholeslami, R.; Hosseini, N.; Razavi, S.
2016-12-01
Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. The method therefore preserves the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
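For reference, a classical one-stage Latin hypercube construction, the baseline that PLHS generalizes by emitting the design in progressive slices whose union retains the property; this sketch shows only the one-stage stratification each slice must respect, not the PLHS slicing logic itself:

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """One-stage LHS: n points in [0,1]^d with exactly one point in each of
    the n equal strata of every one-dimensional projection."""
    rng = rng or np.random.default_rng(2)
    x = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)             # stratum order for dimension j
        x[:, j] = (perm + rng.random(n)) / n  # jitter within each stratum
    return x

x = latin_hypercube(8, 3)
# every column places one sample per stratum [k/8, (k+1)/8)
assert all(len(set((x[:, j] * 8).astype(int))) == 8 for j in range(3))
```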
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may still be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability of adopting configurations with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably the basic principles that a starting configuration is randomly selected from within the parameter space, that the algorithm tests other configurations with the goal of finding the globally optimal solution, and that the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm a single path, or trajectory, is taken in parameter space from the starting point to the globally optimal solution, while in the RBSA algorithm many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than conventional algorithms. Novel features of the RBSA algorithm include:
1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored;
2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and
3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space, to improve search efficiency by allowing fast fine-tuning of the continuous variables within the trust region at that configuration point.
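The conventional SA loop described above is compact enough to sketch. Below is a minimal Python illustration; the objective, bounds, cooling schedule, and shrink rate are assumptions for the example, not the NASA implementation, and RBSA would launch many such trajectories from recursively branched starting configurations:

```python
import numpy as np

def simulated_annealing(f, x0, lo, hi, n_iter=5000, T0=1.0,
                        rng=np.random.default_rng(3)):
    """Conventional SA: random moves in a shrinking neighborhood, with worse
    moves accepted with Boltzmann probability exp(-dE / T)."""
    x = np.asarray(x0, float)
    fx = f(x)
    best, fbest = x.copy(), fx
    for k in range(n_iter):
        frac = 1 - k / n_iter
        T = T0 * frac                      # linear cooling schedule
        step = 0.5 * frac * (hi - lo)      # shrinking search region
        cand = np.clip(x + rng.uniform(-step, step, size=x.shape), lo, hi)
        fc = f(cand)
        # accept improvements always, worse moves with temperature-dependent
        # probability
        if fc < fx or rng.random() < np.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

# example: a multimodal (Rastrigin-like) test objective on [-5, 5]^2
f = lambda x: np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))
print(simulated_annealing(f, [4.0, -4.0], -5.0, 5.0))
```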
Definition of fractal topography to essential understanding of scale-invariance
NASA Astrophysics Data System (ADS)
Jin, Yi; Wu, Ying; Li, Hui; Zhao, Mengyu; Pan, Jienan
2017-04-01
Fractal behavior is scale-invariant and widely characterized by the fractal dimension. However, the correspondence between them is that a fractal behavior uniquely determines a fractal dimension, while a fractal dimension can be related to many possible fractal behaviors. Therefore, fractal behavior is independent of the fractal generator and its geometry, spatial pattern, and statistical properties, in addition to scale. To mathematically describe fractal behavior, we propose a novel concept of fractal topography, defined by two scale-invariant parameters: scaling lacunarity (P) and scaling coverage (F). The scaling lacunarity is defined as the scale ratio between two successive fractal generators, whereas the scaling coverage is defined as the number ratio between them. Consequently, a strictly scale-invariant definition for self-similar fractals can be derived as D = log F / log P. To reflect the direction-dependence of fractal behaviors, we introduce another parameter, Hxy, a general Hurst exponent, which is analytically expressed by Hxy = log Px / log Py, where Px and Py are the scaling lacunarities in the x and y directions, respectively. Thus, a unified definition of fractal dimension is proposed for arbitrary self-similar and self-affine fractals by averaging the fractal dimensions of all directions in a d-dimensional space. Our definitions provide a theoretical, mechanistic basis for understanding the essentials of the scale-invariant property that reduces the complexity of modeling fractals.
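As a concrete check of the definition D = log F / log P, consider the Sierpinski carpet: each generator step divides a square by a scale ratio P = 3 and keeps F = 8 of the 9 sub-squares, so the definition recovers the familiar dimension log 8 / log 3 ≈ 1.893. A two-line computation (our illustration, not from the paper):

```python
import math

P, F = 3, 8                         # scaling lacunarity and scaling coverage of the Sierpinski carpet
print(math.log(F) / math.log(P))    # D = log F / log P ~= 1.8928
```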
NASA Astrophysics Data System (ADS)
Kitt, R.; Kalda, J.
2006-03-01
The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called leptokurtic because it minimises the effects of the "fat tails" of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risks in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with the distribution tails. These risks are quantitatively measured by defining the "noise kernel", an ellipsoidal cloud of points in the space of asset returns. The size of the ellipsoid is controlled by a threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling inside the kernel are used for the calculation of the fluctuation risk. Analogously, the data points falling outside the kernel are used for the calculation of the drawdown risk. As a result, the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risk involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising the variance outside the noise kernel. The theory has been tested with the MSCI North America, Europe and Pacific total return stock indices.
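A minimal sketch of the kernel idea, under simplifying assumptions of ours (a spherical rather than ellipsoidal kernel, and a hypothetical threshold c): return vectors inside the kernel contribute to the fluctuation risk, those outside to the drawdown risk.

```python
import numpy as np

def kernel_risks(returns, c=2.0):
    """returns: (T, n) array of asset-return vectors; c: kernel threshold.
    Uses a spherical kernel of radius c (in standardized units) for simplicity."""
    z = (returns - returns.mean(axis=0)) / returns.std(axis=0)
    inside = np.linalg.norm(z, axis=1) <= c          # points in the "noise kernel"
    fluctuation_risk = returns[inside].var(axis=0)   # Gaussian-like core
    drawdown_risk = returns[~inside].var(axis=0)     # tail contribution
    return fluctuation_risk, drawdown_risk

# Hypothetical usage with simulated fat-tailed returns:
rets = np.random.standard_t(df=3, size=(1000, 4)) * 0.01
print(kernel_risks(rets))
```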
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
NASA Astrophysics Data System (ADS)
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations, which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis, used to determine important core attribute variations due to input parameter variations, and uncertainty quantification, employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, state variable, or output response spaces by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm that renders the reduction with reduction errors bounded by a user-defined error tolerance, which addresses the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defendable accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range-finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model, with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously, thereby precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic nature of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
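One concrete way to realize the range-finding step mentioned above is the randomized range finder of Halko et al.; the sketch below is our illustration, not the dissertation's code. It extracts an active subspace from a matrix of state snapshots and stops when the residual norm falls below a user-defined tolerance, which is the computable error bound in this simplified setting.

```python
import numpy as np

def range_finder(snapshots, tol=1e-6, block=5, max_rank=100):
    """Find Q with orthonormal columns such that ||A - Q Q^T A|| is small.
    snapshots: (n_state, n_runs) matrix A of high-fidelity solution snapshots."""
    A = snapshots.copy()
    n = A.shape[0]
    Q = np.zeros((n, 0))
    while Q.shape[1] < max_rank:
        # Probe the range of the residual with a random Gaussian block.
        Y = A @ np.random.randn(A.shape[1], block)
        Y -= Q @ (Q.T @ Y)                      # project out directions already captured
        if np.linalg.norm(Y, 2) < tol:          # small residual -> subspace captured
            break
        Qi, _ = np.linalg.qr(Y)
        Q = np.hstack([Q, Qi])
        A -= Qi @ (Qi.T @ A)                    # deflate the captured directions
    return Q

# Usage: reduce a snapshot set of effective rank 8.
A = np.random.randn(500, 8) @ np.random.randn(8, 60)
Q = range_finder(A)
print(Q.shape, np.linalg.norm(A - Q @ (Q.T @ A)))
```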
Investigation of Space Based Solid State Coherent Lidar
NASA Technical Reports Server (NTRS)
Amzajerdian, Farzin
2002-01-01
This report describes the work performed over the period of October 1, 1997 through March 31, 2001. Under this contract, UAH/CAO participated in defining and designing the SPAce Readiness Coherent Lidar Experiment (SPARCLE) mission and developed the instrument's optical subsystem. This work was performed in a collaborative fashion with NASA/MSFC engineers at both UAH/CAO and NASA/MSFC facilities. Earlier work by UAH/CAO had produced a preliminary top-level system design for the Shuttle lidar instrument meeting the proposed mission performance requirements and the Space Shuttle Hitchhiker canister volume constraints. The UAH/CAO system design efforts had concentrated on the optical and mechanical designs of the instrument. The instrument electronics were also addressed, and the major electronic components and their interfaces defined. The instrument design concept was mainly based on the state of transmitter and local-oscillator laser development at NASA Langley Research Center and the Jet Propulsion Laboratory, and it utilized several lidar-related technologies that were either developed or evaluated by NASA/MSFC and UAH/CAO scientists. UAH/CAO has developed a comprehensive coherent lidar numerical model capable of analyzing the performance of different instrument and mission concepts. This model uses the instrument configuration, atmospheric conditions, and current velocity-estimation theory to provide predictions of instrument performance during different phases of operation. The model can also optimize the design parameters of the instrument.
International Docking System Standard (IDSS) Interface Definition Document (IDD), Revision E
NASA Technical Reports Server (NTRS)
Kelly, Sean M.; Cryan, Scott P.
2016-01-01
This International Docking System Standard (IDSS) Interface Definition Document (IDD) is the result of a collaboration by the International Space Station membership to establish a standard docking interface to enable on-orbit crew rescue operations and joint collaborative endeavors utilizing different spacecraft. This IDSS IDD details the physical geometric mating interface and design loads requirements. The physical geometric interface requirements must be strictly followed to ensure physical spacecraft mating compatibility. This includes both defined components and areas that are void of components. The IDD also identifies common design parameters as identified in section 3.0, e.g., docking initial conditions and vehicle mass properties. This information represents a recommended set of design values enveloping a broad set of design reference missions and conditions, which, if accommodated in the docking system design, increases the probability of successful docking between different spacecraft. This IDD does not address operational procedures or off-nominal situations, nor does it dictate implementation or design features behind the mating interface. It is the responsibility of the spacecraft developer to perform all hardware verification and validation, and to perform final docking analyses to ensure the needed docking performance and to develop the final certification loads for their application. While there are many other critical requirements needed in the development of a docking system, such as fault tolerance, reliability, and environments (e.g. vibration, etc.), it is not the intent of the IDSS IDD to mandate all of these requirements; these requirements must be addressed as part of the specific developer's unique program, spacecraft and mission needs. This approach allows designers the flexibility to design and build docking mechanisms to their unique program needs and requirements. The purpose of the IDSS IDD is to provide basic common design parameters to allow developers to independently design compatible docking systems. The IDSS is intended for uses ranging from crewed to autonomous space vehicles, and from Low Earth Orbit (LEO) to deep-space exploration missions.
NASA Technical Reports Server (NTRS)
1979-01-01
The performance, design, and verification requirements for the Space Construction Automated Fabrication Experiment (SCAFE) are defined. The SCAFE program defines, develops, and demonstrates the techniques, processes, and equipment required for the automatic fabrication of structural elements in space and for the assembly of such elements into a large, lightweight structure. The program defines a large structural platform to be constructed in orbit using the space shuttle as a launch vehicle and construction base.
Preliminary study of the space adaptation of the MELiSSA life support system
NASA Astrophysics Data System (ADS)
Mas-Albaigès, Joan L.; Duatis, Jordi; Podhajsky, Sandra; Guirado, Víctor; Poughon, Laurent
MELiSSA (Micro-Ecological Life Support System Alternative) is a European Space Agency (ESA) project focused on the development of a closed regenerative life support system, intended to aid the development of technologies for future life support systems for long-term manned planetary missions, e.g., a lunar base or missions to Mars. In order to understand the potential evolution of the MELiSSA concept towards its future use in such a manned planetary mission context, the MELiSSA Space Adaptation (MSA) activity has been undertaken. MSA's main objective is to model the different MELiSSA compartments using EcosimPro®, a specialized simulation tool for life support applications, in order to define a preliminary MELiSSA implementation for service in a man-tended lunar base scenario, with a four-member crew rotating in six-month increments, performing the basic LSS functions of air revitalization, food production, and waste and water recycling. The MELiSSA EcosimPro® model features a dedicated library for the different MELiSSA elements (bioreactors, greenhouse, crew, interconnecting elements, etc.). It is used to dimension the MELiSSA system in terms of major parameters such as mass, volume and energy needs, to evaluate the accuracy of the results, and to define the strategy for a progressive loop closure from the initial required performance (approx. 100…). The MELiSSA configuration(s) obtained through the EcosimPro® simulation are further analysed using the Advanced Life Support System Evaluation (ALISSE) metric, relying on mass, energy, efficiency, human risk, system reliability and crew time, for trade-off and optimization of results. The outcome of the MSA activity is thus a potential Life Support System architecture description, based on combined MELiSSA and other physico-chemical technologies, defining its expected performance, associated operational conditions, and logistic needs.
NASA Astrophysics Data System (ADS)
Tramutoli, V.; Eleftheriou, A.; Filizzola, C.; Genzano, N.; Lacava, T.; Lisi, M.; Paciello, R.; Pergola, N.; Vallianatos, F.
2016-12-01
From an appropriate identification and real-time integration of independent observations, we expect to significantly improve our present capability to dynamically assess seismic hazard. Sometimes one specific observation (e.g., an anomaly in one parameter) can be used as a trigger or as a reference point (in the space and/or time domain) for activating or improving analyses of other independent parameters (e.g., b-value computation and/or Natural Time Analysis of seismic data) whose systematic computation could otherwise be very computationally expensive or impossible. In this paper, one of these parameters (the Earth's emitted radiation in the thermal infrared spectral region) is used to drive the application of Natural Time Analysis of seismic data, in order to verify possible improvements in the forecast of earthquakes (with M ≥ 4) that occurred in Greece in the 10-year period 2004-2013. The RST (Robust Satellite Technique) data analysis approach and the RETIRA (Robust Estimator of TIR Anomalies) index were used to preliminarily define, and then to identify, Significant Sequences of TIR Anomalies (SSTAs) in 10 years (2004-2013) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite. A previous paper already demonstrated that more than 93% of all identified SSTAs occurred in a pre-fixed space-time window around earthquake times (from 30 days before up to 15 days after) and locations (within 150 km, or the Dobrovolsky distance), with a false-positive rate smaller than 7%. In this paper, just the barycenter of the SSTAs (and not the whole alerted area) is used to define the center of the circular area from which the seismic data required for NTA analysis are collected. Changes in the quality of earthquake forecasts achieved by using each individual parameter in different configurations, as well as the improvement arising from their joint use, are presented with reference to the 10-year period considered and to several recent events that occurred in Greece.
Generalized Reich-Moore R-matrix approximation
NASA Astrophysics Data System (ADS)
Arbanas, Goran; Sobes, Vladimir; Holcomb, Andrew; Ducru, Pablo; Pigni, Marco; Wiarda, Dorothea
2017-09-01
A conventional Reich-Moore approximation (RMA) of the R-matrix is generalized into a manifestly unitary form by introducing a set of resonant capture channels treated explicitly in a generalized, reduced R-matrix. The dramatic reduction of channel space witnessed in the conventional RMA, from the Nc × Nc full R-matrix to an Np × Np reduced R-matrix, where Nc = Np + Nγ, with Np and Nγ denoting the numbers of particle and γ-ray channels, respectively, is due to Np < Nγ. The corresponding reduction of channel space in the generalized RMA (GRMA) is from the Nc × Nc full R-matrix to N × N, where N = Np + Ñγ, and where Ñγ is the number of resonant capture channels defined in GRMA. We show that Ñγ = Nλ, where Nλ is the number of R-matrix levels. This reduction in channel space, although not as dramatic as in the conventional RMA, could be significant for medium and heavy nuclides, where Nλ < Nγ. The resonant capture channels defined by GRMA accommodate level-level interference (via capture channels) neglected in the conventional RMA. The expression for the total capture cross section in GRMA is formally equal to that of the full Nc × Nc R-matrix. This suggests that GRMA could yield improved nuclear data evaluations in the resolved resonance range at the cost of introducing Nλ(Nλ − 1)/2 resonant capture width parameters relative to the conventional RMA. The manifest unitarity of GRMA justifies a method advocated by Fröhner and implemented in the SAMMY nuclear data evaluation code for enforcing unitarity of the conventional RMA. Capture widths of GRMA are exactly convertible into alternative R-matrix parameters via the Brune transform. Application of idealized statistical methods to GRMA shows that the variance among conventional RMA capture widths in extant RMA evaluations could be used to estimate the variance among the off-diagonal elements neglected by the conventional RMA. Significant departure of the capture widths from an idealized distribution may indicate the presence of underlying doorway states.
The NASA Advanced Space Power Systems Project
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.; Hoberecht, Mark A.; Bennett, William R.; Lvovich, Vadim F.; Bugga, Ratnakumar
2015-01-01
The goal of the NASA Advanced Space Power Systems Project is to develop advanced, game-changing technologies that will provide future NASA space exploration missions with safe, reliable, lightweight and compact power generation and energy storage systems. The development effort is focused on maturing the technologies from a technology readiness level of approximately 2-3 to approximately 5-6, as defined in NASA Procedural Requirement 7123.1B. Currently, the project is working on two critical technology areas: high specific energy batteries, and regenerative fuel cell systems with passive fluid management. Examples of target applications for these technologies are: extending the duration of extravehicular activities (EVA) with high specific energy and energy density batteries; and providing reliable, long-life power for rovers with passive fuel cell and regenerative fuel cell systems that enable reduced system complexity. Recent results from the high energy battery and regenerative fuel cell technology development efforts are presented. The technical approach, the key performance parameters, and the technical results achieved to date in each of these new elements are included. The Advanced Space Power Systems Project is part of the Game Changing Development Program under NASA's Space Technology Mission Directorate.
Method for early detection of cooling-loss events
Bermudez, Sergio A.; Hamann, Hendrik; Marianno, Fernando J.
2015-06-30
A method for the early detection of cooling-loss events is provided. The method includes defining a relative humidity limit and a change threshold for a given space; measuring the relative humidity in the given space; determining, with a processing unit, whether the measured relative humidity is within the defined relative humidity limit; generating a warning in the event the measured relative humidity is outside the defined relative humidity limit; determining whether a change in the measured relative humidity exceeds the defined change threshold for the given space; and generating an alarm in the event the change is greater than the defined change threshold.
Method for early detection of cooling-loss events
Bermudez, Sergio A.; Hamann, Hendrik F.; Marianno, Fernando J.
2015-12-22
A method for the early detection of cooling-loss events is provided. The method includes defining a relative humidity limit and a change threshold for a given space; measuring the relative humidity in the given space; determining, with a processing unit, whether the measured relative humidity is within the defined relative humidity limit; generating a warning in the event the measured relative humidity is outside the defined relative humidity limit; determining whether a change in the measured relative humidity exceeds the defined change threshold for the given space; and generating an alarm in the event the change is greater than the defined change threshold.
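The claimed logic is simple enough to state in code; the sketch below is our reading of the claim, with hypothetical limit and threshold values, not the patented implementation.

```python
def check_humidity(rh, prev_rh, rh_limit=(20.0, 80.0), change_threshold=5.0):
    """Return the list of events raised by one relative-humidity reading (%)."""
    events = []
    if not (rh_limit[0] <= rh <= rh_limit[1]):   # outside the defined RH limit
        events.append("warning")
    if abs(rh - prev_rh) > change_threshold:     # change exceeds the defined threshold
        events.append("alarm")                   # rapid RH change suggests cooling loss
    return events

print(check_humidity(rh=62.0, prev_rh=48.0))     # -> ['alarm']
```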
Environmental Controls on Space-Time Biodiversity Patterns in the Amazon
NASA Astrophysics Data System (ADS)
Porporato, A. M.; Bonetti, S.; Feng, X.
2014-12-01
The Amazon/Andes territory is characterized by the highest biodiversity on Earth, and understanding how all these ecological niches and different species originated and developed is an open challenge. The niche perspective assumes that species have evolved to occupy deterministically different roles within their environment. This view differs from that of the neutral theories, which assume ecological equivalence between all species but incorporate stochastic demographic processes along with long-term migration and speciation rates. Both approaches have demonstrated tremendous power in predicting aspects of species biodiversity. By combining tools from both approaches, we use modified birth and death processes to simulate plant species diversification in the Amazon/Andes and their space-time ecohydrological controls. By defining the parameters related to births and deaths as functions of available resources, we incorporate the role of space-time resource variability in niche formation and community composition. We also explicitly include the role of a heterogeneous landscape and topography. The results are discussed in relation to transect datasets from neotropical forests.
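A toy version of the resource-modulated birth-death idea (our sketch; the rate forms and parameter values are hypothetical, not taken from the study):

```python
import random

def birth_death(n0=50, resource=1.0, b0=1.0, d0=0.5, K=100.0, steps=10000):
    """Gillespie simulation of one species whose birth rate scales with local
    resource availability and whose death rate grows with crowding."""
    n, t = n0, 0.0
    while steps and n > 0:
        birth = b0 * resource * n          # hypothetical resource-dependent birth rate
        death = d0 * n * (1 + n / K)       # crowding raises mortality
        total = birth + death
        t += random.expovariate(total)     # waiting time to the next event
        n += 1 if random.random() < birth / total else -1
        steps -= 1
    return t, n

print(birth_death())
```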
Some characteristics of the international space channel
NASA Technical Reports Server (NTRS)
Noack, T. L.; Poland, W. B., Jr.
1975-01-01
Some physical characteristics of radio transmission links and the technology of PCM modulation combine with the Radio Regulations of the International Telecommunications Union to define a communications channel having a determinable channel capacity, error rate, and sensitivity to interference. These characteristics and the corresponding limitations on EIRP, power flux density, and power spectral density for space service applications are described. The ITU regulations create a critical height of 1027 km where some parameters of the limitation rules change. The nature of the restraints on power spectral density is discussed, and an approach to a standardized representation of Necessary Bandwidth for the Space Services is described. It is shown that, given the PFD (power flux density) and PSD (power spectral density) limitations of the radio regulations, channel performance is determined by the ratio of effective receiving antenna aperture to system noise temperature. Based on this approach, a method for a quantitative trade-off between spectrum spreading and system performance is presented. Finally, the effects of radio-frequency interference between standard systems are analyzed.
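The stated dependence on the ratio of effective aperture to noise temperature follows from the basic link relation C/N0 = Φ·Ae/(k·Ts), where Φ is the power flux density at the receiver, Ae the effective receiving aperture, and Ts the system noise temperature. A small numeric check, with example values of our choosing:

```python
import math

k = 1.380649e-23                     # Boltzmann constant, J/K

def cn0_dbhz(pfd_w_m2, aperture_m2, tsys_k):
    """Carrier-to-noise-density ratio implied by a given power flux density."""
    return 10 * math.log10(pfd_w_m2 * aperture_m2 / (k * tsys_k))

# Hypothetical example: PFD of 1e-14 W/m^2, 10 m^2 effective aperture, 100 K system.
print(cn0_dbhz(1e-14, 10.0, 100.0))  # C/N0 in dB-Hz, about 78.6
```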
Optimization of Compton Source Performance through Electron Beam Shaping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyzhenkov, Alexander; Yampolsky, Nikolai
2016-09-26
We investigate a novel scheme for significantly increasing the brightness of x-ray light sources based on inverse Compton scattering (ICS) - scattering laser pulses off relativistic electron beams. The brightness of ICS sources is limited by the electron beam quality, since electrons traveling at different angles, and/or having different energies, produce photons with different energies. Therefore, the spectral brightness of the source is defined by the 6D electron phase space shape and size, as well as the laser beam parameters. The peak brightness of the ICS source can be maximized, then, if the electron phase space is transformed in such a way that all electrons scatter x-ray photons of the same frequency in the same direction, arriving at the observer at the same time. We describe the x-ray photon beam quality through the Wigner function (the 6D photon phase space distribution) and derive it for the ICS source when the electron and laser rms matrices are arbitrary.
Stickiness in Hamiltonian systems: From sharply divided to hierarchical phase space
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; Motter, Adilson E.; Kantz, Holger
2006-02-01
We investigate the dynamics of chaotic trajectories in simple yet physically important Hamiltonian systems with nonhierarchical borders between regular and chaotic regions with positive measures. We show that the stickiness to the border of the regular regions in systems with such a sharply divided phase space occurs through one-parameter families of marginally unstable periodic orbits and is characterized by an exponent γ=2 for the asymptotic power-law decay of the distribution of recurrence times. Generic perturbations lead to systems with hierarchical phase space, where the stickiness is apparently enhanced due to the presence of infinitely many regular islands and Cantori. In this case, we show that the distribution of recurrence times can be composed of a sum of exponentials or a sum of power laws, depending on the relative contribution of the primary and secondary structures of the hierarchy. Numerical verification of our main results is provided for area-preserving maps, mushroom billiards, and the newly defined magnetic mushroom billiards.
Multiple Scattering in Random Mechanical Systems and Diffusion Approximation
NASA Astrophysics Data System (ADS)
Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun
2013-10-01
This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions, and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint, (densely) defined on the space of square-integrable functions over the (lower) half-space with respect to a stationary measure η. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.
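The Knudsen cosine law mentioned above admits a standard sampling recipe (cosine-weighted directions over the half-space); a minimal sampler, ours for illustration:

```python
import math
import random

def knudsen_direction():
    """Sample a unit outgoing direction from the Knudsen cosine law:
    pdf(direction) proportional to cos(theta) over the upper half-space."""
    u1, u2 = random.random(), random.random()
    cos_t = math.sqrt(u1)            # cosine-weighted polar angle
    sin_t = math.sqrt(1.0 - u1)
    phi = 2.0 * math.pi * u2
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

print(knudsen_direction())
```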
Higgs mass from D-terms: a litmus test
NASA Astrophysics Data System (ADS)
Cheung, Clifford; Roberts, Hannes L.
2013-12-01
We explore supersymmetric theories in which the Higgs mass is boosted by the non-decoupling D-terms of an extended U(1)_X gauge symmetry, defined here to be a general linear combination of hypercharge, baryon number, and lepton number. Crucially, the gauge coupling, g_X, is bounded from below to accommodate the Higgs mass, while the quarks and leptons are required by gauge invariance to carry non-zero charge under U(1)_X. This induces an irreducible rate, σBR, for pp → X → ℓℓ relevant to existing and future resonance searches, and gives rise to higher-dimension operators that are stringently constrained by precision electroweak measurements. Combined, these bounds define a maximally allowed region in the space of observables, (σBR, m_X), outside of which is excluded by naturalness and experimental limits. If natural supersymmetry utilizes non-decoupling D-terms, then the associated X boson can only be observed within this window, providing a model-independent `litmus test' for this broad class of scenarios at the LHC. Comparing limits, we find that current LHC results only exclude regions of parameter space which were already disfavored by precision electroweak data.
Fractal boundary basins in spherically symmetric ϕ4 theory
NASA Astrophysics Data System (ADS)
Honda, Ethan
2010-07-01
Results are presented from numerical simulations of the flat-space nonlinear Klein-Gordon equation with an asymmetric double-well potential in spherical symmetry. Exit criteria are defined for the simulations that are used to help understand the boundaries of the basins of attraction for Gaussian “bubble” initial data. The first exit criterion, based on the immediate collapse or expansion of bubble radius, is used to observe the departure of the scalar field from a static intermediate attractor solution. The boundary separating these two behaviors in parameter space is smooth and demonstrates a time-scaling law with an exponent that depends on the asymmetry of the potential. The second exit criterion differentiates between the creation of an expanding true-vacuum bubble and dispersion of the field leaving the false vacuum; the boundary separating these basins of attraction is shown to demonstrate fractal behavior. The basins are defined by the number of bounces that the field undergoes before inducing a phase transition. A third, hybrid exit criterion is used to determine the location of the boundary to arbitrary precision and to characterize the threshold behavior. The possible effects this behavior might have on cosmological phase transitions are briefly discussed.
NASA Technical Reports Server (NTRS)
Li, C. J.; Devries, W. R.; Ludema, K. C.
1983-01-01
Measurements made with a stylus surface tracer, which provides a digitized representation of a surface profile, are discussed. Parameters are defined to characterize the height (e.g., RMS roughness, skewness, and kurtosis) and length (e.g., autocorrelation) of the surface topography. These are applied to the characterization of crankshaft journals which were manufactured by different grinding and lapping procedures known to give significant differences in crankshaft bearing life. It was found that three parameters (RMS roughness, skewness, and kurtosis) are necessary to adequately distinguish the character of these surfaces. Every surface specimen has a set of values for these three parameters, which can be regarded as a coordinate in a space spanned by three characteristic axes. The various journal surfaces can be classified, along with the determination of a proper wavelength cutoff (0.25 mm), by using a method of separated subspaces. The finite radius of the stylus used for profile tracing gives an inherent measurement error as it passes over the fine structure of the surface. A mathematical model is derived to compensate for this error.
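The three retained parameters have standard definitions; a short sketch of their computation from a digitized profile (our code, with stand-in data):

```python
import numpy as np

def height_parameters(z):
    """RMS roughness, skewness and kurtosis of a digitized profile z (heights)."""
    dev = z - z.mean()                 # deviations from the mean line
    rq = np.sqrt(np.mean(dev ** 2))    # RMS roughness
    rsk = np.mean(dev ** 3) / rq ** 3  # skewness: asymmetry of the height distribution
    rku = np.mean(dev ** 4) / rq ** 4  # kurtosis: peakedness / spikiness
    return rq, rsk, rku

profile = np.random.randn(2048) * 0.4  # stand-in for stylus-trace samples (um)
print(height_parameters(profile))
```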
Exploring the free-energy landscape of a short peptide using an average force
NASA Astrophysics Data System (ADS)
Chipot, Christophe; Hénin, Jérôme
2005-12-01
The reversible folding of deca-alanine is chosen as a test case for characterizing a method that uses an adaptive biasing force (ABF) to escape from the minima and overcome the barriers of the free-energy landscape. This approach relies on the continuous estimation of a biasing force that yields a Hamiltonian in which no average force is exerted along the ordering parameter ξ. When the parameters that control how the ABF is applied are optimized, the method is shown to be extremely effective whenever an unequivocal ordering parameter can be defined to explore the folding pathway of the peptide. Starting from a β-turn motif and restraining ξ to a region of the conformational space that extends from the α-helical state to an ensemble of extended structures, the ABF scheme is successful in folding the peptide chain into a compact α-helix. Sampling of this conformation is, however, marginal when the range of ξ values embraces arrangements of greater compactness, demonstrating the inherent limitations of free-energy methods when ambiguous ordering parameters are utilized.
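A toy one-dimensional version of the ABF idea (ours, with a hypothetical double-well potential; the ordering parameter is simply the coordinate x): the running average of the instantaneous force is accumulated in bins of ξ and applied with opposite sign, so that the biased dynamics feel no mean force along ξ.

```python
import math
import random

def abf_1d(steps=200000, dt=1e-3, kT=1.0, nbins=50, lo=-2.0, hi=2.0):
    force = lambda x: -4 * x * (x * x - 1)   # -dV/dx for the double well V = (x^2 - 1)^2
    fsum = [0.0] * nbins                      # accumulated instantaneous force per xi-bin
    count = [0] * nbins
    x = -1.0
    for _ in range(steps):
        b = min(nbins - 1, max(0, int((x - lo) / (hi - lo) * nbins)))
        f = force(x)
        fsum[b] += f
        count[b] += 1
        bias = -fsum[b] / count[b]            # ABF: cancel the running mean force
        # overdamped Langevin step with the biasing force added
        x += (f + bias) * dt + math.sqrt(2 * kT * dt) * random.gauss(0, 1)
    # the accumulated mean force per bin is (minus) the gradient of the free energy
    return [s / c if c else 0.0 for s, c in zip(fsum, count)]

print(abf_1d()[:5])
```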
Low-mass neutralino dark matter in supergravity scenarios: phenomenology and naturalness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peiró, M.; Robles, S., E-mail: mpeirogarcia@gmail.com, E-mail: sandra.robles@uam.es
2017-05-01
The latest experimental results from the LHC and dark matter (DM) searches suggest that the parameter space allowed in supersymmetric theories is subject to strong reductions. These bounds are especially constraining for scenarios entailing light DM particles. Previous studies have shown that light neutralino DM in the Minimal Supersymmetric Standard Model (MSSM), with parameters defined at the electroweak scale, is still viable when the low energy spectrum of the model features light sleptons, in which case, the relic density constraint can be fulfilled. In view of this, we have investigated the viability of light neutralinos as DM candidates in the MSSM, with parameters defined at the grand unification scale. We have analysed the optimal choices of non-universalities in the soft supersymmetry-breaking parameters for both, gauginos and scalars, in order to avoid the stringent experimental constraints. We show that light neutralinos, with a mass as low as 25 GeV, are viable in supergravity scenarios if the gaugino mass parameters at high energy are very non universal, while the scalar masses can remain of the same order. These scenarios typically predict a very small cross section of neutralinos off protons and neutrons, thereby being very challenging for direct detection experiments. However, a potential detection of smuons and selectrons at the LHC, together with a hypothetical discovery of a gamma-ray signal from neutralino annihilations in dwarf spheroidal galaxies could shed light on this kind of solutions. Finally, we have investigated the naturalness of these scenarios, taking into account all the potential sources of tuning. Besides the electroweak fine-tuning, we have found that the tuning to reproduce the correct DM relic abundance and that to match the measured Higgs mass can also be important when estimating the total degree of naturalness.
Low-mass neutralino dark matter in supergravity scenarios: phenomenology and naturalness
NASA Astrophysics Data System (ADS)
Peiró, M.; Robles, S.
2017-05-01
The latest experimental results from the LHC and dark matter (DM) searches suggest that the parameter space allowed in supersymmetric theories is subject to strong reductions. These bounds are especially constraining for scenarios entailing light DM particles. Previous studies have shown that light neutralino DM in the Minimal Supersymmetric Standard Model (MSSM), with parameters defined at the electroweak scale, is still viable when the low energy spectrum of the model features light sleptons, in which case, the relic density constraint can be fulfilled. In view of this, we have investigated the viability of light neutralinos as DM candidates in the MSSM, with parameters defined at the grand unification scale. We have analysed the optimal choices of non-universalities in the soft supersymmetry-breaking parameters for both, gauginos and scalars, in order to avoid the stringent experimental constraints. We show that light neutralinos, with a mass as low as 25 GeV, are viable in supergravity scenarios if the gaugino mass parameters at high energy are very non universal, while the scalar masses can remain of the same order. These scenarios typically predict a very small cross section of neutralinos off protons and neutrons, thereby being very challenging for direct detection experiments. However, a potential detection of smuons and selectrons at the LHC, together with a hypothetical discovery of a gamma-ray signal from neutralino annihilations in dwarf spheroidal galaxies could shed light on this kind of solutions. Finally, we have investigated the naturalness of these scenarios, taking into account all the potential sources of tuning. Besides the electroweak fine-tuning, we have found that the tuning to reproduce the correct DM relic abundance and that to match the measured Higgs mass can also be important when estimating the total degree of naturalness.
Intelligent Space Tube Optimization for speeding ground water remedial design.
Kalwij, Ineke M; Peralta, Richard C
2008-01-01
An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces the computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space-tube data, and optimizes using an advanced genetic algorithm (AGA) with the NSS. Stage 1 speeds the evaluation of assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include the space-tube radius and the number of strategies used to train the NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable to many heuristic optimization settings in which the numerical simulator is computationally intensive and one would like to reduce that burden.
Performance assessment of solid state actuators through a common procedure and comparison criteria
NASA Astrophysics Data System (ADS)
Reithler, Livier; Guedra-Degeorges, Didier
1998-07-01
The design of systems based on smart structure technologies for active shape and vibration control and high-precision positioning requires a good knowledge of the behavior of the active materials (electrostrictive and piezoelectric ceramics and polymers, magnetostrictive and shape memory alloys...) and of commercially available actuators. Extensive theoretical studies have been made of the behavior of active materials during the past decades, but there have been only a few experimental comparisons between different kinds of commercially available actuators. The purpose of this study is to identify the pertinent parameters for the design of such systems, to set up a common static test procedure for all types of actuators, and to define comparison criteria in terms of output force and displacement, mechanical and electrical energy, mass, and dimensions. After defining the pertinent parameters of the characterization and describing the resulting testing procedure, test results are presented for different types of actuators based on piezoceramics and magnetostrictive alloys. The performances of each actuator are compared through both the test results and the announced characteristics; to perform this comparison, absolute and relative criteria are chosen with aeronautical and space applications in mind.
Zheng, Lianqing; Chen, Mengen; Yang, Wei
2009-06-21
To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures. As usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly degrade the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimensional subset of the target system; then the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the motion of the collective variable, is likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.
Project WISH: The Emerald City
NASA Technical Reports Server (NTRS)
Oz, Hayrani; Slonksnes, Linda (Editor); Rogers, James W. (Editor); Sherer, Scott E. (Editor); Strosky, Michelle A. (Editor); Szmerekovsky, Andrew G. (Editor); Klupar, G. Joseph (Editor)
1990-01-01
The preliminary design of a permanently manned autonomous space oasis (PEMASO), including its pertinent subsystems, was performed during the 1990 winter and spring quarters. The purpose of the space oasis was defined, and the preliminary design work was started with emphasis placed on the study of orbital mechanics, power systems, and propulsion systems. A rotating torus was selected as the preliminary configuration, and the overall size, mass, and location of some subsystems within the station were addressed. Computer software packages were utilized to determine station transfer parameters and thus the preliminary propulsion requirements. Power and propulsion systems were researched to determine feasible configurations, and many conventional schemes were ruled out. Vehicle dynamics and control, mechanical, and life support systems were also studied. For each subsystem studied, the next step in the design process, to be performed during the continuation of the project, was also addressed.
Effects of anisotropic conduction and heat pipe interaction on minimum mass space radiators
NASA Technical Reports Server (NTRS)
Baker, Karl W.; Lund, Kurt O.
1991-01-01
Equations are formulated for the two dimensional, anisotropic conduction of heat in space radiator fins. The transverse temperature field was obtained by the integral method, and the axial field by numerical integration. A shape factor, defined for the axial boundary condition, simplifies the analysis and renders the results applicable to general heat pipe/conduction fin interface designs. The thermal results are summarized in terms of the fin efficiency, a radiation/axial conductance number, and a transverse conductance surface Biot number. These relations, together with those for mass distribution between fins and heat pipes, were used in predicting the minimum radiator mass for fixed thermal properties and fin efficiency. This mass is found to decrease monotonically with increasing fin conductivity. Sensitivities of the minimum mass designs to the problem parameters are determined.
Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas
2016-01-01
We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
Galileo 1989 VEEGA trajectory design. [Venus-Earth-Earth-Gravity-Assist
NASA Technical Reports Server (NTRS)
D'Amario, Louis A.; Byrnes, Dennis V.; Johannesen, Jennie R.; Nolan, Brian G.
1989-01-01
The new baseline for the Galileo Mission is a 1989 Venus-earth-earth gravity-assist (VEEGA) trajectory, which utilizes three gravity-assist planetary flybys in order to reduce launch energy requirements significantly compared to other earth-Jupiter transfer modes. The launch period occurs during October-November 1989. The total flight time is about 6 years, with November 1995 as the most likely choice for arrival at Jupiter. Optimal 1989 VEEGA trajectories have been generated for a wide range of earth launch dates and Jupiter arrival dates. Launch/arrival space contour plots are presented for various trajectory parameters, including propellant margin, which is used to measure mission performance. The accessible region of the launch/arrival space is defined by propellant margin and launch energy constraints; the available launch period is approximately 1.5 months long.
The weight hierarchies and chain condition of a class of codes from varieties over finite fields
NASA Technical Reports Server (NTRS)
Wu, Xinen; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
The generalized Hamming weights of linear codes were first introduced by Wei. These are fundamental parameters related to the minimal overlap structures of the subcodes, and they are very useful in several fields. It was found that the chain condition of a linear code is convenient for studying the generalized Hamming weights of product codes. In this paper we consider a class of codes defined over some varieties in projective spaces over finite fields, whose generalized Hamming weights can be determined by studying the orbits of subspaces of the projective spaces under the actions of classical groups over finite fields, i.e., the symplectic groups, the unitary groups, and the orthogonal groups. We give the weight hierarchies and generalized weight spectra of the codes from Hermitian varieties and prove that these codes satisfy the chain condition.
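For intuition, the first two generalized Hamming weights of a small binary code can be computed by brute force, using the fact that the support of a subcode equals the union of the supports of any basis of it. The sketch below is our illustration on the [7,3] binary simplex code, not a method from the paper:

```python
from itertools import combinations, product

def codewords(G):
    """All codewords of the binary code with generator matrix G (list of rows)."""
    n = len(G[0])
    words = set()
    for coeffs in product([0, 1], repeat=len(G)):
        words.add(tuple(sum(c * row[i] for c, row in zip(coeffs, G)) % 2
                        for i in range(n)))
    return words

def d1_d2(G):
    """First and second generalized Hamming weights by exhaustive search."""
    nz = [w for w in codewords(G) if any(w)]
    supp = lambda w: {i for i, x in enumerate(w) if x}
    d1 = min(len(supp(w)) for w in nz)
    # Over GF(2), two distinct nonzero words are linearly independent, and the
    # support of the 2-dim subcode they span is the union of their supports.
    d2 = min(len(supp(a) | supp(b)) for a, b in combinations(nz, 2))
    return d1, d2

G = [[1, 0, 0, 1, 1, 0, 1],   # generator matrix of the [7,3] binary simplex code
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1]]
print(d1_d2(G))               # (4, 6), matching d_r = 2^k - 2^(k-r) for k = 3
```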
Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target
NASA Astrophysics Data System (ADS)
Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.
2016-06-01
In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion; in particular, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total, the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach, we evaluate the accuracy of virtual image points projected back into 3D space.
A global fit of the MSSM with GAMBIT
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We study the seven-dimensional Minimal Supersymmetric Standard Model (MSSM7) with the new GAMBIT software framework, with all parameters defined at the weak scale. Our analysis significantly extends previous weak-scale, phenomenological MSSM fits, by adding more and newer experimental analyses, improving the accuracy and detail of theoretical predictions, including dominant uncertainties from the Standard Model, the Galactic dark matter halo and the quark content of the nucleon, and employing novel and highly-efficient statistical sampling methods to scan the parameter space. We find regions of the MSSM7 that exhibit co-annihilation of neutralinos with charginos, stops and sbottoms, as well as models that undergo resonant annihilation via both light and heavy Higgs funnels. We find high-likelihood models with light charginos, stops and sbottoms that have the potential to be within the future reach of the LHC. Large parts of our preferred parameter regions will also be accessible to the next generation of direct and indirect dark matter searches, making prospects for discovery in the near future rather good.
Korez, Robert; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž
2014-10-01
Gradual degeneration of intervertebral discs of the lumbar spine is one of the most common causes of low back pain. Although conservative treatment for low back pain may provide relief to most individuals, surgical intervention may be required for individuals with significant continuing symptoms, which is usually performed by replacing the degenerated intervertebral disc with an artificial implant. For designing implants with good bone contact and continuous force distribution, the morphology of the intervertebral disc space and vertebral body endplates is of considerable importance. In this study, we propose a method for parametric modeling of the intervertebral disc space in three dimensions (3D) and show its application to computed tomography (CT) images of the lumbar spine. The initial 3D model of the intervertebral disc space is generated according to the superquadric approach and therefore represented by a truncated elliptical cone, which is initialized by parameters obtained from 3D models of adjacent vertebral bodies. In an optimization procedure, the 3D model of the intervertebral disc space is incrementally deformed by adding parameters that provide a more detailed morphometric description of the observed shape, and aligned to the observed intervertebral disc space in the 3D image. By applying the proposed method to CT images of 20 lumbar spines, the shape and pose of each of the 100 intervertebral disc spaces were represented by a 3D parametric model. The resulting mean (±standard deviation) accuracy of modeling was 1.06 ± 0.98 mm in terms of radial Euclidean distance against manually defined ground truth points, with the corresponding success rate of 93% (i.e. 93 out of 100 intervertebral disc spaces were modeled successfully). As the resulting 3D models provide a description of the shape of intervertebral disc spaces in a complete parametric form, morphometric analysis was straightforwardly enabled and allowed the computation of the corresponding heights, widths and volumes, as well as of other geometric features that describe the shape of intervertebral disc spaces in detail.
NASA Technical Reports Server (NTRS)
Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.
2015-01-01
NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions, with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three primary third-party software tools offer their unique abilities in different stages of the simulation process. MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components in an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015. The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.
NASA Astrophysics Data System (ADS)
Márquez, I.; Lima Neto, G. B.; Capelato, H.; Durret, F.; Lanzoni, B.; Gerbal, D.
2001-12-01
In the present paper, we show that elliptical galaxies (Es) obey a scaling relation between potential energy and mass. Since they are relaxed systems in a post-violent-relaxation stage, they are quasi-equilibrium gravitational systems and therefore also have a quasi-constant specific entropy. Assuming that light traces mass, these two laws imply that in the space defined by the three Sérsic-law parameters (intensity Σ0, scale a, and shape ν), elliptical galaxies are distributed on two intersecting 2-manifolds: the Entropic Surface and the Energy-Mass Surface. Using a sample of 132 galaxies belonging to three nearby clusters, we have verified that ellipticals indeed follow these laws. This also implies that they are distributed along the intersection line (the Energy-Entropy line), and thus constitute a one-parameter family. These two physical laws (separately or combined) allow one to find the theoretical origin of several observed photometric relations, such as the correlation between absolute magnitude and effective surface brightness, and the fact that ellipticals are located on a surface in the [log Reff, -2.5 log Σ0, log ν] space. The fact that elliptical galaxies are a one-parameter family has important implications for cosmology and for galaxy formation and evolution models. Moreover, the Energy-Entropy line could be used as a distance indicator.
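In the parameterization suggested by the (Σ0, a, ν) triplet above, the Sérsic law reads Σ(r) = Σ0 exp[-(r/a)^ν]; a minimal helper for placing a galaxy in this three-parameter space (our sketch, with made-up parameter values):

```python
import numpy as np

def sersic(r, sigma0, a, nu):
    """Sersic surface-brightness profile in the (intensity, scale, shape) form."""
    return sigma0 * np.exp(-(r / a) ** nu)

# Hypothetical elliptical: central intensity 1.0, scale 0.5 kpc, shape nu = 0.4.
r = np.linspace(0.01, 10, 5)
print(sersic(r, 1.0, 0.5, 0.4))
```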
Slepoy, A; Peters, M D; Thompson, A P
2007-11-30
Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms.
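The verification task described above is easy to reproduce in miniature: generate energies from the Lennard-Jones pair potential, then let a search over a tiny space of candidate forms and parameters recover it. The sketch below is ours; the candidate family and the random-search loop are drastically simplified stand-ins for genetic programming with parallel tempering.

```python
import random

def lj(r, eps=1.0, sig=1.0):
    """Lennard-Jones pair potential, the 'ground truth' force field."""
    return 4 * eps * ((sig / r) ** 12 - (sig / r) ** 6)

rs = [0.9 + 0.05 * i for i in range(20)]             # pair distances (configurations)
target = [lj(r) for r in rs]

def candidate(r, A, m, B, n):
    """Tiny hypothesis space: power-law pair forms A/r^m - B/r^n."""
    return A / r ** m - B / r ** n

best, best_err = None, float("inf")
for _ in range(50000):                               # crude stand-in for GP + tempering
    A, B = random.uniform(0, 8), random.uniform(0, 8)
    m, n = random.randint(1, 14), random.randint(1, 14)
    err = sum((candidate(r, A, m, B, n) - t) ** 2 for r, t in zip(rs, target))
    if err < best_err:
        best, best_err = (A, m, B, n), err
print(best, best_err)   # tends toward A = B = 4, m = 12, n = 6
```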
Problem Space Matters: Evaluation of a German Enrichment Program for Gifted Children.
Welter, Marisete M; Jaarsveld, Saskia; Lachmann, Thomas
2018-01-01
We studied the development of cognitive abilities related to intelligence and creativity (N = 48, 6-10 years old), using a longitudinal design (over one school year), in order to evaluate an Enrichment Program for gifted primary school children initiated by the government of the German federal state of Rhineland-Palatinate (Entdeckertag Rheinland Pfalz, Germany; ET; Day of Discoverers). A group of German primary school children (N = 24), identified earlier as intellectually gifted and selected to join the ET program, was compared to a gender-, class- and IQ-matched group of control children that did not participate in this program. All participants performed the Standard Progressive Matrices (SPM) test, which measures intelligence in well-defined problem space; the Creative Reasoning Task (CRT), which measures intelligence in ill-defined problem space; and the test of creative thinking-drawing production (TCT-DP), which measures creativity, also in ill-defined problem space. Results revealed that problem space matters: the ET program is effective only for the improvement of intelligence operating in well-defined problem space. An effect was found for intelligence as measured by the SPM only, but neither for intelligence operating in ill-defined problem space (CRT) nor for creativity (TCT-DP). This suggests that, depending on the type of problem space presented, different cognitive abilities are elicited in the same child. Therefore, enrichment programs for gifted children, but also for children attending traditional schools, should provide opportunities to develop cognitive abilities related to intelligence, operating in both well- and ill-defined problem spaces, and to creativity in parallel, using an interactive approach.
Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey Dewayne
2004-01-01
The optimization tool described herein addresses and emphasizes the use of computer tools to model a system, focusing on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system and, more particularly, on the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations, and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here for the model results. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case minimizing Design, Development, Test and Evaluation (DDT&E) cost than for the case minimizing Gross Liftoff Weight. In turn, the DDT&E cost was 53% higher for the optimized Gross Liftoff Weight case than for the case minimizing DDT&E cost. In other words, a 53% increase in DDT&E cost corresponds to a Gross Liftoff Weight about 40% lower (the cost-optimized vehicle being 67% heavier, and 1 - 1/1.67 ≈ 0.40). Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors, and how these parameters differ when cost and weight are optimized separately. A key finding was that, for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the 5.2 that maximizes specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume, and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch, as compared to 3162 for the DDT&E cost optimization case; this range brackets the roughly 3000 pounds per square inch of the Space Shuttle Main Engine.
Positive signs in massive gravity
Cheung, Clifford; Remmen, Grant N.
2016-04-01
Here, we derive new constraints on massive gravity from unitarity and analyticity of scattering amplitudes. Our results apply to a general effective theory defined by Einstein gravity plus the leading soft diffeomorphism-breaking corrections. We calculate scattering amplitudes for all combinations of tensor, vector, and scalar polarizations. Furthermore, the high-energy behavior of these amplitudes prescribes a specific choice of couplings that ameliorates the ultraviolet cutoff, in agreement with existing literature. We then derive consistency conditions from analytic dispersion relations, which dictate positivity of certain combinations of parameters appearing in the forward scattering amplitudes. These constraints exclude all but a small island in the parameter space of ghost-free massive gravity. While the theory of the "Galileon" scalar mode alone is known to be inconsistent with positivity constraints, this is remedied in the full massive gravity theory.
GN/C translation and rotation control parameters for AR/C (category 2)
NASA Technical Reports Server (NTRS)
Henderson, David M.
1991-01-01
Detailed analysis of the Automatic Rendezvous and Capture problem indicates a need for three different regions of mathematical description for the GN&C algorithms: (1) multi-vehicle orbital mechanics to the rendezvous interface point, i.e., within 100 n. mi.; (2) relative motion solutions (such as the Clohessy-Wiltshire type) from the far-field to the near-field interface, i.e., within 1 n. mi.; and (3) close proximity motion, the near-field motion where the relative differences in the gravitational and orbital inertial accelerations can be neglected in the equations of motion. This paper defines the reference coordinate frames and control parameters necessary to model the relative motion and attitude of spacecraft in close proximity to another space system (Regions 2 and 3) during the Automatic Rendezvous and Capture phase of an orbit operation.
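For reference, the Clohessy-Wiltshire relative-motion equations mentioned for Region (2) read, in one common axis convention (textbook form, not reproduced from the paper):

```latex
% Linearized relative motion about a circular target orbit (mean motion n),
% with x radial, y along-track, z cross-track:
\ddot{x} - 2n\dot{y} - 3n^{2}x = 0, \qquad
\ddot{y} + 2n\dot{x} = 0, \qquad
\ddot{z} + n^{2}z = 0
```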
Universality in quantum chaos and the one-parameter scaling theory.
García-García, Antonio M; Wang, Jiao
2008-02-22
The one-parameter scaling theory is adapted to the context of quantum chaos. We define a generalized dimensionless conductance, g, semiclassically and then study Anderson localization corrections by renormalization group techniques. This analysis permits a characterization of the universality classes associated with a metal (g → ∞), an insulator (g → 0), and the metal-insulator transition (g → g_c) in quantum chaos, provided that the classical phase space is not mixed. According to our results, the universality class related to the metallic limit includes all the systems in which the Bohigas-Giannoni-Schmit conjecture holds but automatically excludes those in which dynamical localization effects are important. The universality class related to the metal-insulator transition is characterized by classical superdiffusion or a fractal spectrum in low dimensions (d ≤ 2). Several examples are discussed in detail.
Development of a composite geodetic structure for space construction, phase 2
NASA Technical Reports Server (NTRS)
1981-01-01
Primary physical and mechanical properties were defined for the pultruded hybrid HMS/E-glass P1700 rod material used for the fabrication of geodetic beams. Key properties established were used in the analysis, design, fabrication, instrumentation, and testing of a geodetic parameter cylinder and a lattice cone closeout joined to a short cylindrical geodetic beam segment. The required structural techniques were demonstrated. Analytical procedures were refined and extended to include the effect of rod dimensions for the helical and longitudinal members on local buckling, and the effect of different flexural and extensional moduli on general instability buckling.
Design of simplified maximum-likelihood receivers for multiuser CPM systems.
Bing, Li; Bai, Baoming
2014-01-01
A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.
Advanced defect classification by smart sampling, based on sub-wavelength anisotropic scatterometry
NASA Astrophysics Data System (ADS)
van der Walle, Peter; Kramer, Esther; Ebeling, Rob; Spruit, Helma; Alkemade, Paul; Pereira, Silvania; van der Donck, Jacques; Maas, Diederik
2018-03-01
We report on advanced defect classification using TNO's RapidNano particle scanner. RapidNano was originally designed for defect detection on blank substrates. In detection mode, the RapidNano signals from nine azimuth angles are added for sensitivity. In review mode, the signals from individual angles are analyzed to derive additional defect properties. We define a Fourier coefficient parameter space that is useful for studying the statistical variation in defect types on a sample. By selecting defects from each defect type for further review by SEM, information on all defects can be obtained efficiently.
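A minimal sketch of how a nine-angle azimuth signal might be condensed into low-order Fourier coefficients for such a classification space; the signal values are invented and the descriptor choice is illustrative.

```python
# Sketch: deriving defect-shape descriptors from per-azimuth scatter signals.
# The nine-angle signal vector and the use of low-order Fourier coefficients
# as a classification space follow the abstract; array contents are made up.
import numpy as np

signal = np.array([1.02, 1.10, 0.95, 0.88, 1.01, 1.15, 0.97, 0.90, 1.00])  # 9 azimuths
coeffs = np.fft.rfft(signal) / signal.size

# Isotropic (round) defects concentrate power in the DC term; anisotropic
# (scratch-like) defects show up in the low-order harmonics.
descriptor = np.abs(coeffs[:3])   # (DC, 1st, 2nd harmonic) -> Fourier coefficient space
print(descriptor)
```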
Development of NASA's Next Generation L-Band Digital Beamforming Synthetic Aperture Radar (DBSAR-2)
NASA Technical Reports Server (NTRS)
Rincon, Rafael; Fatoyinbo, Temilola; Osmanoglu, Batuhan; Lee, Seung-Kuk; Ranson, K. Jon; Marrero, Victor; Yeary, Mark
2014-01-01
NASA's next-generation Digital Beamforming SAR (DBSAR-2) is a state-of-the-art airborne L-band radar developed at the NASA Goddard Space Flight Center (GSFC). The instrument builds upon the advanced architectures in NASA's DBSAR-1 and EcoSAR instruments. The new instrument employs a 16-channel radar architecture characterized by multi-mode operation, software-defined waveform generation, digital beamforming, and configurable radar parameters. The instrument has been designed to support several disciplines in Earth and planetary sciences. The instrument was recently completed, then tested and calibrated in an anechoic chamber.
Information at the edge of chaos in fluid neural networks
NASA Astrophysics Data System (ADS)
Solé, Ricard V.; Miramontes, Octavio
1995-01-01
Fluid neural networks, defined as neural nets of mobile elements with random activation, are studied by means of several approaches. They are proposed as a theoretical framework for a wide class of systems such as insect societies, collectives of robots, or the immune system. The critical properties of this model are also analysed, showing the existence of a critical boundary in parameter space where maximum information transfer occurs. In this sense, this boundary is in fact an example of the "edge of chaos" in systems like those described in our approach. Recent experiments with ant colonies seem to confirm our result.
Two-stream instability with time-dependent drift velocity
Qin, Hong; Davidson, Ronald C.
2014-06-26
The classical two-stream instability driven by a constant relative drift velocity between two plasma components is extended to the case with time-dependent drift velocity. A solution method is developed to rigorously define and calculate the instability growth rate for linear perturbations relative to the time-dependent unperturbed two-stream motions. The stability diagrams for the oscillating two-stream instability are presented over a large region of parameter space. It is shown that the growth rate for the classical two-stream instability can be significantly reduced by adding an oscillatory component to the relative drift velocity.
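For orientation, the constant-drift limit that this work generalizes follows from the standard cold two-stream dispersion relation (textbook form, not taken from the paper itself):

```latex
% Cold two-stream dispersion relation for components drifting at v_1, v_2:
1 \;=\; \frac{\omega_{p1}^{2}}{(\omega - k v_{1})^{2}}
\;+\; \frac{\omega_{p2}^{2}}{(\omega - k v_{2})^{2}}
% Complex roots \omega(k) with \mathrm{Im}\,\omega > 0 give the classical
% growth rate; the paper's method generalizes this to time-dependent v_1 - v_2.
```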
Telecom 2-B and 2-C (TC2B and TC2C)
NASA Technical Reports Server (NTRS)
Dulac, J.; Alvarez, H.
1991-01-01
The DSN (Deep Space Network) mission support requirements for Telecom 2-B and 2-C (TC2B and TC2C) are summarized. These Telecom missions will provide high-speed data link applications, telephone, and television service between France and overseas territories as a follow-on to TC2A. Mission objectives are outlined and the DSN support requirements are defined through the presentation of tables and narratives describing the spacecraft flight profile; DSN support coverage; frequency assignments; support parameters for telemetry, command and support systems; and tracking support responsibility.
Implicit multiplane 3D camera calibration matrices for stereo image processing
NASA Astrophysics Data System (ADS)
McKee, James W.; Burgett, Sherrie J.
1997-12-01
By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually four to seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high-speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to the 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
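A toy sketch of the multiplane mapping idea, assuming simple 3x3 homographies per calibration plane (the paper's four matrices additionally fold in distortion and error compensation); all matrix values are placeholders.

```python
# Sketch of multiplane implicit calibration: per-plane 2D mappings send an
# image point onto two calibration planes, and the point's 3D projection
# line is the line through the two mapped points. Values are illustrative.
import numpy as np

def map_image_to_plane(H, uv):
    """Apply a homogeneous 2D mapping H to image point uv = (u, v)."""
    x = H @ np.array([uv[0], uv[1], 1.0])
    return x[:2] / x[2]

# Hypothetical mappings for calibration planes at z = 0 mm and z = 100 mm.
H0 = np.array([[0.10, 0.0, -20.0], [0.0, 0.10, -15.0], [0.0, 0.0, 1.0]])
H1 = np.array([[0.11, 0.0, -22.0], [0.0, 0.11, -16.5], [0.0, 0.0, 1.0]])

uv = (320.0, 240.0)                                # a 2D image point
p0 = np.append(map_image_to_plane(H0, uv), 0.0)    # 3D point on plane z = 0
p1 = np.append(map_image_to_plane(H1, uv), 100.0)  # 3D point on plane z = 100
direction = p1 - p0                                # projection line in 3D space
print(p0, direction)
```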
NASA Astrophysics Data System (ADS)
Duan, Y.; Durand, M. T.; Jezek, K. C.; Yardim, C.; Bringer, A.; Aksoy, M.; Johnson, J. T.
2017-12-01
The ultra-wideband software-defined microwave radiometer (UWBRAD) is designed to provide an ice sheet internal temperature product by measuring low-frequency microwave emission. Twelve channels ranging from 0.5 to 2.0 GHz are covered by the instrument. An airborne demonstration over Greenland was flown in September 2016, providing the first ultra-wideband radiometer observations of geophysical scenes, including ice sheets. Another flight is planned for September 2017 to acquire measurements over the central ice sheet. A Bayesian framework is designed to retrieve the ice sheet internal temperature from simulated UWBRAD brightness temperature (Tb) measurements over the Greenland flight path with limited prior information about the ground. A 1-D heat-flow model, the Robin model, was used to model the ice sheet internal temperature profile from ground information. Synthetic UWBRAD Tb observations were generated via the partially coherent radiation transfer model, which takes the Robin model temperature profile and an exponential fit of ice density from borehole measurements as input, and were corrupted with noise. The effective surface temperature, the geothermal heat flux, the variance of upper-layer ice density, and the variance of fine-scale density variation in the deeper ice sheet were treated as unknown variables within the retrieval framework. Each parameter is defined over its possible range and assumed to be uniformly distributed. The Markov Chain Monte Carlo (MCMC) approach is applied to let the unknown parameters randomly walk in the parameter space. We investigate whether the variables can be improved over their priors using the MCMC approach and contribute to the temperature retrieval theoretically. UWBRAD measurements acquired near Camp Century in 2016 were also treated with the MCMC approach to examine the framework in the presence of scattering effects. The fine-scale density fluctuation is an important parameter: it is the most sensitive yet most poorly known parameter in the estimation framework, and including it greatly improved the retrieval results. The ice sheet vertical temperature profile, especially the 10 m temperature, can be well retrieved via the MCMC process. Future retrieval work will apply the Bayesian approach to UWBRAD airborne measurements.
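A minimal Metropolis random-walk sketch of this kind of retrieval, with a stand-in forward model and hypothetical parameter bounds in place of the radiative transfer model and the paper's unknowns.

```python
# Metropolis random walk with uniform priors, sketching the retrieval idea:
# forward model, bounds, and noise level are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
bounds = {"surface_temp": (220.0, 260.0), "heat_flux": (0.03, 0.07)}  # hypothetical

def forward_model(theta):
    """Stand-in for the partially coherent radiative transfer model."""
    ts, q = theta
    return ts - 1000.0 * q * np.arange(12) / 12.0   # fake 12-channel Tb

tb_obs = forward_model(np.array([245.0, 0.05])) + rng.normal(0, 0.5, 12)

def log_post(theta):
    for v, (lo, hi) in zip(theta, bounds.values()):
        if not (lo <= v <= hi):
            return -np.inf                      # uniform prior support
    resid = tb_obs - forward_model(theta)
    return -0.5 * np.sum((resid / 0.5) ** 2)    # Gaussian noise, sigma = 0.5 K

theta, chain = np.array([240.0, 0.05]), []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.5, 0.001])  # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta.copy())
print(np.mean(chain[5000:], axis=0))            # posterior means after burn-in
```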
Interactive model evaluation tool based on IPython notebook
NASA Astrophysics Data System (ADS)
Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet
2015-04-01
In hydrological modelling, some kind of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate the model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to visualise, along with an objective function and a time period of interest. Based on this information, a two-dimensional parameter response surface is created: a scatter plot of the parameter combinations with a color scale corresponding to the goodness of fit of each combination. Finally, a slider is available to change the color mapping of the points: it provides a threshold to exclude non-behavioural parameter sets, and the color scale is only attributed to the remaining parameter sets. As such, by interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function and of periods of high information content can be made. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
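A condensed sketch of the notebook interaction described above, using ipywidgets; the parameter names, fake model runs, and objective values are placeholders for the loaded simulation results.

```python
# Interactive response surface in a notebook: scatter of parameter
# combinations colored by objective value, with a threshold slider that
# excludes non-behavioural sets. Data below are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider

rng = np.random.default_rng(2)
n_runs = 500
k_soil = rng.uniform(0, 1, n_runs)
k_river = rng.uniform(0, 1, n_runs)
rmse = (np.abs(k_soil - 0.4) + 0.5 * np.abs(k_river - 0.7)
        + rng.normal(0, 0.05, n_runs))          # fake goodness-of-fit per run

@interact(threshold=FloatSlider(min=0.05, max=1.0, step=0.05, value=1.0))
def response_surface(threshold):
    keep = rmse <= threshold                    # slider excludes non-behavioural sets
    plt.scatter(k_soil[~keep], k_river[~keep], c="lightgrey", s=10)
    plt.scatter(k_soil[keep], k_river[keep], c=rmse[keep], cmap="viridis_r", s=15)
    plt.colorbar(label="objective (RMSE)")
    plt.xlabel("k_soil"); plt.ylabel("k_river")
```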
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces offer an alternative to discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the workflow, we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This strategy is best demonstrated for two input parameters but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented in r.randomwalk. We repeat the optimization procedure with conservative and non-conservative assumptions for a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist of (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
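A minimal sketch of the impact indicator index (III) computation, with a trivial stand-in for r.randomwalk's flow propagation; the ranges and the toy runout rule are invented.

```python
# Impact indicator index (III): the fraction of sampled parameter
# combinations that predict an impact on each pixel.
import numpy as np

rng = np.random.default_rng(3)
friction_range = (0.05, 0.3)      # basal friction coefficient subrange (illustrative)
mass_to_drag = (100.0, 1000.0)    # mass-to-drag ratio subrange (illustrative)

def runout_mask(friction, m2d):
    """Stand-in flow model: impact within a runout radius of the release cell."""
    radius = 2.0 + 0.02 * m2d * (0.35 - friction)
    yy, xx = np.mgrid[:50, :50]
    return (yy - 5) ** 2 + (xx - 25) ** 2 <= radius ** 2

n = 200
impact_counts = np.zeros((50, 50))
for _ in range(n):
    f = rng.uniform(*friction_range)
    m = rng.uniform(*mass_to_drag)
    impact_counts += runout_mask(f, m)

iii = impact_counts / n           # impact indicator index per pixel
print(iii.max(), (iii > 0.5).sum())
```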
Mechanically tunable actin networks using programmable DNA based cross-linkers
NASA Astrophysics Data System (ADS)
Schnauss, Joerg; Lorenz, Jessica; Schuldt, Carsten; Kaes, Josef; Smith, David
Cells employ multiple cross-linkers with very different properties. Studies of the entire phase space, however, have been infeasible since they were restricted to naturally occurring cross-linkers, components that cannot be controllably varied and that differ in many parameters. We resolve this limitation by forming artificial actin cross-linkers that can be controllably varied. The basic building block is DNA, enabling well-defined length variation. DNA can be attached to actin-binding peptides with known binding affinities. We used bulk rheology to investigate the mechanical properties of these networks. We were able to reproduce the mechanical features of actin networks cross-linked by fascin by using a short version of our artificial complex with a high binding affinity. Additionally, we were able to reproduce findings for the cross-linker alpha-actinin by employing a long cross-linker with a low binding affinity. Between these natural limits we investigated three different cross-linker lengths, each with two different binding affinities. With these controlled variations we are able to precisely screen the phase space of cross-linked actin networks by changing only one specific parameter, and not the entire set of properties as in the case of naturally occurring cross-linking complexes.
Exploring the free energy surface using ab initio molecular dynamics
NASA Astrophysics Data System (ADS)
Samanta, Amit; Morales, Miguel A.; Schwegler, Eric
2016-04-01
Efficient exploration of configuration space and identification of metastable structures in condensed phase systems are challenging from both computational and algorithmic perspectives. In this regard, schemes that utilize a set of pre-defined order parameters to sample the relevant parts of the configuration space [L. Maragliano and E. Vanden-Eijnden, Chem. Phys. Lett. 426, 168 (2006); J. B. Abrams and M. E. Tuckerman, J. Phys. Chem. B 112, 15742 (2008)] have proved useful. Here, we demonstrate how these order-parameter aided temperature accelerated sampling schemes can be used within the Born-Oppenheimer and the Car-Parrinello frameworks of ab initio molecular dynamics to efficiently and systematically explore free energy surfaces, and search for metastable states and reaction pathways. We have used these methods to identify the metastable structures and reaction pathways in SiO2 and Ti. In addition, we have used the string method [W. E, W. Ren, and E. Vanden-Eijnden, Phys. Rev. B 66, 052301 (2002); L. Maragliano et al., J. Chem. Phys. 125, 024106 (2006)] within the density functional theory to study the melting pathways in the high pressure cotunnite phase of SiO2 and the hexagonal closed packed to face centered cubic phase transition in Ti.
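For orientation, the temperature-accelerated schemes cited above extend the potential by harmonically coupling the pre-defined order parameters to auxiliary variables evolved at an elevated temperature (schematic form following the cited literature):

```latex
% Extended potential coupling atomic coordinates x to auxiliary variables z
% through pre-defined order parameters \theta_j(x):
U_\kappa(x, z) \;=\; V(x) \;+\; \sum_{j} \frac{\kappa}{2}\,
\bigl(\theta_j(x) - z_j\bigr)^{2}
% For large \kappa, the z-dynamics at an elevated temperature \bar{T} samples
% the free energy surface F(z) defined by the order parameters \theta_j.
```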
Dynamics of a neuron model in different two-dimensional parameter-spaces
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams obtained numerically for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters, a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that regions exist close to this chaotic region, separated by the comb teeth, that organize themselves in period-adding bifurcation cascades.
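A rough sketch of such a two-dimensional parameter-plane scan for the standard three-variable Hindmarsh-Rose model; the scanned pair, the ranges, and the crude spike-count diagnostic are illustrative choices, not the paper's.

```python
# Coarse scan of a (b, I) parameter plane for the Hindmarsh-Rose neuron.
# A simple spike count stands in for the Lyapunov-based diagnostics usually
# used to color chaotic vs. periodic regions in such diagrams.
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, u, a, b, c, d, r, s, x_r, I):
    x, y, z = u
    return [y - a * x**3 + b * x**2 - z + I,   # membrane potential
            c - d * x**2 - y,                  # fast recovery variable
            r * (s * (x - x_r) - z)]           # slow adaptation current

def spike_count(b, I):
    sol = solve_ivp(hindmarsh_rose, (0, 1000), [-1.6, 0, 0],
                    args=(1.0, b, 1.0, 5.0, 0.006, 4.0, -1.6, I),
                    max_step=0.5, rtol=1e-6)
    x = sol.y[0][sol.t > 500]                  # discard the transient
    return int(np.sum((x[1:] >= 1.0) & (x[:-1] < 1.0)))  # upward crossings

for b in np.linspace(2.5, 3.2, 4):             # coarse 4x4 slice of the plane
    print([spike_count(b, I) for I in np.linspace(2.0, 3.5, 4)])
```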
Color-Space Outliers in DPOSS: Quasars and Peculiar Objects
NASA Astrophysics Data System (ADS)
Djorgovski, S. G.; Gal, R. R.; Mahabal, A.; Brunner, R.; Castro, S. M.; Odewahn, S. C.; de Carvalho, R. R.; DPOSS Team
2000-12-01
The processing of DPOSS, a digital version of the POSS-II sky atlas, is now nearly complete. The resulting Palomar-Norris Sky Catalog (PNSC) is expected to contain > 5 x 10^7 galaxies and > 10^9 stars, including large numbers of quasars and other unresolved sources. For objects morphologically classified as stellar (i.e., PSF-like), colors and magnitudes provide the only additional source of discriminating information. We investigate the distribution of objects in the parameter space of (g-r) and (r-i) colors as a function of magnitude. Normal stars form a well-defined (temperature) sequence in this parameter space, and we explore the nature of the objects which deviate significantly from this stellar locus. The causes of the deviations include: non-thermal or peculiar spectra, intergalactic absorption (for high-z quasars), the presence of strong emission lines in one or more of the bandpasses, or strong variability (because the plates are taken at widely separated epochs). In addition to minor contamination by misclassified compact galaxies, we find the following: (1) Quasars at z > 4; to date, ~ 100 of these objects have been found and used for a variety of follow-up studies. They are made publicly available immediately after discovery through http://astro.caltech.edu/~george/z4.qsos. (2) Type-2 quasars in the redshift interval z ~ 0.31 - 0.38. (3) Other quasars, starburst and emission-line galaxies, and emission-line stars. (4) Objects with highly peculiar spectra, some or all of which may be rare subtypes of BAL QSOs. (5) Highly variable stars and optical transients, some of which may be GRB "orphan afterglows". To date, systematic searches have been made only for (1) and (2); the other types of objects were found serendipitously. However, we plan to explore systematically all of the statistically significant outliers in this parameter space. This illustrates the potential of large digital sky surveys for the discovery of rare types of objects, both known (e.g., high-z quasars) and as yet unknown.
NASA Astrophysics Data System (ADS)
Molz, F. J.; Faybishenko, B.; Jenkins, E. W.
2012-12-01
Mass and energy fluxes within the soil-plant-atmosphere continuum are highly coupled and inherently nonlinear. The main focus of this presentation is to demonstrate the results of numerical modeling of a system of 4 coupled, nonlinear ordinary differential equations (ODEs), which are used to describe the long-term rhizosphere processes of soil microbial dynamics, including the competition between nitrogen-fixing bacteria and those unable to fix nitrogen, along with substrate concentration (nutrient supply) and oxygen concentration. Modeling results demonstrate the synchronized patterns of temporal oscillations of competing microbial populations, which are affected by carbon and oxygen concentrations. The temporal dynamics and amplitude of the root exudation process serve as a driving force for microbial and geochemical phenomena, and lead to the development of Gompertzian dynamics, synchronized oscillations, and phase-space attractors of microbial populations and carbon and oxygen concentrations. The nonlinear dynamic analysis of time-series concentrations from the solution of the ODEs was used to identify several types of phase-space attractors, which appear to depend on the parameters of the exudation function and the Monod kinetic parameters. This phase-space analysis was conducted by assessing the global and local embedding dimensions, correlation time, capacity and correlation dimensions, and Lyapunov exponents of the calculated model variables defining the phase space. Such results can be used for planning experimental and theoretical studies of biogeochemical processes in the fields of plant nutrition, phyto- and bio-remediation, and other ecological areas.
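A heavily simplified sketch of this class of model, assuming generic Monod growth terms and an oscillatory exudation forcing; all functional forms and coefficients are invented for illustration and are not the authors' equations.

```python
# Generic sketch of a coupled rhizosphere ODE system: two competing microbial
# populations, a substrate pool driven by oscillatory root exudation, and
# oxygen. Every term and coefficient is an illustrative assumption.
import numpy as np
from scipy.integrate import solve_ivp

def rhizosphere(t, u):
    n_fix, n_non, substrate, oxygen = u
    exudation = 1.0 + 0.5 * np.sin(2 * np.pi * t / 24.0)   # diurnal driving force
    mu_fix = 0.4 * substrate / (0.5 + substrate) * oxygen / (0.2 + oxygen)
    mu_non = 0.5 * substrate / (0.8 + substrate) * oxygen / (0.2 + oxygen)
    return [mu_fix * n_fix - 0.1 * n_fix,                   # N-fixing bacteria
            mu_non * n_non - 0.1 * n_non,                   # non-fixing bacteria
            exudation - (mu_fix * n_fix + mu_non * n_non),  # substrate balance
            0.3 * (1.0 - oxygen) - 0.2 * (mu_fix * n_fix + mu_non * n_non)]

sol = solve_ivp(rhizosphere, (0, 480), [0.1, 0.1, 1.0, 1.0], max_step=0.1)
print(sol.y[:, -1])   # state after 480 h; phase-space analysis would follow
```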
Detection of multiple airborne targets from multisensor data
NASA Astrophysics Data System (ADS)
Foltz, Mark A.; Srivastava, Anuj; Miller, Michael I.; Grenander, Ulf
1995-08-01
Previously we presented a jump-diffusion based random sampling algorithm for generating conditional mean estimates of scene representations for the tracking and recognition of maneuvering airborne targets. These representations include target positions and orientations along their trajectories and the target type associated with each trajectory. Taking a Bayesian approach, a posterior measure is defined on the parameter space by combining sensor models with a sophisticated prior based on nonlinear airplane dynamics. The jump-diffusion algorithm constructs a Markov process which visits the elements of the parameter space with frequencies proportional to the posterior probability. It constitutes both the infinitesimal, local search via a sample-path continuous diffusion transform and the larger, global steps through discrete jump moves. The jump moves involve the addition and deletion of elements from the scene configuration or changes in the target type associated with each target trajectory. One such move results in target detection by the addition of a track seed to the inference set. This provides initial track data for the tracking/recognition algorithm to estimate linear graph structures representing tracks using the other jump moves and the diffusion process, as described in our earlier work. Target detection ideally involves a continuous search over a continuum of the observation space. In this work we conclude that for practical implementations the search space must be discretized with lattice granularity comparable to the sensor resolution, and discuss how fast Fourier transforms are utilized for efficient calculation of sufficient statistics given our array models. Some results are also presented from our implementation on a networked system including a massively parallel machine architecture and a Silicon Graphics Onyx workstation.
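A small sketch of the FFT shortcut mentioned above: evaluating a matched-filter statistic over an entire discretized observation lattice via fast convolution; the signal model and sizes are illustrative.

```python
# Matched-filter sufficient statistic over every cell of a discretized
# observation space at once, via FFT cross-correlation in O(n log n).
import numpy as np

rng = np.random.default_rng(4)
n = 1024
template = np.exp(-0.5 * ((np.arange(64) - 32) / 6.0) ** 2)   # sensor response
scene = rng.normal(0, 1, n)
scene[500:564] += 3.0 * template                              # hidden target

# Circular cross-correlation of scene with template for all shifts:
stat = np.fft.irfft(np.fft.rfft(scene) * np.conj(np.fft.rfft(template, n)), n)
print(int(np.argmax(stat)))   # peak near 500 flags the detection lattice cell
```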
Parametric cost estimation for space science missions
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost-estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up," "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs of future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
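A minimal sketch of a mass-based, power-law CER with Monte Carlo propagation of input uncertainty; the coefficients and mass distribution are invented for illustration and are not Northrop Grumman's values.

```python
# Power-law cost-estimating relationship (CER) of the form commonly used in
# parametric models: cost = a * mass**b, with uncertainty propagated into a
# cost range Monte Carlo style. All numbers are hypothetical.
import numpy as np

def cer_cost(mass_kg, a=2.5, b=0.7):
    """First-order payload cost estimate in $M from dry mass (hypothetical)."""
    return a * mass_kg ** b

rng = np.random.default_rng(5)
mass_samples = rng.normal(450.0, 60.0, 10000)      # uncertain mass estimate, kg
costs = cer_cost(mass_samples)
print(np.percentile(costs, [20, 50, 80]))          # cost range and most probable
```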
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.
Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A
2014-10-01
Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructures the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data are resampled according to radial sampling and parallel ray-spacing parameters, making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrates a trade-off between computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogeneous water phantoms with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy, with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU-computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
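For reference, the 2%/2 mm gamma criterion used above combines dose difference and distance-to-agreement in the standard form (general formulation, not specific to this paper):

```latex
% Gamma index at measurement point r_m against reference points r_c:
\gamma(r_m) \;=\; \min_{r_c}\,
\sqrt{\frac{\lVert r_c - r_m \rVert^{2}}{\Delta d_{M}^{2}}
\;+\; \frac{\bigl(D_c(r_c) - D_m(r_m)\bigr)^{2}}{\Delta D_{M}^{2}}}
% A voxel passes when \gamma \le 1, here with \Delta D_M = 2\% dose difference
% and \Delta d_M = 2\,\mathrm{mm} distance-to-agreement.
```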
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity that helps us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that, under three different frequencies of acupuncture stimulation, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
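A minimal leaky integrate-and-fire sketch of the response model named above; the parameter values are textbook-style placeholders rather than the paper's fitted acupuncture inputs.

```python
# Leaky integrate-and-fire (LIF): the membrane integrates an input current
# and emits a spike on reaching threshold, then resets.
import numpy as np

def lif_spike_train(i_input, dt=0.1, tau=10.0, v_rest=-65.0,
                    v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Integrate dV/dt = (-(V - v_rest) + r_m * I) / tau; return spike times."""
    v, spikes = v_rest, []
    for k, i_k in enumerate(i_input):
        v += dt * (-(v - v_rest) + r_m * i_k) / tau
        if v >= v_thresh:
            spikes.append(k * dt)
            v = v_reset
    return spikes

rng = np.random.default_rng(6)
current = 1.8 + 0.3 * rng.normal(size=5000)     # noisy 500 ms input drive
print(len(lif_spike_train(current)))            # output spike count
```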
Chambers, T; Pearson, A L; Kawachi, I; Rzotkiewicz, Z; Stanley, J; Smith, M; Barr, M; Ni Mhurchu, C; Signal, L
2017-11-01
Defining the boundary of children's 'neighborhoods' has important implications for understanding the contextual influences on child health. Additionally, insight into activities that occur outside people's neighborhoods may indicate exposures that place-based studies cannot detect. This study aimed to 1) extend current neighborhood research, using data from wearable cameras and GPS devices that were worn over several days in an urban setting; 2) define the boundary of children's neighborhoods by using leisure-time activity space data; and 3) determine the destinations visited by children in their leisure time, outside their neighborhoods. One hundred and fourteen children (mean age 12 y) from Wellington, New Zealand wore wearable cameras and GPS recorders. Residential Euclidean buffers at incremental distances were paired with GPS data (thereby identifying time spent in different places) to explore alternative definitions of neighborhood boundaries. The children's neighborhood boundary fell at 500 m. A newly developed software application was used to identify 'destinations' visited outside the neighborhood by specifying space-time parameters. Image data from the wearable cameras were used to determine the type of destination. Children spent over half of their leisure time within 500 m of their homes. Children left their neighborhood predominantly to visit school (for leisure purposes), other residential locations (e.g. to visit friends) and food retail outlets (e.g. convenience stores, fast food outlets). Children spent more time at food retail outlets than at structured sport and outdoor recreation locations combined. Person-centered neighborhood definitions may better represent children's everyday experiences and neighborhood exposures than previous methods based on place-based measures. As schools and other residential locations (friends and family) are important destinations outside the neighborhood, such destinations should be taken into account. The combination of image data and activity space GPS data provides a more robust approach to understanding children's neighborhoods and activity spaces. Copyright © 2017 Elsevier Ltd. All rights reserved.
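A small sketch of pairing residential Euclidean buffers with GPS fixes to estimate time shares at incremental distances; the coordinates are synthetic, and the real analysis would additionally filter for leisure time.

```python
# Share of GPS fixes falling within incremental Euclidean buffers of home.
# Synthetic projected coordinates stand in for the study's GPS records.
import numpy as np

rng = np.random.default_rng(7)
home = np.array([0.0, 0.0])                       # projected metres
fixes = rng.normal(0.0, 600.0, size=(2000, 2))    # synthetic GPS fixes

dist = np.linalg.norm(fixes - home, axis=1)
for radius in (250, 500, 1000, 2000):             # incremental buffers
    share = np.mean(dist <= radius)
    print(f"{radius:>5} m buffer: {share:.0%} of leisure time")
```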
Planetary and Space Simulation Facilities (PSI) at DLR
NASA Astrophysics Data System (ADS)
Panitz, Corinna; Rabbow, E.; Rettberg, P.; Kloss, M.; Reitz, G.; Horneck, G.
2010-05-01
The Planetary and Space Simulation facilities (PSI) at DLR offer the possibility to expose biological and physical samples, individually or integrated into space hardware, to defined and controlled space conditions such as ultra-high vacuum, low temperature, and extraterrestrial UV radiation. An X-ray facility is at our disposal for simulating the ionizing component. All of the simulation facilities are required for the preparation of space experiments: for testing the newly developed space hardware; for investigating the effect of different space parameters on biological systems in preparation for the flight experiment; for performing the 'Experiment Verification Tests' (EVTs) for the specification of the test parameters; and for 'Experiment Sequence Tests' (ESTs), simulating sample assembly, exposure to selected space parameters, and sample disassembly. To test the compatibility of the different biological and chemical systems and their adaptation to the opportunities and constraints of space conditions, a profound ground support program has been developed, among many others for the ESA facilities of the ongoing missions EXPOSE-R and EXPOSE-E on board the International Space Station (ISS). Several experiment verification tests (EVTs) and an experiment sequence test (EST) have been conducted in the carefully equipped and monitored planetary and space simulation facilities (PSI) of the Institute of Aerospace Medicine at DLR in Cologne, Germany. These ground-based pre-flight studies allowed the investigation of a much wider variety of samples and the selection of the most promising organisms for the flight experiment. EXPOSE-E was attached to the outer balcony of the European Columbus module of the ISS in February 2008 and stayed for 1.5 years in space; EXPOSE-R was attached to the Russian Zvezda module of the ISS in spring 2009, with a planned mission duration of approximately 1.5 years. The missions will give new insights into the survivability of terrestrial organisms in space and will contribute to the understanding of organic chemistry processes in space, biological adaptation strategies to extreme conditions, e.g. on the early Earth and Mars, and the distribution of life beyond its planet of origin. The results gained during the simulation experiments demonstrated that mission preparation is a basic requirement for successful and significant results of every space flight experiment. Hence, the mission preparation program performed in the context of the space missions EXPOSE-E and EXPOSE-R proved the outstanding importance of, and accentuated the need for, ground-based experiments before and during a space mission. The facilities are also necessary for performing the ground control experiment during the mission, the so-called Mission Simulation Test (MST), under simulated space conditions, by parallel exposure of samples to simulated space parameters according to flight data received by telemetry. Finally, the facilities also provide the possibility to simulate the surface and climate conditions of the planet Mars. In this way they offer the possibility to investigate, under simulated Mars conditions, the chances for the development of life on Mars and to gain prior knowledge for the search for life on present-day Mars and, in this context, especially the parameters for a manned mission to Mars. References: [1] Rabbow E, Rettberg P, Panitz C, Drescher J, Horneck G, Reitz G (2005) SSIOUX - Space Simulation for Investigating Organics, Evolution and Exobiology, Adv. Space Res. 36(2), 297-302, doi:10.1016/j.asr.2005.08.040. [2] Fekete A, Modos K, Hegedüs M, Kovacs G, Ronto Gy, Peter A, Lammer H, Panitz C (2005) DNA damage under simulated extraterrestrial conditions in bacteriophage T7, Adv. Space Res., 305-310. [3] Cockell C, Schuerger AC, Billi D, Friedmann EI, Panitz C (2005) Effects of a simulated Martian UV flux on the cyanobacterium Chroococcidiopsis sp. 029, Astrobiology 5(2), 127-140. [4] de la Torre Noetzel R, Sancho LG, Pintado A, Rettberg P, Rabbow E, Panitz C, Deutschmann U, Reina M, Horneck G (2007) BIOPAN experiment LICHENS on the Foton M2 mission: Pre-flight verification tests of the Rhizocarpon geographicum-granite ecosystem, Adv. Space Res. 40, 1665-1671, doi:10.1016/j.asr.2007.02.022.
Exploring the Role of Space-Defining Objects in Constructing and Maintaining Imagined Scenes
ERIC Educational Resources Information Center
Mullally, Sinead L.; Maguire, Eleanor A.
2013-01-01
It has recently been observed that certain objects, when viewed or imagined in isolation, evoke a strong sense of three-dimensional local space surrounding them (space-defining (SD) objects), while others do not (space-ambiguous (SA) objects), and this is associated with engagement of the parahippocampal cortex (PHC). But activation of the PHC is…
Anatomical nuances of the internal carotid artery in relation to the quadrangular space.
Dolci, Ricardo L L; Ditzel Filho, Leo F S; Goulart, Carlos R; Upadhyay, Smita; Buohliqah, Lamia; Lazarini, Paulo R; Prevedello, Daniel M; Carrau, Ricardo L
2018-01-01
OBJECTIVE The aim of this study was to evaluate the anatomical variations of the internal carotid artery (ICA) in relation to the quadrangular space (QS) and to propose a classification system based on the results. METHODS A total of 44 human cadaveric specimens were dissected endonasally under direct endoscopic visualization. During the dissection, the anatomical variations of the ICA and their relationship with the QS were noted. RESULTS The space between the paraclival ICAs (i.e., intercarotid space) can be classified as 1 of 3 different shapes (i.e., trapezoid, square, or hourglass) based on the trajectory of the ICAs. The ICA trajectories also directly influence the volumetric area of the QS. Based on its geometry, the QS was classified as one of the following: 1) Type A has the smallest QS area and is associated with a trapezoid intercarotid space, 2) Type B corresponds to the expected QS area (not minimized or enlarged) and is associated with a square intercarotid space, and 3) Type C has the largest QS area and is associated with an hourglass intercarotid space. CONCLUSIONS The different trajectories of the ICAs can modify the area of the QS and may be an essential parameter to consider for preoperative planning and defining the most appropriate corridor to reach Meckel's cave. In addition, ICA trajectories should be considered prior to surgery to avoid injuring the vessels.
NASA Astrophysics Data System (ADS)
Finaeva, O.
2017-11-01
The article presents a brief analysis of the factors that influence the development of an urban green space system: territorial and climatic conditions, cultural and historical background, and the modern strategy of historic city development. The introduction defines the concepts of urban greening, green spaces, and green space distribution. The environmental parameters influenced by green spaces are determined. Using Italian cities as examples, the principles of urban greening system development are considered: the historical aspects of the formation of urban greening systems in Italian cities are analyzed, the role of green spaces in shaping the structure of the urban environment and creating a favorable microclimate is determined, and a set of measures aimed at its improvement is highlighted. The modern principles of urban greening system development and their characteristic features are considered. Special attention is paid to the interrelation of architectural and green structures in creating a favorable microclimate and psychological comfort in the urban environment; various methods of greening are considered through examples of existing architectural complexes, depending on the climate of the area and the landscape features. Examples of plant selection and the application of compositional techniques are given. The results set out the basic principles for developing an urban green space system. The conclusion summarizes the techniques aimed at improving the microclimate in the urban environment.
How to classify plantar plate injuries: parameters from history and physical examination.
Nery, Caio; Coughlin, Michael; Baumfeld, Daniel; Raduan, Fernando; Mann, Tania Szejnfeld; Catena, Fernanda
2015-01-01
To find the best clinical parameters for defining and classifying the degree of plantar plate injuries, sixty-eight patients (100 metatarsophalangeal joints) were classified in accordance with the Arthroscopic Anatomical Classification for plantar plate injuries and were divided into five groups (0 to IV). Their medical files were reviewed and the incidence of each parameter in the respective group was correlated. These parameters were: use of high heels, sports, acute pain, local edema, Mulder's sign, widening of the interdigital space, pain at the head of the corresponding metatarsal, "touching the ground", the "drawer test", toe grip, and toe deformities (in the sagittal, coronal and transversal planes). There were no statistically significant associations between the degree of injury and use of high-heeled shoes, sports trauma, pain at the head of the metatarsal, Mulder's sign, deformity in pronation, or displacement in the transversal and sagittal planes (although their combination, i.e. "cross toe", showed a statistically significant correlation). Positive correlations with the severity of the injuries were found for initial acute pain, progressive widening of the interdigital space, loss of "touching the ground", positive "drawer test" results at the metatarsophalangeal joint, diminished grip strength, and toe deformity in supination. The "drawer test" proved to be the most reliable and precise tool for classifying the degree of plantar plate injury, followed by "touching the ground" and rotational deformities. Combining the clinical history with data from the physical examination makes it possible to improve the precision of the diagnosis and the predictions of the anatomical classification of plantar plate injuries.
Hands-on parameter search for neural simulations by a MIDI-controller.
Eichner, Hubert; Borst, Alexander
2011-01-01
Computational neuroscientists frequently encounter the challenge of parameter fitting: exploring a usually high-dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is to use automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer from several shortcomings related to their lack of understanding of the underlying question, such as the difficulty of defining a suitable error function or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are then tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback on if and how results are affected by specific parameter changes. In addition to being used in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley neurons or dynamical systems.
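A minimal sketch of the knob-to-parameter loop described above, written against the open-source mido MIDI library (an assumption; the paper does not name its software stack). The controller numbers, parameter names, and ranges are hypothetical, and run_model() stands in for the fast simulate-and-plot step:

    import mido

    params = {"g_Na": 120.0, "g_K": 36.0, "g_leak": 0.3}     # hypothetical model parameters
    cc_map = {1: "g_Na", 2: "g_K", 3: "g_leak"}              # MIDI CC number -> parameter
    ranges = {"g_Na": (0.0, 240.0), "g_K": (0.0, 72.0), "g_leak": (0.0, 1.0)}

    def run_model(p):
        """Placeholder for a sub-second neural simulation plus plotting."""
        print("simulated with", p)

    with mido.open_input() as port:                          # default MIDI input device
        for msg in port:
            if msg.type == "control_change" and msg.control in cc_map:
                name = cc_map[msg.control]
                lo, hi = ranges[name]
                params[name] = lo + (hi - lo) * msg.value / 127.0   # 7-bit CC value
                run_model(params)                            # re-simulate on every knob turn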
Using Natural Language to Enable Mission Managers to Control Multiple Heterogeneous UAVs
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Puig-Navarro, Javier; Mehdi, S. Bilal; Mcquarry, A. Kyle
2016-01-01
The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for commercial activities. This research is developing methods beyond classical control-stick pilot inputs, to allow operators to manage complex missions without in-depth vehicle expertise. These missions may entail several heterogeneous UAVs flying coordinated patterns or flying multiple trajectories deconflicted in time or space to predefined locations. This paper describes the functionality and preliminary usability measures of an interface that allows an operator to define a mission using speech inputs. With a defined and simple vocabulary, operators can input the vast majority of mission parameters using simple, intuitive voice commands. Although the operator interface is simple, it is based upon autonomous algorithms that allow the mission to proceed with minimal input from the operator. This paper also describes these underlying algorithms that allow an operator to manage several UAVs.
Open building and flexibility in healthcare: strategies for shaping spaces for social aspects.
Capolongo, Stefano; Buffoli, Maddalena; Nachiero, Dario; Tognolo, Chiara; Zanchi, Eleonora; Gola, Marco
2016-01-01
The fast development of technology and medicine influences the functioning of healthcare facilities as health promoters for society, making flexibility a fundamental requirement. Among the many ways to ensure adaptability, one that allows change without increasing the building's overall size is the Open Building approach. Starting from an analysis of the state of the art and many case studies, eight evaluation parameters were defined, and their relative importance was appraised through a weighting system developed with several experts. The resulting evaluation tool establishes the extent to which healthcare facilities follow Open Building principles. The tool was tested on ten case studies, chosen for their flexible features, in order to determine its effectiveness and to identify the projects' weaknesses and strengths. The results suggest that many Open Building principles are already in use but that only through good design thinking will it be possible to guarantee architectures for health that can adapt to future social challenges.
Orlandini, Serena; Pasquini, Benedetta; Caprini, Claudia; Del Bubba, Massimo; Pinzauti, Sergio; Furlanetto, Sandra
2015-11-01
A fast and selective CE method for the determination of zolmitriptan (ZOL) and its five potential impurities has been developed applying the analytical Quality by Design principles. Voltage, temperature, buffer concentration, and pH were investigated as critical process parameters that can influence the critical quality attributes, represented by critical resolution values between peak pairs, analysis time, and peak efficiency of ZOL-dimer. A symmetric screening matrix was employed for investigating the knowledge space, and a Box-Behnken design was used to evaluate the main, interaction, and quadratic effects of the critical process parameters on the critical quality attributes. Contour plots were drawn highlighting important interactions between buffer concentration and pH, and the gained information was merged into the sweet spot plots. Design space (DS) was established by the combined use of response surface methodology and Monte Carlo simulations, introducing a probability concept and thus allowing the quality of the analytical performances to be assured in a defined domain. The working conditions (with the interval defining the DS) were as follows: BGE, 138 mM (115-150 mM) phosphate buffer pH 2.74 (2.54-2.94); temperature, 25°C (24-25°C); voltage, 30 kV. A control strategy was planned based on method robustness and system suitability criteria. The main advantages of applying the Quality by Design concept consisted of a great increase of knowledge of the analytical system, obtained throughout multivariate techniques, and of the achievement of analytical assurance of quality, derived by probability-based definition of DS. The developed method was finally validated and applied to the analysis of ZOL tablets. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
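The probability-based design-space step can be sketched as follows: given a response-surface model fitted from the Box-Behnken design (the quadratic coefficients below are purely illustrative, not the paper's fit), Monte Carlo draws around the prediction estimate the probability that a critical quality attribute meets its specification at a candidate working point:

    import numpy as np

    rng = np.random.default_rng(0)

    def resolution(conc_mM, pH):
        # hypothetical fitted quadratic response for a critical resolution
        return 1.2 + 0.01 * conc_mM + 0.8 * pH - 0.12 * pH**2 + 0.002 * conc_mM * pH

    def prob_in_spec(conc_mM, pH, n=10_000, sigma=0.15, spec=1.5):
        draws = resolution(conc_mM, pH) + rng.normal(0.0, sigma, n)  # fit uncertainty
        return (draws >= spec).mean()

    print(prob_in_spec(138.0, 2.74))   # P(resolution >= spec) at the reported working point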
Near Earth Asteroid Rendezvous (NEAR) Revised Eros Orbit Phase Trajectory Design
NASA Technical Reports Server (NTRS)
Helfrich, J.; Miller, J. K.; Antreasian, P. G.; Carranza, E.; Williams, B. G.; Dunham, D. W.; Farquhar, R. W.; McAdams, J. V.
1999-01-01
Trajectory design of the orbit phase of the NEAR mission involves a new process that departs significantly from those procedures used in previous missions. In most cases, a precise spacecraft ephemeris is designed well in advance of arrival at the target body. For NEAR, the uncertainty in the dynamic environment around Eros does not allow the luxury of a precise spacecraft trajectory to be defined in advance. The principal cause of this uncertainty is the limited knowledge of the gravity field and rotational state of Eros. As a result, the concept for the NEAR trajectory design is to define a number of rules for satisfying spacecraft, mission, and science constraints, and then apply these rules to various assumptions for the model of Eros. Nominal, high, and low Eros mass models are used for testing the trajectory design strategy and to bracket the ranges of parameter variations that are expected upon arrival at the asteroid. The final design is completed after arrival at Eros and determination of the actual gravity field and rotational state. As a result of the unplanned termination of the deep space rendezvous maneuver on December 20, 1998, the NEAR spacecraft passed within 3830 km of Eros on December 23, 1998. This flyby provided a brief glimpse of Eros, and allowed for a more accurate model of the rotational parameters and gravity field uncertainty. Furthermore, after the termination of the deep space rendezvous burn, contact with the spacecraft was lost and the NEAR spacecraft lost attitude control. During the subsequent gyrations of the spacecraft, hydrazine thruster firings were used to regain attitude control. This unplanned thruster activity used much of the fuel margin allocated for the orbit phase. Consequently, minimizing fuel consumption is now even more important.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagener, Thorsten; Mann, Michael; Crane, Robert
2014-04-29
This project focuses on uncertainty in streamflow forecasting under climate change conditions. The objective is to develop easy-to-use methodologies that can be applied across a range of river basins to estimate changes in water availability for realistic projections of climate change. There are three major components to the project: empirical downscaling of regional climate change projections from a range of Global Climate Models; developing a methodology to use present-day information on the climate controls on the parameterizations in streamflow models to adjust the parameterizations under future climate conditions (a trading-space-for-time approach); and demonstrating a bottom-up approach to establishing streamflow vulnerabilities to climate change. The results reinforce the need for downscaling of climate data for regional applications, further demonstrate the challenges of using raw GCM data to make local projections, and reinforce the need to make projections across a range of global climate models. The project demonstrates the potential for improving streamflow forecasts by using model parameters that are adjusted for future climate conditions, but suggests that even with improved streamflow models and reduced climate uncertainty through the use of downscaled data, there is still large uncertainty in the streamflow projections. The most useful output from the project is the bottom-up, vulnerability-driven approach to examining possible climate and land use change impacts on streamflow. Here, we demonstrate an inexpensive and easy-to-apply methodology that uses Classification and Regression Trees (CART) to define the climate and environmental parameter space that can produce vulnerabilities in the system, and then feeds in the downscaled projections to determine the probability of transitioning to a vulnerable state. Vulnerabilities, in this case, are defined by the end user.
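A compact sketch of the CART step (all variable names and thresholds are illustrative): label model runs as vulnerable or not, let the tree partition the climate and environmental parameter space, then push the downscaled projections through it:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    # columns: precipitation change (%), temperature change (C), land-use index
    X = rng.uniform([-20, 0, 0], [20, 5, 1], size=(500, 3))
    y = ((X[:, 0] < -5) & (X[:, 1] > 2)).astype(int)        # toy vulnerability label

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)    # defines the vulnerable region

    projections = rng.uniform([-15, 1, 0.2], [5, 4, 0.8], size=(100, 3))
    p_vuln = tree.predict_proba(projections)[:, 1].mean()
    print(f"probability of transitioning to a vulnerable state: {p_vuln:.2f}")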
Space Communication and Navigation Testbed Communications Technology for Exploration
NASA Technical Reports Server (NTRS)
Reinhart, Richard
2013-01-01
NASA developed and launched an experimental flight payload (referred to as the Space Communication and Navigation Testbed) to investigate software defined radio, networking, and navigation technologies operationally in the space environment. The payload consists of three software defined radios, each compliant with NASA's Space Telecommunications Radio System Architecture, a common software interface description standard for software defined radios. The software defined radios are new technology developed by NASA and industry partners. The payload is externally mounted to the International Space Station truss and available to NASA, industry, and university partners to conduct experiments representative of future mission capability. Experiment operations include in-flight reconfiguration of the SDR waveform functions and payload networking software. The flight system communicates with NASA's orbiting satellite relay network, the Tracking and Data Relay Satellite System, at both S-band and Ka-band, and with any Earth-based compatible S-band ground station.
Software Defined Radio Standard Architecture and its Application to NASA Space Missions
NASA Technical Reports Server (NTRS)
Andro, Monty; Reinhart, Richard C.
2006-01-01
A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing, and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.
Catastrophe on the Horizon: A Scenario-Based Future Effect of Orbital Space Debris
2010-04-01
real. In fact, the preliminary results of a recent NASA risk assessment of the soon to be decommissioned Space Shuttle puts the risk of a manned...Section 1 - Introduction Orbital Space Debris Defined Orbital space debris can be defined as dead satellites, discarded rocket parts, or simply flecks...of paint or other small objects orbiting the earth. It is simply space "junk," but junk that can be extremely dangerous to space assets. Most of the
Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects
NASA Astrophysics Data System (ADS)
Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca
2018-02-01
Metamodeling for the nucleonic equation of state (EOS), inspired from a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, and thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the more influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling for the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
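In assumed standard notation, the expansion behind the metamodel can be sketched as follows (second-order truncation shown for illustration; the paper orders the empirical parameters by their power in this series and carries it further):

    e(n, \delta) \simeq e_{\mathrm{is}}(n) + \delta^{2}\, e_{\mathrm{iv}}(n),
    \qquad x = \frac{n - n_{\mathrm{sat}}}{3 n_{\mathrm{sat}}},
    \qquad \delta = \frac{n_n - n_p}{n},

    e_{\mathrm{is}} = E_{\mathrm{sat}} + \tfrac{1}{2} K_{\mathrm{sat}} x^{2} + \cdots,
    \qquad e_{\mathrm{iv}} = E_{\mathrm{sym}} + L_{\mathrm{sym}} x + \tfrac{1}{2} K_{\mathrm{sym}} x^{2} + \cdots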
A multiparameter wearable physiologic monitoring system for space and terrestrial applications
NASA Technical Reports Server (NTRS)
Mundt, Carsten W.; Montgomery, Kevin N.; Udoh, Usen E.; Barker, Valerie N.; Thonier, Guillaume C.; Tellier, Arnaud M.; Ricks, Robert D.; Darling, Robert B.; Cagle, Yvonne D.; Cabrol, Nathalie A.;
2005-01-01
A novel, unobtrusive and wearable, multiparameter ambulatory physiologic monitoring system for space and terrestrial applications, termed LifeGuard, is presented. The core element is a wearable monitor, the crew physiologic observation device (CPOD), that provides the capability to continuously record two standard electrocardiogram leads, respiration rate via impedance plethysmography, heart rate, hemoglobin oxygen saturation, ambient or body temperature, three axes of acceleration, and blood pressure. These parameters can be digitally recorded with high fidelity over a 9-h period with precise time stamps and user-defined event markers. Data can be continuously streamed to a base station using a built-in Bluetooth RF link or stored in 32 MB of on-board flash memory and downloaded to a personal computer using a serial port. The device is powered by two AAA batteries. The design, laboratory, and field testing of the wearable monitors are described.
A Particle-in-cell scheme of the RFQ in the SSC-Linac
NASA Astrophysics Data System (ADS)
Xiao, Chen; He, Yuan; Lu, Yuan-Rong; Yuri, Batygin; Yin, Ling; Wang, Zhi-Jun; Yuan, You-Jin; Liu, Yong; Chang, Wei; Du, Xiao-Nan; Wang, Zhi; Xia, Jia-Wen
2010-11-01
A 52 MHz Radio Frequency Quadrupole (RFQ) linear accelerator (linac) is designed to serve as the initial structure for the SSC-Linac system (injector into the Separated Sector Cyclotron). The design injection and output energies are 3.5 keV/u and 143 keV/u, respectively. The beam dynamics in this RFQ have been studied using the three-dimensional Particle-In-Cell (PIC) code BEAMPATH. Simulation results show that this RFQ structure is characterized by stable beam transmission efficiency (at least 95%) for both the zero-current mode and the space charge dominated regime. The beam accelerated in the RFQ has good quality in both the transverse and longitudinal directions, and could easily be accepted by the Drift Tube Linac (DTL). The effects of vane errors and of space charge on the beam parameters have also been studied to define the engineering tolerances for RFQ vane machining and alignment.
Gravity modulates Listing's plane orientation during both pursuit and saccades
NASA Technical Reports Server (NTRS)
Hess, Bernhard J M.; Angelaki, Dora E.
2003-01-01
Previous studies have shown that the spatial organization of all eye orientations during visually guided saccadic eye movements (Listing's plane) varies systematically as a function of static and dynamic head orientation in space. Here we tested whether a similar organization also applies to the spatial orientation of eye positions during smooth pursuit eye movements. Specifically, we characterized the three-dimensional distribution of eye positions during horizontal and vertical pursuit (0.1 Hz, +/-15 degrees and 0.5 Hz, +/-8 degrees) at different eccentricities and elevations while rhesus monkeys were sitting upright or being statically tilted in different roll and pitch positions. We found that the spatial organization of eye positions during smooth pursuit depends on static orientation in space, similarly as during visually guided saccades and fixations. In support of recent modeling studies, these results are consistent with a role of gravity in defining the parameters of Listing's law.
A variable structure approach to robust control of VTOL aircraft
NASA Technical Reports Server (NTRS)
Calise, A. J.; Kramer, F.
1982-01-01
This paper examines the application of variable structure control theory to the design of a flight control system for the AV-8A Harrier in a hover mode. The objective in variable structure design is to confine the motion to a subspace of the total state space. The motion in this subspace is insensitive to system parameter variations and external disturbances that lie in the range space of the control. A switching type of control law results from the design procedure. The control system was designed to track a vector velocity command defined in the body frame. For comparison purposes, a proportional controller was designed using optimal linear regulator theory. Both control designs were first evaluated for transient response performance using a linearized model, then a nonlinear simulation study of a hovering approach to landing was conducted. Wind turbulence was modeled using a 1052 destroyer class air wake model.
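The confinement idea can be illustrated with a minimal sliding-mode sketch on a double integrator (a stand-in plant, not the AV-8A model; all gains are arbitrary): the switching law keeps the state near the subspace s = 0 despite a disturbance in the range space of the control:

    import numpy as np

    dt, c, k = 0.001, 2.0, 5.0          # step size, surface slope, switching gain
    x, v = 1.0, 0.0                     # tracking error and its rate
    for i in range(10_000):
        s = c * x + v                   # s = 0 defines the sliding subspace
        u = -k * np.sign(s)             # switching control law
        d = 0.5 * np.sin(i * dt)        # matched disturbance
        v += (u + d) * dt               # double-integrator plant
        x += v * dt
    print(abs(c * x + v) < 0.1)         # True: motion confined near s = 0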
Thunder-induced ground motions: 2. Site characterization
NASA Astrophysics Data System (ADS)
Lin, Ting-L.; Langston, Charles A.
2009-04-01
Thunder-induced ground motion, near-surface refraction, and Rayleigh wave dispersion measurements were used to constrain near-surface velocity structure at an unconsolidated sediment site. We employed near-surface seismic refraction measurements to first define ranges for site structure parameters. Air-coupled and hammer-generated Rayleigh wave dispersion curves were used to further constrain the site structure by a grid search technique. The acoustic-to-seismic coupling is modeled as an incident plane P wave in a fluid half-space impinging into a solid layered half-space. We found that the infrasound-induced ground motions constrained substrate velocities and the average thickness and velocities of the near-surface layer. The addition of higher-frequency near-surface Rayleigh waves produced tighter constraints on the near-surface velocities. This suggests that natural or controlled airborne pressure sources can be used to investigate the near-surface site structures for earthquake shaking hazard studies.
NASA Technical Reports Server (NTRS)
Jones, J. J.; Winn, W. P.; Hunyady, S. J.; Moore, C. B.; Bullock, J. W.
1990-01-01
During the fall of 1988, a Schweizer airplane equipped to measure electric field and other meteorological parameters flew over Kennedy Space Center (KSC) in a program to study clouds defined in the existing launch restriction criteria. A case study is presented of a single flight over KSC on November 4, 1988. This flight was chosen for two reasons: (1) the clouds were weakly electrified, and no lightning was reported during the flight; and (2) electric field mills in the surface array at KSC indicated field strengths greater than 3 kV/m, yet the aircraft flying directly over them at an altitude of 3.4 km above sea level measured field strengths of less than 1.6 kV/m. A weather summary, sounding description, record of cloud types, and an account of electric field measurements are included.
A reference model for space data system interconnection services
NASA Astrophysics Data System (ADS)
Pietras, John; Theis, Gerhard
1993-03-01
The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSI-RM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).
Telerobotic control of a mobile coordinated robotic server. M.S. Thesis Annual Technical Report
NASA Technical Reports Server (NTRS)
Lee, Gordon
1993-01-01
The annual report on telerobotic control of a mobile coordinated robotic server is presented. The goal of this effort is to develop advanced control methods for flexible space manipulator systems. As such, an adaptive fuzzy logic controller was developed in which neither a model structure nor parameter constraints are required for compensation. The work builds upon previous work on fuzzy logic controllers. Fuzzy logic controllers have been growing in importance in the field of automatic feedback control. Hardware controllers using fuzzy logic have become available as an alternative to traditional PID controllers. Software has also been introduced to aid in the development of fuzzy logic rule-bases. The advantages of using fuzzy logic controllers include the ability to merge the experience and intuition of expert operators into the rule-base and the fact that a model of the system is not required to construct the controller. A drawback of the classical fuzzy logic controller, however, is the many parameters that need to be tuned off-line prior to application in the closed loop. In this report, an adaptive fuzzy logic controller is developed that requires no system model or model structure. The rule-base is defined to approximate a state-feedback controller, while a second fuzzy logic algorithm varies the parameters of the defining controller on-line. Results indicate the approach is viable for on-line adaptive control of systems when the model is too complex or uncertain for the application of other, more classical control techniques.
Quantifying Anderson's fault types
Simpson, R.W.
1997-01-01
Anderson [1905] explained three basic types of faulting (normal, strike-slip, and reverse) in terms of the shape of the causative stress tensor and its orientation relative to the Earth's surface. Quantitative parameters can be defined which contain information about both shape and orientation [Célérier, 1995], thereby offering a way to distinguish fault-type domains on plots of regional stress fields and to quantify, for example, the degree of normal-faulting tendencies within strike-slip domains. This paper offers a geometrically motivated generalization of Angelier's [1979, 1984, 1990] shape parameters Φ and Ψ to new quantities named AΦ and AΨ. In their simple forms, AΦ varies from 0 to 1 for normal, 1 to 2 for strike-slip, and 2 to 3 for reverse faulting, and AΨ ranges from 0° to 60°, 60° to 120°, and 120° to 180°, respectively. After scaling, AΦ and AΨ agree to within 2% (or 1°), a difference of little practical significance, although AΦ has smoother analytical properties. A formulation distinguishing horizontal axes as well as the vertical axis is also possible, yielding an AΦ ranging from -3 to +3 and an AΨ from -180° to +180°. The geometrically motivated derivation in three-dimensional stress space presented here may aid intuition and offers a natural link with traditional ways of plotting yield and failure criteria. Examples are given, based on models of Bird [1996] and Bird and Kong [1994], of the use of the Anderson fault parameters AΦ and AΨ for visualizing tectonic regimes defined by regional stress fields. Copyright 1997 by the American Geophysical Union.
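In its simple form, the first parameter can be computed directly (formula as commonly quoted from this paper; worth verifying against the original before use): with the Angelier-type shape ratio Φ = (σ2 − σ3)/(σ1 − σ3) and n = 0, 1, 2 according to whether the vertical principal stress is σ1, σ2, or σ3:

    def a_phi(s1, s2, s3, vertical):
        """APhi = (n + 0.5) + (-1)**n * (phi - 0.5), with phi = (s2 - s3)/(s1 - s3)."""
        phi = (s2 - s3) / (s1 - s3)
        n = {"s1": 0, "s2": 1, "s3": 2}[vertical]
        return (n + 0.5) + (-1) ** n * (phi - 0.5)

    print(a_phi(100, 60, 20, "s1"))   # 0.5: normal-faulting domain (0 to 1)
    print(a_phi(100, 60, 20, "s2"))   # 1.5: strike-slip domain (1 to 2)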
Vialet-Chabrand, Silvere; Griffiths, Howard
2017-01-01
The physical requirement for charge to balance across biological membranes means that the transmembrane transport of each ionic species is interrelated, and manipulating solute flux through any one transporter will affect other transporters at the same membrane, often with unforeseen consequences. The OnGuard systems modeling platform has helped to resolve the mechanics of stomatal movements, uncovering previously unexpected behaviors of stomata. To date, however, the manual approach to exploring model parameter space has captured little formal information about the emergent connections between parameters that define the most interesting properties of the system as a whole. Here, we introduce global sensitivity analysis to identify interacting parameters affecting a number of outputs commonly accessed in experiments in Arabidopsis (Arabidopsis thaliana). The analysis highlights synergies between transporters affecting the balance between Ca2+ sequestration and Ca2+ release pathways, notably those associated with internal Ca2+ stores and their turnover. Other, unexpected synergies appear, including with the plasma membrane anion channels and H+-ATPase and with the tonoplast TPK K+ channel. These emergent synergies, and the core hubs of interaction that they define, identify subsets of transporters associated with free cytosolic Ca2+ concentration that represent key targets to enhance plant performance in the future. They also highlight the importance of interactions between the voltage regulation of the plasma membrane and tonoplast in coordinating transport between the different cellular compartments. PMID:28432256
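A hedged sketch of the kind of global sensitivity analysis described, using SALib's Sobol method on a toy stand-in output (the OnGuard platform itself is not driven this way; transporter names and bounds are illustrative):

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["g_anion", "g_HATPase", "g_TPK"],   # illustrative transporter gains
        "bounds": [[0.1, 2.0]] * 3,
    }
    X = saltelli.sample(problem, 1024)
    Y = X[:, 0] * X[:, 1] + 0.2 * np.sin(3 * X[:, 2])  # toy stand-in for cytosolic Ca2+
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], Si["S1"])))       # first-order indices
    print(Si["S2"])                                    # pairwise terms: the "synergies"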
Chemical freezeout parameters within generic nonextensive statistics
NASA Astrophysics Data System (ADS)
Tawfik, Abdel; Yassin, Hayam; Abo Elyazeed, Eman R.
2018-06-01
The particle production in relativistic heavy-ion collisions seems to take place in a dynamically disordered system which can be best described by an extended exponential entropy. To distinguish between the applicability of this and of Boltzmann-Gibbs (BG) statistics in generating various particle ratios, generic (non)extensive statistics is introduced to the hadron resonance gas model. Accordingly, the degree of (non)extensivity is determined by the possible modifications in the phase space. Both BG extensivity and Tsallis nonextensivity are included as special cases defined by specific values of the equivalence classes (c, d). We found that the particle ratios at energies ranging between 3.8 and 2760 GeV are best reproduced by nonextensive statistics, where c and d range between ˜0.9 and ˜1. The present work aims at illustrating that the proposed approach is well able to capture the statistical nature of the system of interest; we do not aim at highlighting deeper physical insights. In other words, while the resulting nonextensivity is neither BG nor Tsallis, the freezeout parameters are found to be very compatible with BG and accordingly with the well-known freezeout phase diagram, which is in excellent agreement with recent lattice calculations. We conclude that the particle production is nonextensive but need not be accompanied by a radical change in the intensive or extensive thermodynamic quantities, such as internal energy and temperature. Only the two critical exponents defining the equivalence classes (c, d) are the physical parameters characterizing the (non)extensivity.
Quantum motion of a point particle in the presence of the Aharonov–Bohm potential in curved space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Edilberto O., E-mail: edilbertoo@gmail.com; Ulhoa, Sérgio C., E-mail: sc.ulhoa@gmail.com; Andrade, Fabiano M., E-mail: f.andrade@ucl.ac.uk
The nonrelativistic quantum dynamics of a spinless charged particle in the presence of the Aharonov–Bohm potential in curved space is considered. We chose the surface as being a cone defined by a line element in polar coordinates. The geometry of this line element establishes that the motion of the particle can occur on the surface of a cone or an anti-cone. As a consequence of the nontrivial topology of the cone and also because of two-dimensional confinement, the geometric potential should be taken into account. At first, we establish the conditions for the particle describing a circular path in such a context. Because of the presence of the geometric potential, which contains a singular term, we use the self-adjoint extension method in order to describe the dynamics in all space including the singularity. Expressions are obtained for the bound state energies and wave functions. Highlights: • Motion of a particle under the influence of a magnetic field in curved space. • Bound states for the Aharonov–Bohm problem. • Particle describing a circular path. • Determination of the self-adjoint extension parameter.
Distinguishability notion based on Wootters statistical distance: Application to discrete maps
NASA Astrophysics Data System (ADS)
Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.
2017-08-01
We study the distinguishability notion given by Wootters for states represented by probability density functions. It has the particularity that it can also be used to define a statistical distance in chaotic unidimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄ we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguished points when the number of iterations of the map tends to infinity. Also, we give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions in the phase space. We illustrate the results for the logistic and the circle maps, numerically and analytically, and we obtain d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the metric space associated with arbitrary probability distributions (not necessarily invariant densities) is given, along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinality of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.
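The underlying distance is easy to compute; here is a sketch for histogram-estimated densities of logistic-map orbits (the paper's d̄ is built from iteration counts, so this shows only the Wootters angle between two distributions):

    import numpy as np

    def orbit(x0, r, n=100_000, burn=1_000):
        x = x0
        for _ in range(burn):
            x = r * x * (1 - x)
        out = np.empty(n)
        for i in range(n):
            x = r * x * (1 - x)
            out[i] = x
        return out

    def wootters(p, q):
        # angle between sqrt-probability vectors: arccos(sum_i sqrt(p_i * q_i))
        return np.arccos(np.clip(np.sum(np.sqrt(p * q)), -1.0, 1.0))

    bins = np.linspace(0, 1, 101)
    p, _ = np.histogram(orbit(0.2, 4.0), bins=bins)
    q, _ = np.histogram(orbit(0.3, 3.9), bins=bins)
    print(wootters(p / p.sum(), q / q.sum()))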
Manipulation of optical-pulse-imprinted memory in a Λ system
NASA Astrophysics Data System (ADS)
Gutiérrez-Cuevas, Rodrigo; Eberly, Joseph H.
2015-09-01
We examine coherent memory manipulation in a Λ -type medium, using the second-order solution presented by Groves, Clader, and Eberly [J. Phys. B: At. Mol. Opt. Phys. 46, 224005 (2013), 10.1088/0953-4075/46/22/224005] as a guide. The analytical solution obtained using the Darboux transformation and a nonlinear superposition principle describes complicated soliton-pulse dynamics which, by an appropriate choice of parameters, can be simplified to a well-defined sequence of pulses interacting with the medium. In this report, this solution is reviewed and put to test by means of a series of numerical simulations, encompassing all the parameter space and adding the effects of homogeneous broadening due to spontaneous emission. We find that even though the decohered results deviate from the analytical prediction they do follow a similar trend that could be used as a guide for future experiments.
Scaling theory of topological phase transitions
NASA Astrophysics Data System (ADS)
Chen, Wei
2016-02-01
Topologically ordered systems are characterized by topological invariants that are often calculated from the momentum space integration of a certain function that represents the curvature of the many-body state. The curvature function may be Berry curvature, Berry connection, or other quantities depending on the system. Akin to stretching a messy string to reveal the number of knots it contains, a scaling procedure is proposed for the curvature function in inversion symmetric systems, from which the topological phase transition can be identified from the flow of the driving energy parameters that control the topology (hopping, chemical potential, etc) under scaling. At an infinitesimal operation, one obtains the renormalization group (RG) equations for the driving energy parameters. A length scale defined from the curvature function near the gap-closing momentum is suggested to characterize the scale invariance at critical points and fixed points, and displays a universal critical behavior in a variety of systems examined.
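The scaling step can be made concrete with a toy curvature function (an assumption for illustration; the paper treats general inversion-symmetric systems): take a Lorentzian F(k, M) = M/(M² + k²) peaked at the gap-closing momentum k0 = 0, and demand F(0, M') = F(δk, M) at each step:

    def rg_step(M, dk=0.1):
        # closed form of F(0, M') = F(dk, M) for F(k, M) = M / (M**2 + k**2)
        return (M**2 + dk**2) / M

    for M0 in (0.5, -0.5):
        M = M0
        for _ in range(20):
            M = rg_step(M)
        print(M0, "->", round(M, 3))   # |M| grows: the flow leaves the critical point M = 0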
Seamless variation of isometric and anisometric dynamical integrity measures in basins's erosion
NASA Astrophysics Data System (ADS)
Belardinelli, P.; Lenci, S.; Rega, G.
2018-03-01
Anisometric integrity measures defined as improvement and generalization of two existing measures (LIM, local integrity measure, and IF, integrity factor) of the extent and compactness of basins of attraction are introduced. Non-equidistant measures make it possible to account for inhomogeneous sensitivities of the state space variables to perturbations, thus permitting a more confident and targeted identification of the safe regions. All four measures are used for a global dynamics analysis of the twin-well Duffing oscillator, which is performed by considering a nearly continuous variation of a governing control parameter, thanks to the use of parallel computation allowing reasonable CPU time. This improves literature results based on finite (and commonly large) variations of the parameter, due to computational constraints. The seamless evolution of key integrity measures highlights the fine aspects of the erosion of the safe domain with respect to the increasing forcing amplitude.
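On a precomputed basin grid, the isometric LIM-type measure reduces to a distance transform (a sketch with a toy basin; the anisometric variants would rescale the axes by the state-variable sensitivities first):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    safe = np.ones((200, 200), bool)
    safe[:, 120:] = False                   # toy basin: the right-hand part is unsafe
    dist = distance_transform_edt(safe)     # distance to the nearest unsafe cell
    attractor = (100, 60)                   # grid location of the attractor (assumed)
    print("LIM (grid cells):", dist[attractor])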
Modal analysis and control of flexible manipulator arms. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Neto, O. M.
1974-01-01
The possibility of modeling and controlling flexible manipulator arms was examined. A modal approach was used to obtain the mathematical model and control techniques. The arm was represented mathematically by a state space description, defined in terms of joint angles and mode amplitudes obtained by truncation of the distributed system, covering the motion of a two-link, two-joint arm. Three basic techniques were used for controlling the system: pole allocation with gains obtained from the rigid system with interjoint feedbacks, the Simon-Mitter algorithm for pole allocation, and sensitivity analysis with respect to parameter variations. An improvement in arm bandwidth was obtained. Optimization of some geometric parameters was undertaken to maximize bandwidth for various payload sizes and programmed tasks. The controlled system was examined under constant gains, using the nonlinear model for simulations following a time-varying state trajectory.
Multiple regimes of robust patterns between network structure and biodiversity
NASA Astrophysics Data System (ADS)
Jover, Luis F.; Flores, Cesar O.; Cortez, Michael H.; Weitz, Joshua S.
2015-12-01
Ecological networks such as plant-pollinator and host-parasite networks have structured interactions that define who interacts with whom. The structure of interactions also shapes ecological and evolutionary dynamics. Yet, there is significant ongoing debate as to whether certain structures, e.g., nestedness, contribute positively, negatively or not at all to biodiversity. We contend that examining variation in life history traits is key to disentangling the potential relationship between network structure and biodiversity. Here, we do so by analyzing a dynamic model of virus-bacteria interactions across a spectrum of network structures. Consistent with prior studies, we find plausible parameter domains exhibiting strong, positive relationships between nestedness and biodiversity. Yet, the same model can exhibit negative relationships between nestedness and biodiversity when examined in a distinct, plausible region of parameter space. We discuss steps towards identifying when network structure could, on its own, drive the resilience, sustainability, and even conservation of ecological communities.
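A toy sketch of the kind of dynamic virus-bacteria model analyzed (Lotka-Volterra-type with an interaction matrix M encoding who infects whom; all rates are illustrative, not the paper's parameterization):

    import numpy as np
    from scipy.integrate import solve_ivp

    M = np.array([[1, 1, 0],     # nested-style infection network:
                  [1, 1, 1],     # row = host strain, column = virus strain
                  [0, 1, 1]], float)
    r, phi, beta, m = 1.0, 0.1, 10.0, 0.5

    def rhs(t, y):
        H, V = y[:3], y[3:]
        dH = H * (r * (1 - H.sum()) - phi * (M @ V))   # growth minus infection
        dV = V * (beta * phi * (M.T @ H) - m)          # burst minus decay
        return np.concatenate([dH, dV])

    sol = solve_ivp(rhs, (0, 500), np.full(6, 0.1), rtol=1e-8)
    print(np.round(sol.y[:, -1], 4))   # surviving strains set the realized biodiversity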
Development of flat-plate solar collectors for the heating and cooling of buildings
NASA Technical Reports Server (NTRS)
Ramsey, J. W.; Borzoni, J. T.; Holland, T. H.
1975-01-01
The relevant design parameters in the fabrication of a solar collector for heating liquids were examined. The objective was to design, fabricate, and test a low-cost, flat-plate solar collector with high collection efficiency, high durability, and requiring little maintenance. Computer-aided math models of the heat transfer processes in the collector assisted in the design. The preferred physical design parameters were determined from a heat transfer standpoint and the absorber panel configuration, the surface treatment of the absorber panel, the type and thickness of insulation, and the number, spacing and material of the covers were defined. Variations of this configuration were identified, prototypes built, and performance tests performed using a solar simulator. Simulated operation of the baseline collector configuration was combined with insolation data for a number of locations and compared with a predicted load to determine the degree of solar utilization.
An approximation theory for the identification of linear thermoelastic systems
NASA Technical Reports Server (NTRS)
Rosen, I. G.; Su, Chien-Hua Frank
1990-01-01
An abstract approximation framework and convergence theory for the identification of thermoelastic systems is developed. Starting from an abstract operator formulation consisting of a coupled second order hyperbolic equation of elasticity and first order parabolic equation for heat conduction, well-posedness is established using linear semigroup theory in Hilbert space, and a class of parameter estimation problems is then defined involving mild solutions. The approximation framework is based upon generic Galerkin approximation of the mild solutions, and convergence of solutions of the resulting sequence of approximating finite dimensional parameter identification problems to a solution of the original infinite dimensional inverse problem is established using approximation results for operator semigroups. An example involving the basic equations of one dimensional linear thermoelasticity and a linear spline based scheme are discussed. Numerical results indicate how the approach might be used in a study of damping mechanisms in flexible structures.
Development of high strength, high temperature ceramics
NASA Technical Reports Server (NTRS)
Hall, W. B.
1982-01-01
Improvements in the high-pressure turbopumps, both fuel and oxidizer, of the Space Shuttle main engine were considered. The operation of these pumps is limited by temperature restrictions on the metallic components used in them. Ceramic materials that retain strength at high temperatures and appear to be promising candidates for use as turbine blades and impellers are discussed. These high strength materials are sensitive to many related processing parameters such as impurities, sintering aids, reaction aids, particle size, processing temperature, and post thermal treatment. The specific objectives of the study were to: (1) identify and define the processing parameters that affect the properties of Si3N4 ceramic materials, (2) design and assemble the equipment required for processing high strength ceramics, (3) design and assemble test apparatus for evaluating the high temperature properties of Si3N4, and (4) conduct a research program of manufacturing and evaluating Si3N4 materials as applicable to rocket engine applications.
2D discontinuous piecewise linear map: Emergence of fashion cycles.
Gardini, L; Sushko, I; Matsuyama, K
2018-05-01
We consider a discrete-time version of the continuous-time fashion cycle model introduced in Matsuyama, 1992. Its dynamics are defined by a 2D discontinuous piecewise linear map depending on three parameters. In the parameter space of the map, periodicity regions associated with attracting cycles of different periods are organized in the period adding and period incrementing bifurcation structures. The boundaries of all the periodicity regions related to border collision bifurcations are obtained analytically in explicit form. We show the existence of several partially overlapping period incrementing structures, which is a novelty for the considered class of maps. Moreover, we show that if the time delay in the discrete-time formulation of the model shrinks to zero, the number of period incrementing structures tends to infinity and the dynamics of the discrete-time fashion cycle model converge to those of the continuous-time fashion cycle model.
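A generic sketch of iterating a 2D discontinuous piecewise linear map and reading off the period of the attracting cycle (the branch definitions below are placeholders, not the fashion-cycle map itself):

    def f(x, y, a=0.6, b=-0.4, mu=0.3):
        if x < 0:                          # discontinuity line x = 0
            return a * x + y + mu, b * x
        return a * x + y - mu, b * x

    x, y = 0.1, 0.1
    for _ in range(5_000):                 # discard the transient
        x, y = f(x, y)
    pts = set()
    for _ in range(200):
        x, y = f(x, y)
        pts.add((round(x, 9), round(y, 9)))
    print("period of the attracting cycle:", len(pts))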
Conceptualizing and Comparing Neighborhood and Activity Space Measures for Food Environment Research
Crawford, Thomas W.; Pitts, Stephanie B. Jilcott; McGuirt, Jared T.; Keyserling, Thomas C.; Ammerman, Alice S.
2014-01-01
Greater accessibility to geospatial technologies has led to a surge of spatialized public health research, much of which has focused on food environments. The purpose of this study was to analyze differing spatial measures of exposure to supermarkets and farmers' markets among women of reproductive age in eastern North Carolina. Exposure measures were derived using participant-defined neighborhoods, investigator-defined road network neighborhoods, and activity spaces incorporating participants' time-space behaviors. Results showed that the mean area for participant-defined neighborhoods (0.04 sq. miles) was much smaller than for 2.0 mile road network neighborhoods (3.11 sq. miles) and activity spaces (26.36 sq. miles), and that activity spaces provided the greatest market exposure. The traditional residential neighborhood concept may not be particularly relevant for all places. Time-space approaches capturing activity space may be more relevant, particularly if integrated with mixed methods strategies. PMID:25306420
Parameter redundancy in discrete state-space and integrated models.
Cole, Diana J; McCrea, Rachel S
2016-09-01
Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy, or the model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specifies a model. To use general methods for detecting parameter redundancy, a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
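The rank test behind such methods is easy to sketch symbolically: if the Jacobian of an exhaustive summary with respect to the parameters has rank below the number of parameters, the model is parameter redundant. The tiny capture-recapture-style summary below, in which the last survival and detection parameters enter only as a product, is a deliberately confounded toy, not one derived from the paper's state-space models:

    import sympy as sp

    phi1, phi2, p2, p3 = sp.symbols("phi1 phi2 p2 p3", positive=True)
    kappa = sp.Matrix([phi1 * p2,
                       phi1 * (1 - p2) * phi2 * p3,
                       phi2 * p3])
    D = kappa.jacobian([phi1, phi2, p2, p3])
    print(D.rank())   # 3 < 4 parameters: redundant, with deficiency 1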
NASA Astrophysics Data System (ADS)
Marziani, Paola; Sulentic, J. W.; Dultzin, D.; Negrete, A.; del Olmo, A.; Martínez-Carballo, M. A.; Stirpe, G. M.; D'Onofrio, M.; Perea, J.
2016-10-01
The 4D eigenvector 1 parameter space defined by Sulentic et al. may be seen as a surrogate H-R diagram for quasars. As in the stellar H-R diagram, a source sequence can be easily identified. In the case of quasars, the main sequence appears to be mainly driven by Eddington ratio. A transition Eddington ratio may in part explain the striking observational differences between quasars at opposite ends of the main sequence. The eigenvector-1 approach opens the door towards properly contextualized models of quasar physics, geometry and kinematics. We review some of the progress that has been made over the past 15 years, and point out still unsolved issues.
A joint analysis of the Drake equation and the Fermi paradox
NASA Astrophysics Data System (ADS)
Prantzos, Nikos
2013-07-01
I propose a unified framework for a joint analysis of the Drake equation and the Fermi paradox, which enables a simultaneous, quantitative study of both of them. The analysis is based on a simplified form of the Drake equation and on a fairly simple scheme for the colonization of the Milky Way. It appears that for sufficiently long-lived civilizations, colonization of the Galaxy is the only reasonable option to gain knowledge about other life forms. This argument allows one to define a region in the parameter space of the Drake equation, where the Fermi paradox definitely holds (`Strong Fermi paradox').
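A rough numerical scan of that idea (the two-factor reduction below, with the astrophysical terms frozen, is an illustration rather than the paper's exact formulation):

    import numpy as np

    R_star, f_p, n_e = 10.0, 0.5, 0.2          # assumed fixed astrophysical factors
    f_biotec = np.logspace(-8, 0, 200)         # combined f_l * f_i * f_c
    L = np.logspace(2, 9, 200)                 # civilization lifetime (years)

    FB, LL = np.meshgrid(f_biotec, L)
    N = R_star * f_p * n_e * FB * LL           # expected number of civilizations

    # crude stand-in for the 'Strong Fermi paradox' region: many long-lived
    # civilizations, so galactic colonization should already have reached us
    strong = (N > 100) & (LL > 1e6)
    print(f"fraction of the scanned plane in the paradox region: {strong.mean():.2f}")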
Evolution of statistical averages: An interdisciplinary proposal using the Chapman-Enskog method
NASA Astrophysics Data System (ADS)
Mariscal-Sanchez, A.; Sandoval-Villalbazo, A.
2017-08-01
This work examines the idea of applying the Chapman-Enskog (CE) method for approximating the solution of the Boltzmann equation beyond the realm of physics, using an information theory approach. Equations describing the evolution of averages and their fluctuations in a generalized phase space are established up to first-order in the Knudsen parameter which is defined as the ratio of the time between interactions (mean free time) and a characteristic macroscopic time. Although the general equations here obtained may be applied in a wide range of disciplines, in this paper, only a particular case related to the evolution of averages in speculative markets is examined.
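In generic CE notation (assumed here; the paper works in a generalized phase space), the distribution function is expanded to first order in the Knudsen parameter and the averages inherit the correction:

    f \simeq f^{(0)} \left( 1 + \epsilon\, \phi^{(1)} \right),
    \qquad \epsilon = \frac{\tau}{t_{\mathrm{macro}}} \ll 1,

    \langle A \rangle = \int A\, f\, d\Gamma
    \simeq \langle A \rangle^{(0)} + \epsilon \int A\, f^{(0)} \phi^{(1)}\, d\Gamma.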
Optimization techniques applied to spectrum management for communications satellites
NASA Astrophysics Data System (ADS)
Ottey, H. R.; Sullivan, T. M.; Zusman, F. S.
This paper describes user requirements, algorithms and software design features for the application of optimization techniques to the management of the geostationary orbit/spectrum resource. Relevant problems include parameter sensitivity analyses, frequency and orbit position assignment coordination, and orbit position allotment planning. It is shown how integer and nonlinear programming as well as heuristic search techniques can be used to solve these problems. Formalized mathematical objective functions that define the problems are presented. Constraint functions that impart the necessary solution bounds are described. A versatile program structure is outlined, which would allow problems to be solved in stages while varying the problem space, solution resolution, objective function and constraints.
Characterizing segregation in the Schelling-Voter model
NASA Astrophysics Data System (ADS)
Caridi, I.; Pinasco, J. P.; Saintier, N.; Schiaffino, P.
2017-12-01
In this work we analyze several aspects of the segregation patterns appearing in the Schelling-Voter model, in which an unhappy agent can change her location or her state in order to live in a neighborhood where she is happy. Briefly, agents may be in one of two possible states, each representing an individually chosen feature, such as the language she speaks or the opinion she supports; an individual is happy in a neighborhood if at least some proportion of its agents, defined in terms of a fixed parameter T, are of her own type. We study the model on a regular two-dimensional lattice. The parameters of the model are ρ, the density of empty sites, and p, the probability of changing location. The stationary states reached by a system of N agents as a function of the model parameters entail the extinction of one of the states, the coexistence of both, segregated patterns with conglomerated clusters of agents of the same state, and a diluted region. Using indicators such as the energy and perimeter of the populations of agents in the same state, the inner radius of their locations (i.e., the side of the largest square that fits within empty sites or agents of only one type), and the Shannon information of the empty sites, we measure the segregation phenomena. We have found that there is a region within the coexistence phase where both populations take advantage of space in an equitable way, which is sustained by the role of the empty sites.
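A compact sketch of the dynamics as paraphrased from the abstract (boundary handling, update order, and the tie rule are implementation choices here, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)
    L_side, T, rho, p = 50, 0.5, 0.1, 0.3
    grid = rng.choice([0, 1, 2], size=(L_side, L_side),
                      p=[rho, (1 - rho) / 2, (1 - rho) / 2])   # 0 = empty site

    def happy(i, j):
        s = grid[i, j]
        nb = [grid[(i + di) % L_side, (j + dj) % L_side]
              for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
        nb = [v for v in nb if v != 0]
        return not nb or nb.count(s) / len(nb) >= T

    for _ in range(100_000):
        i, j = rng.integers(L_side, size=2)
        if grid[i, j] == 0 or happy(i, j):
            continue
        if rng.random() < p:                         # relocate to a random empty site
            empties = np.argwhere(grid == 0)
            ei, ej = empties[rng.integers(len(empties))]
            grid[ei, ej], grid[i, j] = grid[i, j], 0
        else:                                        # or adopt the other state
            grid[i, j] = 3 - grid[i, j]
    print(np.bincount(grid.ravel(), minlength=3))    # counts: empty, state 1, state 2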
Robustness, Death of Spiral Wave in the Network of Neurons under Partial Ion Channel Block
NASA Astrophysics Data System (ADS)
Ma, Jun; Huang, Long; Wang, Chun-Ni; Pu, Zhong-Sheng
2013-02-01
The development of spiral waves in a two-dimensional square array under partial ion channel block (potassium, sodium) is investigated. The dynamics of each node is described by a Hodgkin-Huxley neuron, and the neurons are coupled with nearest-neighbor connections. The parameter ratio xNa (or xK), which defines the ratio of the number of working sodium (potassium) ion channels to the total number of sodium (potassium) ion channels, is used to measure the conductance shift induced by channel block. The distribution of the statistical variable R in the two-parameter phase space (parameter ratio vs. poisoned area) is extensively calculated to mark the parameter regions for transitions of the spiral wave induced by partial ion channel block; areas with a smaller synchronization factor R are associated with the parameter region where the spiral wave stays alive and robust to channel poisoning. The spiral wave survives when the poisoned (potassium or sodium) area and the degree of intoxication are small, while distinct transitions (death, coexistence of several spiral waves, or emergence of multi-armed spiral waves) occur at moderate ratios xNa (or xK) when the size of the blocked area exceeds certain thresholds. Breakup of the spiral wave occurs and multi-armed spiral waves are observed when channel noise is considered.
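The abstract does not spell out the statistical variable R; in this literature the synchronization factor is commonly defined from the mean-field fluctuations of the membrane potentials V_ij of the N x N array (a standard choice we assume here):

```latex
F = \frac{1}{N^{2}} \sum_{i,j} V_{ij}, \qquad
R = \frac{\langle F^{2} \rangle - \langle F \rangle^{2}}
         {\frac{1}{N^{2}} \sum_{i,j} \left( \langle V_{ij}^{2} \rangle - \langle V_{ij} \rangle^{2} \right)} ,
```

so R close to 1 signals global synchronization (spiral wave death), while small R indicates a surviving, spatially structured wave.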
NASA Technical Reports Server (NTRS)
Lambert, Winifred; Roeder, William
2007-01-01
This conference presentation describes the development of a peak wind forecast tool to assist forecasters in determining the probability of violating launch commit criteria (LCC) at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) in east-central Florida. The peak winds are an important forecast element for both the Space Shuttle and Expendable Launch Vehicle (ELV) programs. The LCC define specific peak wind thresholds for each launch operation that cannot be exceeded in order to ensure the safety of the vehicle. The 45th Weather Squadron (45 WS) has found that peak winds are a challenging parameter to forecast, particularly in the cool season months of October through April. Based on the importance of forecasting peak winds, the 45 WS tasked the Applied Meteorology Unit (AMU) to develop a short-range peak-wind forecast tool to assist in forecasting LCC violations. The tool will include climatologies of the 5-minute mean and peak winds by month, hour, and direction, and probability distributions of the peak winds as a function of the 5-minute mean wind speeds.
NASA Technical Reports Server (NTRS)
Crawford, Winifred
2010-01-01
This final report describes the development of a peak wind forecast tool to assist forecasters in determining the probability of violating launch commit criteria (LCC) at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The peak winds are an important forecast element for both the Space Shuttle and Expendable Launch Vehicle (ELV) programs. The LCC define specific peak wind thresholds for each launch operation that cannot be exceeded in order to ensure the safety of the vehicle. The 45th Weather Squadron (45 WS) has found that peak winds are a challenging parameter to forecast, particularly in the cool season months of October through April. Based on the importance of forecasting peak winds, the 45 WS tasked the Applied Meteorology Unit (AMU) to develop a short-range peak-wind forecast tool to assist in forecasting LCC violations. The tool includes climatologies of the 5-minute mean and peak winds by month, hour, and direction, and probability distributions of the peak winds as a function of the 5-minute mean wind speeds.
A Peak Wind Probability Forecast Tool for Kennedy Space Center and Cape Canaveral Air Force Station
NASA Technical Reports Server (NTRS)
Crawford, Winifred; Roeder, William
2008-01-01
This conference abstract describes the development of a peak wind forecast tool to assist forecasters in determining the probability of violating launch commit criteria (LCC) at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) in east-central Florida. The peak winds are an important forecast element for both the Space Shuttle and Expendable Launch Vehicle (ELV) programs. The LCC define specific peak wind thresholds for each launch operation that cannot be exceeded in order to ensure the safety of the vehicle. The 45th Weather Squadron (45 WS) has found that peak winds are a challenging parameter to forecast, particularly in the cool season months of October through April. Based on the importance of forecasting peak winds, the 45 WS tasked the Applied Meteorology Unit (AMU) to develop a short-range peak-wind forecast tool to assist in forecasting LCC violations. The tool will include climatologies of the 5-minute mean and peak winds by month, hour, and direction, and probability distributions of the peak winds as a function of the 5-minute mean wind speeds.
NASA Technical Reports Server (NTRS)
Crawford, Winifred
2011-01-01
This final report describes the development of a peak wind forecast tool to assist forecasters in determining the probability of violating launch commit criteria (LCC) at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The peak winds are an important forecast element for both the Space Shuttle and Expendable Launch Vehicle (ELV) programs. The LCC define specific peak wind thresholds for each launch operation that cannot be exceeded in order to ensure the safety of the vehicle. The 45th Weather Squadron (45 WS) has found that peak winds are a challenging parameter to forecast, particularly in the cool season months of October through April. Based on the importance of forecasting peak winds, the 45 WS tasked the Applied Meteorology Unit (AMU) to update the statistics in the current peak-wind forecast tool to assist in forecasting LCC violations. The tool includes onshore and offshore flow climatologies of the 5-minute mean and peak winds and probability distributions of the peak winds as a function of the 5-minute mean wind speeds.
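The statistical core of such a tool can be sketched as an empirical conditional distribution: bin the observed 5-minute mean speeds and, within each bin, tabulate the exceedance probabilities of the associated peak speeds. A minimal sketch (bin width, units, and function names are our assumptions, not the AMU implementation):

```python
import numpy as np

def peak_given_mean(mean_wind, peak_wind, bin_width=2.0):
    """Empirical exceedance distribution of peak wind conditioned on
    binned mean wind speed. Inputs are 1-D arrays of paired observations."""
    bins = np.arange(0.0, mean_wind.max() + bin_width, bin_width)
    idx = np.digitize(mean_wind, bins)
    tables = {}
    for k in np.unique(idx):
        peaks = np.sort(peak_wind[idx == k])
        # P(peak > x) for each observed peak value x in this mean-wind bin.
        exceed = 1.0 - np.arange(1, peaks.size + 1) / peaks.size
        lo = bins[k - 1]
        tables[(lo, lo + bin_width)] = (peaks, exceed)
    return tables
```

A forecaster could then read off, for an observed or forecast mean speed, the probability of the peak exceeding a given LCC threshold.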
Adaptive density trajectory cluster based on time and space distance
NASA Astrophysics Data System (ADS)
Liu, Fagui; Zhang, Zhijie
2017-10-01
Several open problems remain in trajectory clustering for discovering regularities in mobile behavior, such as the computation of the distance between sub-trajectories, the setting of parameter values in the clustering algorithm, and the uncertainty/boundary problem of the data set. Accordingly, this paper defines a calculation method for the distance between sub-trajectories based on time and space. The significance of this distance calculation is that it clearly reveals the differences between moving trajectories and improves the accuracy of the clustering algorithm. Moreover, a novel adaptive density trajectory clustering algorithm is proposed, in which the cluster radius is computed from the density of the data distribution. In addition, cluster centers and their number are selected automatically by a specified strategy, and the uncertainty/boundary problem of the data set is solved by a designed weighted rough c-means. Experimental results demonstrate that the proposed algorithm can perform fuzzy trajectory clustering effectively on the basis of the time and space distance, and adaptively obtain optimal cluster centers and rich cluster information for mining the features of mobile behavior in mobile and social networks.
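The abstract does not give the exact formula; a plausible minimal version of a time-and-space distance between two sub-trajectories is a weighted blend of mean spatial separation and mean temporal offset (the weights and the equal-length resampling convention are our assumptions):

```python
import numpy as np

def st_distance(tr_a, tr_b, w_space=0.5, w_time=0.5):
    """Spatiotemporal distance between two sub-trajectories, each given
    as an (n, 3) array of rows (x, y, t) resampled to equal length n."""
    a, b = np.asarray(tr_a, float), np.asarray(tr_b, float)
    d_space = np.linalg.norm(a[:, :2] - b[:, :2], axis=1).mean()  # mean gap
    d_time = np.abs(a[:, 2] - b[:, 2]).mean()                     # mean offset
    return w_space * d_space + w_time * d_time
```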
NASA Technical Reports Server (NTRS)
Bloomfield, Harvey S.; Heller, Jack A.
1987-01-01
A preliminary feasibility assessment of the integration of reactor power system concepts with a projected growth space station architecture was conducted to address a variety of installation, operational disposition, and safety issues. A previous NASA sponsored study, which showed the advantages of space station - attached concepts, served as the basis for this study. A study methodology was defined and implemented to assess compatible combinations of reactor power installation concepts, disposal destinations, and propulsion methods. Three installation concepts that met a set of integration criteria were characterized from a configuration and operational viewpoint, with end-of-life disposal mass identified. Disposal destinations that met current aerospace nuclear safety criteria were identified and characterized from an operational and energy requirements viewpoint, with delta-V energy requirement as a key parameter. Chemical propulsion methods that met current and near-term application criteria were identified and payload mass and delta-V capabilities were characterized. These capabilities were matched against concept disposal mass and destination delta-V requirements to provide the feasibility of each combination.
Issues on 3D noncommutative electromagnetic duality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, Davi C.; Wotzasek, Clovis
We extend the ordinary 3D electromagnetic duality to the noncommutative (NC) space-time through a Seiberg-Witten map to second order in the noncommutativity parameter θ, defining a new scalar field model. There are similarities with the 4D NC duality; these are exploited to clarify properties of both cases. Up to second order in θ, we find that duality interchanges the 2-form θ with its 1-form Hodge dual *θ times the gauge coupling constant, i.e., θ → *θ g² (similar to the 4D NC electromagnetic duality). We directly prove that this property is false in the third order expansion in both 3D and 4D space-times, unless the slowly varying fields limit is imposed. Outside this limit, starting from the third order expansion, θ cannot be rescaled to attain an S-duality. In addition to possible applications on effective models, the 3D space-time is useful for studying general properties of NC theories. In particular, in this dimension, we deduce an expression that significantly simplifies the Seiberg-Witten mapped Lagrangian to all orders in θ.
A feasibility assessment of nuclear reactor power system concepts for the NASA Growth Space Station
NASA Technical Reports Server (NTRS)
Bloomfield, H. S.; Heller, J. A.
1986-01-01
A preliminary feasibility assessment of the integration of reactor power system concepts with a projected growth Space Station architecture was conducted to address a variety of installation, operational, disposition and safety issues. A previous NASA sponsored study, which showed the advantages of Space Station - attached concepts, served as the basis for this study. A study methodology was defined and implemented to assess compatible combinations of reactor power installation concepts, disposal destinations, and propulsion methods. Three installation concepts that met a set of integration criteria were characterized from a configuration and operational viewpoint, with end-of-life disposal mass identified. Disposal destinations that met current aerospace nuclear safety criteria were identified and characterized from an operational and energy requirements viewpoint, with delta-V energy requirement as a key parameter. Chemical propulsion methods that met current and near-term application criteria were identified and payload mass and delta-V capabilities were characterized. These capabilities were matched against concept disposal mass and destination delta-V requirements to provide the feasibility of each combination.
Study of power management technology for orbital multi-100KWe applications. Volume 2: Study results
NASA Technical Reports Server (NTRS)
Mildice, J. W.
1980-01-01
The preliminary requirements and technology advances required for cost-effective space power management systems for multi-100 kilowatt requirements were identified. System requirements were defined by establishing a baseline space platform in the 250 kWe range and examining typical user loads and interfaces. The most critical design parameters identified for detailed analysis include: increased distribution voltages and space plasma losses; the choice between ac and dc distribution systems; shuttle servicing effects on reliability; life cycle costs; and frequency impacts on the power management system and payload systems for ac transmission. The first choice for a power management system for this kind of application and size range is a hybrid ac/dc combination with the following major features: modular design and construction, sized for minimum weight/life cycle cost; high voltage transmission (≥ 100 Vac RMS); medium voltage array (≥ 440 Vdc); resonant inversion; transformer rotary joint; high frequency power transmission line (≥ 20 kHz); energy storage on the array side of the rotary joint; full redundancy; and 10-year life with minimal replacement and repair.
NASA Technical Reports Server (NTRS)
1981-01-01
Reasonable space systems concepts were systematically identified and defined, and a total system was evaluated for the space disposal of nuclear wastes. Areas studied include space destinations, space transportation options, launch site options, payload protection approaches, and payload rescue techniques. Systems-level cost and performance trades defined four alternative space systems which deliver payloads to the selected 0.85 AU heliocentric orbit destination at least as economically as the reference system, without requiring removal of the protective radiation shield container. No concepts significantly less costly than the reference concept were identified.
14 CFR Appendix A to Part 420 - Method for Defining a Flight Corridor
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Method for Defining a Flight Corridor A Appendix A to Part 420 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... represents the launch vehicle the applicant plans to support at its launch point; (ii) Select a debris...
14 CFR Appendix A to Part 420 - Method for Defining a Flight Corridor
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Method for Defining a Flight Corridor A Appendix A to Part 420 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... represents the launch vehicle the applicant plans to support at its launch point; (ii) Select a debris...
14 CFR Appendix A to Part 420 - Method for Defining a Flight Corridor
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Method for Defining a Flight Corridor A Appendix A to Part 420 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... represents the launch vehicle the applicant plans to support at its launch point; (ii) Select a debris...
Civil Navigation Signal Status
2015-04-29
8 of 15 CNAV message types defined, leading to pre-operational use beginning 28 Apr 2014; a live-sky event planned for fall 2015 would incorporate the Midi Almanac. (Residue of a message-type table; recoverable entries: text messages of 18 eight-bit ASCII characters; message type 37, SV clock correction parameters and Midi Almanac parameters; 15 defined message types in total.)
The space shuttle payload planning working groups. Volume 2: Atmospheric and space physics
NASA Technical Reports Server (NTRS)
1973-01-01
The findings of the Atmospheric and Space Physics working group of the space shuttle mission planning activity are presented. The principal objectives defined by the group are: (1) to investigate the detailed mechanisms which control the near-space environment of the earth, (2) to perform plasma physics investigations not feasible in ground-based laboratories, and (3) to conduct investigations which are important in understanding planetary and cometary phenomena. The core instrumentation and laboratory configurations for conducting the investigations are defined.
NASA Technical Reports Server (NTRS)
Chobotov, V. A.
1974-01-01
Control elements such as sensors, momentum exchange devices, and thrusters are described which can be used to define space replaceable units (SRUs), in accordance with the attitude control, guidance, and navigation performance requirements selected for NASA space serviceable mission spacecraft. A number of SRUs are developed, and their reliability block diagrams are presented. An SRU assignment is given in order to define a set of feasible space serviceable spacecraft for the missions of interest.
Space Communication and Navigation SDR Testbed, Overview and Opportunity for Experiments
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
2013-01-01
NASA has developed an experimental flight payload (referred to as the Space Communication and Navigation (SCAN) Test Bed) to investigate software defined radio (SDR) communications, networking, and navigation technologies, operationally in the space environment. The payload consists of three software defined radios, each compliant to NASA's Space Telecommunications Radio System Architecture, a common software interface description standard for software defined radios. The software defined radios are new technology developments underway by NASA and industry partners, launched in 2012. The payload is externally mounted to the International Space Station truss to conduct experiments representative of future mission capability. Experiment operations include in-flight reconfiguration of the SDR waveform functions and payload networking software. The flight system will communicate with NASA's orbiting satellite relay network, the Tracking and Data Relay Satellite System, at both S-band and Ka-band, and with any Earth-based compatible S-band ground station. The system is available for experiments by industry, academia, and other government agencies to participate in the SDR technology assessments and standards advancements.
NASA Astrophysics Data System (ADS)
Lisi, Mariano; Tramutoli, Valerio; Eleftheriou, Alexander; Filizzola, Carolina; Genzano, Nicola; Lacava, Teodosio; Paciello, Rossana; Pergola, Nicola; Vallianatos, Filippos
2017-04-01
Real-time integration of independent observations is expected to significantly improve our present capability to dynamically assess seismic hazard. Specific observations (e.g., an anomaly in one parameter) can be used as a trigger (and/or to establish space/time constraints) for activating the analysis of other independent parameters (e.g., b-value computation, or Natural Time Analysis of seismic data) whose systematic computation could otherwise be computationally very expensive or operationally impossible. In the present paper one of these parameters (the Earth's emitted radiation in the Thermal Infra-Red spectral region) has been used to activate the application of Natural Time Analysis of seismic data, in order to verify possible improvements in the forecast of earthquakes (with M≥4) that occurred in Greece during 2004-2013. The RST (Robust Satellite Technique) data analysis approach and the RETIRA (Robust Estimator of TIR Anomalies) index were used to preliminarily define, and then to identify, Significant Sequences of TIR Anomalies (SSTAs) in 10 years (2004-2013) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite. A previous paper showed that in the same period more than 93% of all identified SSTAs occurred in a pre-fixed space-time window around earthquake times (30 days before up to 15 days after) and epicenters (within 150 km, or the Dobrovolsky distance), with a false positive rate smaller than 7%. In this paper a circular area around the barycenter of the observed thermal anomalies (and not just their convolution) has been used to define the area from which to collect the seismic data required for Natural Time Analysis. Fifteen days prior to the date of the first observed Significant Thermal Anomaly (STA) was the starting time used for collecting earthquakes from the catalog. The changes in the quality of earthquake forecasts achieved by using each individual parameter in different configurations, as well as the improvements emerging from their joint use, are presented with reference to the 10-year study period and to several recent events that occurred in Greece.
Long-range interacting systems in the unconstrained ensemble.
Latella, Ivan; Pérez-Madrid, Agustín; Campa, Alessandro; Casetti, Lapo; Ruffo, Stefano
2017-01-01
Completely open systems can exchange heat, work, and matter with the environment. While energy, volume, and number of particles fluctuate under completely open conditions, the equilibrium states of the system, if they exist, can be specified using the temperature, pressure, and chemical potential as control parameters. The unconstrained ensemble is the statistical ensemble describing completely open systems and the replica energy is the appropriate free energy for these control parameters from which the thermodynamics must be derived. It turns out that macroscopic systems with short-range interactions cannot attain equilibrium configurations in the unconstrained ensemble, since temperature, pressure, and chemical potential cannot be taken as a set of independent variables in this case. In contrast, we show that systems with long-range interactions can reach states of thermodynamic equilibrium in the unconstrained ensemble. To illustrate this fact, we consider a modification of the Thirring model and compare the unconstrained ensemble with the canonical and grand-canonical ones: The more the ensemble is constrained by fixing the volume or number of particles, the larger the space of parameters defining the equilibrium configurations.
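For reference, the replica energy mentioned above is the free energy whose natural variables are all intensive; in the notation of Hill's nanothermodynamics (a standard definition, not spelled out in the abstract):

```latex
\mathscr{E} = E - TS + PV - \mu N, \qquad
d\mathscr{E} = -S\,dT + V\,dP - N\,d\mu .
```

For macroscopic short-range systems the Euler relation E = TS - PV + μN forces 𝓔 = 0, which is why (T, P, μ) cannot serve as independent control parameters there, while the non-additivity of long-range interactions allows 𝓔 ≠ 0.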
NASA Astrophysics Data System (ADS)
Gałęcki, Krystian; Kowalska-Baron, Agnieszka
2016-12-01
In this study, the influence of heavy-atom perturbation, induced by the addition of iodide ions, on the fluorescence and phosphorescence decay parameters of some single-tryptophan-containing serum albumins isolated from human (HSA), equine (ESA) and leporine (LSA) sources has been examined. The obtained results indicated that there exist two distinct conformations of the proteins with different exposure to the quencher. In addition, the Stern-Volmer plots indicated saturation of iodide ions in the binding region. Therefore, to determine the quenching parameter, we proposed an alternative quenching model and performed a global analysis of each conformer to define the effect of iodide ions in the cavity by determining the value of the association constant. The possible quenching mechanism may be based on long-range through-space interactions between the buried chromophore and the quencher in the aqueous phase. The discrepancies in the decay parameters between the albumins studied may be related to the accumulation of positive charge at the main and back entrances to Drug Site 1, where the tryptophan residue is located.
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
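The online interpolation step can be sketched for symmetric positive definite (SPD) reduced operators: map the precomputed operators to the tangent space of the SPD manifold at a reference point, interpolate entry-wise there, and map back. A minimal sketch under that SPD assumption (function and variable names are ours; the paper's framework also handles the consistency transformations and other matrix manifolds):

```python
import numpy as np
from scipy.linalg import logm, expm, sqrtm, inv

def interpolate_rom_operator(params, ops, p_query, ref=0):
    """Interpolate SPD reduced operators on the matrix manifold.

    params : increasing 1-D array of sampled parameter values
    ops    : list of precomputed n x n SPD reduced operators
    p_query: queried (unsampled) parameter value
    ref    : index of the reference operator defining the tangent space
    """
    R = np.real(sqrtm(ops[ref]))   # reference-point factor
    Ri = inv(R)
    # Logarithm map: congruence-transform each operator, then matrix log.
    gammas = [np.real(logm(Ri @ A @ Ri)) for A in ops]
    # Entry-wise 1-D interpolation in the tangent space.
    G = np.zeros_like(gammas[0])
    n = G.shape[0]
    for i in range(n):
        for j in range(n):
            G[i, j] = np.interp(p_query, params, [g[i, j] for g in gammas])
    # Exponential map back to the SPD manifold.
    return R @ expm(G) @ R
```

Interpolating in the tangent space rather than entry-wise on the operators themselves is what preserves definiteness, and hence stability, of the interpolated ROM.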
NASA Astrophysics Data System (ADS)
Kalanov, Temur Z.
2003-04-01
A new theory of space is suggested. It represents a new point of view arising from a critical analysis of the foundations of physics (in particular the theory of relativity and quantum mechanics), mathematics, cosmology and philosophy. The main idea following from the analysis is that the concept of movement represents a key to understanding the essence of space. The starting point of the theory is the following philosophical (dialectical materialistic) principles. (a) The principle of the materiality (of the objective reality) of Nature: Nature (the Universe) is a system (a set) of material objects (particles, bodies, fields); each object has properties and features, and these properties and features are inseparable characteristics of the material object and belong only to the material object. (b) The principle of the existence of a material object: an object exists as objective reality, and movement is a form of existence of the object. (c) The principle (definition) of movement of an object: movement is change (i.e., the transition of some states into others) in general; movement determines a direction, and direction characterizes movement. (d) The principle of the existence of time: time exists as a parameter of the system of reference. These principles lead to the following statements expressing the essence of space. (1) There is no space in general; space exists only as a form of existence of the properties and features of an object. This means that space is a set of the measures of the object (a measure is the philosophical category meaning the unity of the qualitative and quantitative determinacy of the object). In other words, the space of the object is the set of the states of the object. (2) The states of the object are manifested only in a system of reference. The main informational property of the unitary system "researched physical object + system of reference" is that the system of reference determines (measures, calculates) the parameters of the subsystem "researched physical object" (for example, the coordinates of the object M); the parameters characterize the system of reference (for example, the system of coordinates S). (3) Each parameter of the object is a measure of it. The total number of mutually independent parameters of the object is called the dimension of the space of the object. (4) The set of numerical values (i.e., the range, the spectrum) of each parameter is a subspace of the object. (The coordinate space, the momentum space and the energy space are examples of subspaces of the object.) (5) The set of the parameters of the object is divided into two non-intersecting (opposite) classes: the class of internal parameters and the class of non-internal (i.e., external) parameters. The class of external parameters is divided into two non-intersecting (opposite) subclasses: the subclass of absolute parameters (characterizing the form and sizes of the object) and the subclass of non-absolute (relative) parameters (characterizing the position and coordinates of the object). (6) The set of external parameters forms the external space of the object, called the geometrical space of the object. (7) Since a macroscopic object has three mutually independent sizes, the dimension of its external absolute space is equal to three. Consequently, the dimension of its external relative space is also equal to three. Thus, the total dimension of the external space of the macroscopic object is equal to six. (8) In the general case, the external absolute space (i.e., the form and sizes) and the external relative space (i.e., the position and coordinates) of an object are mutually dependent because of the influence of a medium. The geometrical space of such an object is called a non-Euclidean space. If the external absolute space and the external relative space of some object are mutually independent, then the external relative space of that object is a homogeneous and isotropic geometrical space, called the Euclidean space of the object. Consequences: (i) the question of the true geometry of the Universe is incorrect; (ii) the theory of relativity has no physical meaning.
Novel methods for parameter-based analysis of myocardial tissue in MR images
NASA Astrophysics Data System (ADS)
Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.
2007-03-01
The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment model proposed by the American Heart Association (AHA). As this simplification comes with a considerable loss of information, our purpose is to provide methods for a more accurate analysis regarding topological and functional tissue features. To achieve this, we implemented registration methods for the motion correction of the perfusion sequence and for matching the late enhancement information onto the perfusion image and vice versa. For the motion-corrected perfusion sequence, vector images containing the semi-quantitative parameters of the voxel enhancement curves are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For the exploration of the data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, the inspection of regions segmented in parameter space by user-defined threshold intervals, and the topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.
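Semi-quantitative parameters of a voxel enhancement curve typically include peak enhancement, time to peak, maximum upslope, and area under the curve; a minimal per-voxel sketch (the paper's exact parameter set and baseline convention are not specified, so these choices are assumptions):

```python
import numpy as np

def semi_quantitative_params(curve, t, n_baseline=3):
    """Semi-quantitative parameters of one voxel's contrast enhancement
    curve. curve: signal intensities; t: acquisition times (same length)."""
    curve, t = np.asarray(curve, float), np.asarray(t, float)
    baseline = curve[:n_baseline].mean()     # pre-contrast signal level
    enh = curve - baseline                   # relative enhancement
    return {
        "peak": enh.max(),                          # peak enhancement
        "time_to_peak": t[enh.argmax()],            # time of the peak
        "max_upslope": np.gradient(enh, t).max(),   # steepest wash-in
        "auc": np.trapz(enh, t),                    # area under the curve
    }
```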
NASA Astrophysics Data System (ADS)
Atanasov, Victor
2017-07-01
We extend the superconductor's free energy to include an interaction of the order parameter with the curvature of space-time. This interaction leads to a geometry-dependent coherence length and Ginzburg-Landau parameter, which suggests that the curvature of space-time can change the superconductor's type. The curvature of space-time does not affect the ideal diamagnetism of the superconductor but acts as a chemical potential. In a particular circumstance, the geometric field becomes order-parameter dependent; therefore the superconductor's order-parameter dynamics affects the curvature of space-time, and electrical or internal quantum mechanical energy can be channelled into the curvature of space-time.
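The abstract does not give the explicit functional; a coupling of the type below (with strength η and Ricci scalar R) illustrates how curvature can enter the Ginzburg-Landau free-energy density. The form is our illustrative assumption, not the paper's:

```latex
f = \alpha\,|\psi|^{2} + \frac{\beta}{2}\,|\psi|^{4}
    + \frac{1}{2m^{*}} \left| \left( -i\hbar\nabla - \frac{e^{*}}{c}\mathbf{A} \right) \psi \right|^{2}
    + \eta\, R\, |\psi|^{2} ,
```

With this choice the effective quadratic coefficient α_eff = α + ηR acts like a chemical-potential shift and makes the coherence length ξ ∝ |α_eff|^(-1/2) geometry dependent; the abstract's geometry-dependent κ suggests the actual coupling also affects other terms of the functional.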
Study of constrained minimal supersymmetry
NASA Astrophysics Data System (ADS)
Kane, G. L.; Kolda, Chris; Roszkowski, Leszek; Wells, James D.
1994-06-01
Taking seriously the phenomenological indications for supersymmetry, we have made a detailed study of unified minimal SUSY, including many effects at the few-percent level in a consistent fashion. We report here a general analysis of what can be studied without choosing a particular gauge group at the unification scale. Firstly, we find that the encouraging SUSY unification results of recent years do survive the challenge of a more complete and accurate analysis. Taking into account effects at the 5-10% level leads to several improvements of previous results and allows us to sharpen our predictions for SUSY in the light of unification. We perform a thorough study of the parameter space and look for patterns to indicate SUSY predictions, so that they do not depend on arbitrary choices of some parameters or untested assumptions. Our results can be viewed as a fully constrained minimal SUSY standard model. The resulting model forms a well-defined basis for comparing the physics potential of different facilities. Very little of the acceptable parameter space has been excluded by CERN LEP or Fermilab so far, but a significant fraction can be covered when these accelerators are upgraded. A number of initial applications to the understanding of the values of m_h and m_t, the SUSY spectrum, detectability of SUSY at LEP II or Fermilab, B(b → sγ), Γ(Z → b b̄), dark matter, etc., are included in a separate section that might be of more interest to some readers than the technical aspects of model building. We formulate an approach to extracting SUSY parameters from data when superpartners are detected. For small tan β or large m_t, both m_{1/2} and m_0 are entirely bounded from above at ~1 TeV without having to use a fine-tuning constraint.
Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph
2015-05-22
When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by the means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved.
Space station systems analysis study. Part 3: Documentation. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1977-01-01
The space station systems analysis study is summarized. A cost-effective system concept capable of meeting a broad spectrum of mission requirements was developed. Candidate objectives were reviewed and implementation requirements were defined. Program options for both low Earth and geosynchronous orbits were examined. Space construction concepts were analyzed and defined in detail.
The Autistic Dialogic Style: A Case of Asperger's Syndrome
ERIC Educational Resources Information Center
Fonseca, Vera Regina J. R. M.
2009-01-01
In a former study (Fonseca and Bussab, 2006, "Self, other and dialogical space in autistic states", "International Journal of Psycho-Analysis", 87:1-16), the author hypothesised that in autistic disorders there is a distortion in the construction of what she defined as dialogic space. Such a space, in which self and other define each other…
Chaikuad, Apirat; Knapp, Stefan; von Delft, Frank
2015-01-01
The quest for an optimal limited set of effective crystallization conditions remains a challenge in macromolecular crystallography, an issue that is complicated by the large number of chemicals which have been deemed to be suitable for promoting crystal growth. The lack of rational approaches towards the selection of successful chemical space and representative combinations has led to significant overlapping conditions, which are currently present in a multitude of commercially available crystallization screens. Here, an alternative approach to the sampling of widely used PEG precipitants is suggested through the use of PEG smears, which are mixtures of different PEGs with a requirement of either neutral or cooperatively positive effects of each component on crystal growth. Four newly defined smears were classified by molecular-weight groups and enabled the preservation of specific properties related to different polymer sizes. These smears not only allowed a wide coverage of properties of these polymers, but also reduced PEG variables, enabling greater sampling of other parameters such as buffers and additives. The efficiency of the smear-based screens was evaluated on more than 220 diverse recombinant human proteins, which overall revealed a good initial crystallization success rate of nearly 50%. In addition, in several cases successful crystallizations were only obtained using PEG smears, while various commercial screens failed to yield crystals. The defined smears therefore offer an alternative approach towards PEG sampling, which will benefit the design of crystallization screens sampling a wide chemical space of this key precipitant.
A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application Specific Integrated Circuit (ASIC) design for space applications involves the multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system, because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way which facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for the simulation test benches that determine performance metrics and for cost function computation. Templates provide a starting point for both, while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the new process-specific device models. The system has been used in the design of time-to-digital converters for laser ranging and time-of-flight mass spectrometry, to optimize analog, mixed-signal and digital circuits such as charge sensitive amplifiers, comparators, delay elements, radiation-tolerant dual-interlocked (DICE) flip-flops, and two-of-three voter gates.
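The automation the passage calls for can be sketched as a corner-aware parameter sweep with a penalty-style cost function: each candidate design is scored by its worst corner. Everything here (parameter names, corner list, the toy simulate() stand-in) is hypothetical; a real flow would invoke the circuit simulator from simulate():

```python
import itertools, math

PARAMS = {"w_input_um": [2.0, 4.0, 8.0],       # hypothetical device width
          "i_bias_uA": [10.0, 20.0, 40.0]}     # hypothetical bias current
CORNERS = ["tt_27C", "ss_125C", "ff_-55C"]     # process/temperature corners

def simulate(design, corner):
    """Toy stand-in for a simulator run; replace with real SPICE calls."""
    derate = {"tt_27C": 1.0, "ss_125C": 1.3, "ff_-55C": 0.8}[corner]
    delay = derate * 200.0 / (design["w_input_um"] * design["i_bias_uA"])
    power = 0.05 * design["w_input_um"] * design["i_bias_uA"]
    return {"delay_ns": delay, "power_mw": power}

def cost(metrics, max_delay_ns=5.0):
    """Hard constraint on delay plus a power-minimization goal."""
    penalty = 1e6 * max(0.0, metrics["delay_ns"] - max_delay_ns)
    return penalty + metrics["power_mw"]

def optimize():
    best = (math.inf, None)
    for values in itertools.product(*PARAMS.values()):
        design = dict(zip(PARAMS, values))
        # The worst case over all corners defines the design's score.
        worst = max(cost(simulate(design, c)) for c in CORNERS)
        best = min(best, (worst, design), key=lambda t: t[0])
    return best

print(optimize())
```

Scoring by the worst corner is what makes the result robust: a design only wins if it meets the constraints everywhere in the process/environment envelope.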
Space Telecommunications Radio Architecture (STRS)
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
2006-01-01
A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide Standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects are considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass and power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, the industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and a brief overview of a candidate architecture under consideration for space-based platforms.
Space Telecommunications Radio Architecture (STRS): Technical Overview
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
2006-01-01
A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide Standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects are considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass and power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, the industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and a brief overview of a candidate architecture under consideration for space-based platforms.
NASA's SDR Standard: Space Telecommunications Radio System
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Johnson, Sandra K.
2007-01-01
A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide Standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects are considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass and power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, the industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and a brief overview of a candidate architecture under consideration for space-based platforms.
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large-magnitude earthquakes, which can be defined based on seismic history and seismotectonics, and incorporates information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by Earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green function, and are often poorly constrained by the available observations. NDSHA defines the hazard as the envelope of the values of ground motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for the magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values for each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error can therefore be a factor of 2, intrinsic in the MCS scale. We tested this hypothesis by analysing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trott, Cathryn M.; Tingay, Steven J.; Wayth, Randall B.
2013-04-10
We define a framework for determining constraints on the detection rate of fast transient events from a population of underlying sources, with a view to incorporating beam shape, frequency effects, scattering effects, and detection efficiency into the metric. We then demonstrate a method for combining independent data sets into a single event rate constraint diagram, using a probabilistic approach to the limits on parameter space. We apply this new framework to present the latest results from the V-FASTR experiment, a commensal fast transients search using the Very Long Baseline Array (VLBA). In the 20 cm band, V-FASTR now has the ability to probe the regions of parameter space of importance for the observed Lorimer and Keane fast radio transient candidates by combining the information from observations with differing bandwidths, and properly accounting for the source dispersion measure, VLBA antenna beam shape, experiment time sampling, and stochastic nature of events. We then apply the framework to combine the results of the V-FASTR and Allen Telescope Array Fly's Eye experiments, demonstrating their complementarity. Expectations for fast transients experiments for the SKA Phase I dish array are then computed, and the impact of large differential bandwidths is discussed.
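Stripped of the beam-shape and efficiency corrections, combining independent null results reduces to Poisson statistics on the summed exposure. A minimal sketch of that core idea (the paper's full framework weights each experiment's sensitivity, which this toy ignores):

```python
import math

def rate_upper_limit(exposures, confidence=0.95):
    """Upper limit on a transient event rate from independent surveys
    with zero detections. exposures: list of (fov_deg2, hours) pairs."""
    total = sum(fov * hours for fov, hours in exposures)   # deg^2 * hr
    # Zero-detection Poisson limit: P(0 | rate * total) = 1 - confidence.
    return -math.log(1.0 - confidence) / total             # events / (deg^2 hr)

# E.g., combining a VLBI-style survey with a wide-field one (made-up numbers):
print(rate_upper_limit([(0.5, 1000.0), (100.0, 450.0)]))
```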
Multi-Mission Earth Vehicle Subsonic Dynamic Stability Testing and Analyses
NASA Technical Reports Server (NTRS)
Glaab, Louis J.; Fremaux, C. Michael
2013-01-01
Multi-Mission Earth Entry Vehicles (MMEEVs) are blunt-body vehicles designed with the purpose of transporting payloads from outer space to the surface of the Earth. To achieve high reliability and minimum weight, MMEEVs avoid use of limited-reliability systems, such as parachutes, retro-rockets, and reaction control systems, and rely on the natural aerodynamic stability of the vehicle throughout the Entry, Descent, and Landing (EDL) phase of flight. The Multi-Mission Systems Analysis for Planetary Entry (M-SAPE) parametric design tool is used to facilitate the design of MMEEVs for an array of missions and develop and visualize the trade space. Testing in NASA Langley's Vertical Spin Tunnel (VST) was conducted to significantly improve M-SAPE's subsonic aerodynamic models. Vehicle size and shape can be driven by entry flight path angle and speed, thermal protection system performance, terminal velocity limitations, payload mass and density, among other design parameters. The objectives of the VST testing were to define usable subsonic center of gravity limits, and aerodynamic parameters for 6-degree-of-freedom (6-DOF) simulations, for a range of MMEEV designs. The range of MMEEVs tested was from 1.8 m down to 1.2 m in diameter. A backshell extender provided the ability to test a design with a much larger payload for the 1.2 m MMEEV.
Pini, Núbia Inocencya Pavesi; Marchi, Luciana Manzotti De; Pascotto, Renata Corrêa
2015-01-01
Maxillary lateral incisor agenesis (MLIA) is a condition that affects both dental esthetics and function in young patients, and represents an important challenge for clinicians. Although several treatment options are available, the mesial repositioning of the canines followed by teeth recontouring into lateral incisors; or space opening/maintenance followed by implant placement have recently emerged as two important treatment approaches. In this article, the current and latest literature has been reviewed in order to summarize the functional and esthetic outcomes obtained with these two forms of treatment of MLIA patients in recent years. Indications, clinical limitations and the most important parameters to achieve the best possible results with each treatment modality are also discussed. Within the limitations of this review, it is not possible to assert at this point in time that one treatment approach is more advantageous than the other. Long-term followup studies comparing the existing treatment options are still lacking in the literature, and they are necessary to shed some light on the issue. It is possible, however, to state that adequate multidisciplinary diagnosis and planning are imperative to define the treatment option that will provide the best individual results for patients with MLIA.
Integrability and Linear Stability of Nonlinear Waves
NASA Astrophysics Data System (ADS)
Degasperis, Antonio; Lombardo, Sara; Sommacal, Matteo
2018-03-01
It is well known that the linear stability of solutions of 1+1 partial differential equations which are integrable can be very efficiently investigated by means of spectral methods. We present here a direct construction of the eigenmodes of the linearized equation which makes use only of the associated Lax pair with no reference to spectral data and boundary conditions. This local construction is given in the general N × N matrix scheme so as to be applicable to a large class of integrable equations, including the multicomponent nonlinear Schrödinger system and the multiwave resonant interaction system. The analytical and numerical computations involved in this general approach are detailed as an example for N=3 for the particular system of two coupled nonlinear Schrödinger equations in the defocusing, focusing and mixed regimes. The instabilities of the continuous wave solutions are fully discussed in the entire parameter space of their amplitudes and wave numbers. By defining and computing the spectrum in the complex plane of the spectral variable, the eigenfrequencies are explicitly expressed. According to their topological properties, the complete classification of these spectra in the parameter space is presented and graphically displayed. The continuous wave solutions are linearly unstable for a generic choice of the coupling constants.
Mars rover/sample return mission requirements affecting space station
NASA Technical Reports Server (NTRS)
1988-01-01
The possible interfaces between the Space Station and the Mars Rover/Sample Return (MRSR) mission are defined. In order to constrain the scope of the report, a series of seven design reference missions, divided into three major types, was assumed. These missions were defined to span the probable range of Space Station-MRSR interactions. After the options were reduced, the MRSR sample handling requirements, baseline assumptions about the MRSR hardware, and the key design features and requirements of the Space Station are summarized. Only the aspects of the design reference missions necessary to define the interfaces, hooks and scars, and other provisions on the Space Station are considered. An analysis of each of the three major design reference missions is reported, presenting conceptual designs of key hardware to be mounted on the Space Station, a definition of weights, interfaces, and required hooks and scars.
Nelson, Danielle V; Klinck, Holger; Carbaugh-Rutland, Alexander; Mathis, Codey L; Morzillo, Anita T; Garcia, Tiffany S
2017-01-01
Loss of acoustic habitat due to anthropogenic noise is a key environmental stressor for vocal amphibian species, a taxonomic group that is experiencing global population declines. The Pacific chorus frog ( Pseudacris regilla ) is the most common vocal species of the Pacific Northwest and can occupy human-dominated habitat types, including agricultural and urban wetlands. This species is exposed to anthropogenic noise, which can interfere with vocalizations during the breeding season. We hypothesized that Pacific chorus frogs would alter the spatial and temporal structure of their breeding vocalizations in response to road noise, a widespread anthropogenic stressor. We compared Pacific chorus frog call structure and ambient road noise levels along a gradient of road noise exposures in the Willamette Valley, Oregon, USA. We used both passive acoustic monitoring and directional recordings to determine source level (i.e., amplitude or volume), dominant frequency (i.e., pitch), call duration, and call rate of individual frogs and to quantify ambient road noise levels. Pacific chorus frogs were unable to change their vocalizations to compensate for road noise. A model of the active space and time ("spatiotemporal communication") over which a Pacific chorus frog vocalization could be heard revealed that in high-noise habitats, spatiotemporal communication was drastically reduced for an individual. This may have implications for the reproductive success of this species, which relies on specific call repertoires to portray relative fitness and attract mates. Using the acoustic call parameters defined by this study (frequency, source level, call rate, and call duration), we developed a simplified model of acoustic communication space-time for this species. This model can be used in combination with models that determine the insertion loss for various acoustic barriers to define the impact of anthropogenic noise on the radius of communication in threatened species. Additionally, this model can be applied to other vocal taxonomic groups provided the necessary acoustic parameters are determined, including the frequency parameters and perception thresholds. Reduction in acoustic habitat by anthropogenic noise may emerge as a compounding environmental stressor for an already sensitive taxonomic group.
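A minimal sketch of the active-space idea underlying the communication model described above, assuming simple spherical spreading and a fixed signal-to-noise detection threshold; the source and noise levels, threshold, and reference distance below are illustrative stand-ins, not the calibrated parameters of the study.

```python
def active_space_radius(source_level_db, noise_level_db,
                        detection_threshold_db=3.0, ref_dist_m=0.5):
    """Distance at which the received level (source level minus
    20*log10 spherical spreading loss) just equals noise + threshold."""
    excess_db = source_level_db - noise_level_db - detection_threshold_db
    return ref_dist_m * 10 ** (excess_db / 20.0)

quiet = active_space_radius(90, 40)   # low-noise wetland (illustrative)
noisy = active_space_radius(90, 65)   # near a busy road (illustrative)
print(f"radius shrinks from {quiet:.0f} m to {noisy:.0f} m "
      f"({100 * (1 - noisy / quiet):.0f}% loss)")
```

Because the radius scales as 10^(excess/20), every 20 dB of added road noise cuts the communication radius by a factor of ten, which is why the modeled spatiotemporal communication collapses so quickly in high-noise habitats.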
NASA Astrophysics Data System (ADS)
Eleftheriou, Alexander; Filizzola, Carolina; Genzano, Nicola; Lacava, Teodosio; Lisi, Mariano; Paciello, Rossana; Pergola, Nicola; Vallianatos, Filippos; Tramutoli, Valerio
2016-01-01
Real-time integration of multi-parametric observations is expected to accelerate the process toward improved, and operationally more effective, systems for time-Dependent Assessment of Seismic Hazard (t-DASH) and earthquake short-term (from days to weeks) forecast. However, a very preliminary step in this direction is the identification of those parameters (chemical, physical, biological, etc.) whose anomalous variations can be, to some extent, associated with the complex process of preparation for major earthquakes. In this paper one of these parameters (the Earth's emitted radiation in the Thermal InfraRed spectral region) is considered for its possible correlation with M ≥ 4 earthquakes that occurred in Greece between 2004 and 2013. The Robust Satellite Technique (RST) data analysis approach and the Robust Estimator of TIR Anomalies (RETIRA) index were used to preliminarily define, and then to identify, significant sequences of TIR anomalies (SSTAs) in 10 years (2004-2013) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager on board the Meteosat Second Generation satellite. Taking into account the physical models proposed for justifying the existence of a correlation among TIR anomalies and earthquake occurrences, specific validation rules (in line with the ones used by the Collaboratory for the Study of Earthquake Predictability—CSEP—Project) have been defined to drive a retrospective correlation analysis process. The analysis shows that more than 93% of all identified SSTAs occur in the prefixed space-time window around the time and location of occurrence of (M ≥ 4) earthquakes, with a false positive rate smaller than 7%. Molchan error diagram analysis shows that such a correlation is far from being achievable by chance, notwithstanding the huge number of missed events due to frequent space/time data gaps produced by the presence of clouds over the scene. Achieved results, and particularly the very low rate of false positives registered over such a long testing period, seem already sufficient (at least) to qualify TIR anomalies (identified by the RST approach and RETIRA index) among the parameters to be considered in the framework of a multi-parametric approach to t-DASH.
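A sketch of the RETIRA-style anomaly index as commonly defined in the RST literature: the local TIR excess over the scene average is compared with the pixel's multi-year reference mean and standard deviation. The synthetic image stack and the 2-sigma cut are placeholders, not the SEVIRI data or the thresholds used in the study.

```python
import numpy as np

def retira_index(tir_stack):
    """tir_stack: array (n_years, ny, nx) of co-located TIR images
    acquired in the same calendar date/time slot across years."""
    scene_mean = np.nanmean(tir_stack, axis=(1, 2), keepdims=True)
    dT = tir_stack - scene_mean            # remove year-to-year offsets
    mu = np.nanmean(dT, axis=0)            # pixel reference mean
    sigma = np.nanstd(dT, axis=0)          # pixel reference variability
    return (dT - mu) / sigma               # anomaly in sigma units

rng = np.random.default_rng(0)
stack = rng.normal(290, 2, size=(10, 64, 64))   # 10 years, synthetic
anomalies = retira_index(stack)
print("pixels above 2-sigma in the last image:",
      int(np.sum(anomalies[-1] > 2)))
```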
NASA Astrophysics Data System (ADS)
Eladj, Said; bansir, fateh; ouadfeul, sid Ali
2016-04-01
The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosome chains are preferentially reproduced based on their fitness relative to the total population, so a good chromosome has a greater opportunity to produce offspring than other chromosomes in the population. The advantage of the HGA/SAA combination is the use of a global search approach over a large population of local maxima, which significantly improves the performance of the method. To define the parameters of the Hybrid Genetic Algorithm/Steepest Ascent Auto Statics (HGA/SAA) job, we first evaluated, by testing the "Steepest Ascent" stage, the optimal parameters related to the data used: (1) the number of hill-climbing iterations, equal to 40, which defines the participation of the "SA" algorithm in this hybrid approach; and (2) the minimum eigenvalue for SA, equal to 0.8, which is linked to the quality of the data and the S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations for HGA/SAA. Using the values of residual static corrections already calculated by the "SAA" and "CSAA" approaches, learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. The application to real seismic data from southern Algeria allowed us to judge the performance and capacity of the inversion with this hybrid "HGA/SAA" method. This experience clarified the influence of the quality of the corrections estimated from "SAA/CSAA" and the optimal number of generations of the hybrid genetic algorithm "HGA" required for satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons, which will allow a more accurate structural interpretation. Key words: Hybrid Genetic Algorithm, number of generations, model space, local maxima, number of hill climbing iterations, minimum eigenvalue, cross-correlation table
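The workflow above (global GA search plus a fixed budget of hill-climbing iterations per candidate) can be illustrated generically. The sketch below is not the authors' residual-statics code: the fitness surrogate, shift ranges, and population size are hypothetical, with the 40 hill-climbing iterations and 20 generations echoing the values quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n_traces, max_shift = 24, 8
true_shifts = rng.integers(-max_shift, max_shift + 1, n_traces)

def fitness(shifts):
    # Toy surrogate for stack power: reward closeness to the true statics.
    return -np.sum((shifts - true_shifts) ** 2)

def hill_climb(ind, n_iter=40):                  # "hill climbing iterations"
    for _ in range(n_iter):
        i = rng.integers(n_traces)
        trial = ind.copy()
        trial[i] = np.clip(trial[i] + rng.choice([-1, 1]), -max_shift, max_shift)
        if fitness(trial) > fitness(ind):
            ind = trial
    return ind

pop = [rng.integers(-max_shift, max_shift + 1, n_traces) for _ in range(30)]
for generation in range(20):                     # 20 generations sufficed above
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                           # elitist selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = rng.choice(len(parents), 2, replace=False)
        cut = rng.integers(1, n_traces)          # one-point crossover
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        children.append(hill_climb(child))       # local SA-style refinement
    pop = parents + children
print("best fitness after 20 generations:", fitness(max(pop, key=fitness)))
```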
EXPOSE-R on Mission on the ISS
NASA Astrophysics Data System (ADS)
Panitz, Corinna; Rabbow, Elke; Rettberg, Petra; Barczyk, Simon; Kloss, Maria; Reitz, Guenther
Currently EXPOSE-R is on mission! This astrobiological exposure facility was accommodated at the universal workplace URM-D Zenith payload site, located outside the Russian Zvezda Module of the International Space Station (ISS), by extravehicular activity (EVA) on March 10th 2009. It contains 3 trays accommodating 12 sample compartments with sample carriers in three levels, either open to space vacuum or kept in a defined gas environment. In its 8 experiments of biological and chemical content, more than 1200 individual samples are exposed to solar ultraviolet (UV) radiation, vacuum, cosmic rays or extreme temperature variations. In their different experiments, the involved scientists are studying the question of life's origin on Earth, and the results of their experiments contribute to different aspects of the evolution and distribution of life in the Universe. Additionally integrated into the EXPOSE-R facility are several dosimeters monitoring the ionising and solar UV radiation during the mission, delivering useful information to complement the sample analysis. In close cooperation with the DLR and the Technical University Munich (TUM), the Rheinisch-Westfälische Technische Hochschule Aachen (RWTH Aachen) operates the experiment "Spores". This is one of the 6 astrobiological experiments of the ROSE Consortium (Response of Organisms to Space Environment) of the EXPOSE-R mission. In these experiments, spores of bacteria, fungi and ferns are being overlayered or mixed with meteorite material. The analysis of the effect of the space parameters on different biological endpoints of the spores of the microorganism Bacillus subtilis will be performed after the retrieval of the experiment, scheduled for the end of 2010. Parallel to the space mission, an identical set of samples was accommodated into trays identical in construction to the EXPOSE-R flight hardware to perform the Mission Ground Reference (MGR) Test. Currently this MGR Test is carried out in the Planetary and Space Simulation Facilities (PSI) of DLR, Cologne, where the space parameters (vacuum, temperature and extraterrestrial UV radiation) as delivered from the ISS are simulated. An overview of the EXPOSE-R mission, from the Experiment Verification Test (EVT) program to the flight sample preparation, is presented.
Tokamak with liquid metal for inducing toroidal electrical field
Ohkawa, Tihiro
1981-01-01
A tokamak apparatus includes a vessel for defining a reservoir and confining liquid therein. A toroidal liner disposed within said vessel defines a toroidal space and confines gas therein. Liquid metal fills the reservoir outside the liner. A magnetic field is established in the liquid metal to develop magnetic flux linking the toroidal space. The gas is ionized. The liquid metal and the toroidal space are moved relative to one another transversely of the space to generate electric current in the ionized gas in the toroidal space about its major axis and thereby heat plasma developed in the toroidal space.
Kim, Hyoungkyu; Hudetz, Anthony G.; Lee, Joseph; Mashour, George A.; Lee, UnCheol; Avidan, Michael S.
2018-01-01
The integrated information theory (IIT) proposes a quantitative measure, denoted as Φ, of the amount of integrated information in a physical system, which is postulated to have an identity relationship with consciousness. IIT predicts that the value of Φ estimated from brain activities represents the level of consciousness across phylogeny and functional states. Practical limitations, such as the explosive computational demands required to estimate Φ for real systems, have hindered its application to the brain and raised questions about the utility of IIT in general. To achieve practical relevance for studying the human brain, it will be beneficial to establish the reliable estimation of Φ from multichannel electroencephalogram (EEG) and define the relationship of Φ to EEG properties conventionally used to define states of consciousness. In this study, we introduce a practical method to estimate Φ from high-density (128-channel) EEG and determine the contribution of each channel to Φ. We examine the correlation of power, frequency, functional connectivity, and modularity of EEG with regional Φ in various states of consciousness as modulated by diverse anesthetics. We find that our approximation of Φ alone is insufficient to discriminate certain states of anesthesia. However, a multi-dimensional parameter space extended by four parameters related to Φ and EEG connectivity is able to differentiate all states of consciousness. The association of Φ with EEG connectivity during clinically defined anesthetic states represents a new practical approach to the application of IIT, which may be used to characterize various physiological (sleep), pharmacological (anesthesia), and pathological (coma) states of consciousness in the human brain. PMID:29503611
Indexing of exoplanets in search for potential habitability: application to Mars-like worlds
NASA Astrophysics Data System (ADS)
Kashyap Jagadeesh, Madhu; Gudennavar, Shivappa B.; Doshi, Urmi; Safonova, Margarita
2017-08-01
Study of exoplanets is one of the main goals of present research in planetary sciences and astrobiology. Analysis of the huge planetary data from space missions such as CoRoT and Kepler is directed ultimately at finding a planet similar to Earth—the Earth's twin, and answering the question of potential exo-habitability. The Earth Similarity Index (ESI) is a first step in this quest, ranging from 1 (Earth) to 0 (totally dissimilar to Earth). It was defined for the four physical parameters of a planet: radius, density, escape velocity and surface temperature. The ESI is further sub-divided into interior ESI (geometrical mean of radius and density) and surface ESI (geometrical mean of escape velocity and surface temperature). The challenge here is to determine which exoplanet parameters are important in finding this similarity, and how exactly the individual parameters entering the interior ESI and surface ESI contribute to the global ESI. Since the surface temperature entering surface ESI is a non-observable quantity, it is difficult to determine its value. Using the known data for the Solar System objects, we established a calibration relation between surface and equilibrium temperatures to devise an effective way to estimate the value of the surface temperature of exoplanets. ESI is a first step in determining potential exo-habitability that may not be very similar to terrestrial life. A new approach, called the Mars Similarity Index (MSI), is introduced to identify planets that may be habitable to extreme forms of life. MSI is defined in the range between 1 (present Mars) and 0 (dissimilar to present Mars) and uses the same physical parameters as ESI. We are interested in Mars-like planets in the search for planets that may host extreme life forms, such as the ones living in extreme environments on Earth; for example, methane on Mars may be a product of the metabolism of methane-specific extremophile life forms.
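The ESI construction summarized above is a weighted geometric mean of bounded similarity terms. A sketch follows; the weight exponents are the commonly quoted ESI values and the Mars inputs are round numbers, so treat both as assumptions of this sketch, as is the reuse of the same weights for the MSI.

```python
EARTH = {"radius": 1.0, "density": 1.0, "v_esc": 1.0, "T_surf": 288.0}
WEIGHTS = {"radius": 0.57, "density": 1.07, "v_esc": 0.70, "T_surf": 5.58}

def similarity_index(planet, reference):
    """Weighted geometric mean of bounded similarity terms."""
    n = len(reference)
    index = 1.0
    for key, ref in reference.items():
        x = planet[key]
        term = 1.0 - abs((x - ref) / (x + ref))   # 1 = identical, 0 = dissimilar
        index *= term ** (WEIGHTS[key] / n)
    return index

mars = {"radius": 0.53, "density": 0.71, "v_esc": 0.45, "T_surf": 210.0}
print(f"ESI(Mars)  = {similarity_index(mars, EARTH):.2f}")   # about 0.66 here
# An MSI is the same construction with present Mars as the reference:
print(f"MSI(Earth) = {similarity_index(EARTH, mars):.2f}")
```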
NASA Astrophysics Data System (ADS)
Kettermann, Michael; von Hagke, Christoph; Urai, Janos L.
2017-04-01
Dilatant faults often form in rocks containing pre-existing joints, but the effects of joints on fault segment linkage and fracture connectivity are not well understood. Studying the evolution of dilatancy and the influence of fractures on fault development provides insights into the geometry of fault zones in brittle rocks and will eventually allow for predicting their subsurface appearance. In an earlier study we recognized the effect of different angles between the strike direction of vertical joints and a basement fault on the geometry of a developing fault zone. We now systematically extend the results by varying geometric joint parameters such as joint spacing and the vertical extent of the joints, and by measuring fracture density and connectivity. A reproducibility study shows a small error range for the measurements, allowing for confident use of the experimental setup. Analogue models were carried out in a manually driven deformation box (30x28x20 cm) with a 60° dipping pre-defined basement fault and 4.5 cm of displacement. To produce open joints prior to faulting, sheets of paper were mounted in the box to a depth of 5 cm at a spacing of 2.5 cm. We varied the vertical extent of the joints from 5 to 50 mm. Powder was then sieved into the box, embedding the paper almost entirely (column height of 19 cm), and the paper was removed. During deformation we captured structural information by time-lapse photography that allows particle imaging velocimetry analyses (PIV) to detect localized deformation at every increment of displacement. Post-mortem photogrammetry preserves the final 3-dimensional structure of the fault zone. A counterintuitive result is that joint depth is of only minor importance for the evolution of the fault zone. Even very shallow joints form weak areas at which the fault starts to form and propagate. More important is joint spacing. Very large joint spacing leads to faults and secondary fractures that form subparallel to the basement fault. In contrast, small joint spacing results in fault strands that only localize at the pre-existing joints, and secondary fractures that are oriented at high angles to the pre-existing joints. With this new set of experiments we can now quantitatively constrain how (i) the angle between joints and basement fault, (ii) the joint depth and (iii) the joint spacing affect fault zone parameters such as (1) the damage zone width, (2) the density of secondary fractures, (3) the map-view area of open gaps or (4) the fracture connectivity. We apply these results to predict subsurface geometries of joint-fault networks in cohesive rocks, e.g. basaltic sequences in Iceland and sandstones in the Canyonlands NP, USA.
Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.
Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran
2018-06-15
Monitoring of gas and leachate parameters in a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires monitoring of a landfill for at least 30 years after its closure, but the definition of the monitoring extent (number and type of parameters) is incomplete. In order to resolve these uncertainties, this research focuses on the process of monitoring optimization, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of the optimization was to find representative parameters that define the physical, chemical and biological processes in a closed methanogenic landfill and to make the monitoring process less expensive. The research included the development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open-source software GeoGebra, which is often used for solving optimization problems. The results of the optimization process identified the most favorable monitoring model, which fulfills all the defined criteria not only from the point of view of mathematical analysis but also from the point of view of environmental protection. The final outcome of this research, the minimal set of parameters which should be included in landfill monitoring, is precisely defined. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tokamak with in situ magnetohydrodynamic generation of toroidal magnetic field
Schaffer, Michael J.
1986-01-01
A tokamak apparatus includes an electrically conductive metal pressure vessel for defining a chamber and confining liquid therein. A liner disposed within said chamber defines a toroidal space within the liner and confines gas therein. The metal vessel provides an electrically conductive path linking the toroidal space. Liquid metal is forced outwardly through the chamber outside of the toroidal space to generate electric current in the conductive path and thereby generate a toroidal magnetic field within the toroidal space. Toroidal plasma is developed within the toroidal space about the major axis thereof.
Extraterrestrial processing and manufacturing of large space systems, volume 1, chapters 1-6
NASA Technical Reports Server (NTRS)
Miller, R. H.; Smith, D. B. S.
1979-01-01
Space program scenarios for production of large space structures from lunar materials are defined. The concept of the space manufacturing facility (SMF) is presented. The manufacturing processes and equipment for the SMF are defined, and conceptual layouts are described for the production of solar cells and arrays, structures and joints, conduits, waveguides, RF equipment, radiators, wire cables, and converters. A 'reference' SMF was designed and its operational requirements are described.
14 CFR 1214.813 - Computation of sharing and pricing parameters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Computation of sharing and pricing parameters. 1214.813 Section 1214.813 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SPACE FLIGHT Reimbursement for Spacelab Services § 1214.813 Computation of sharing and pricing...
Definition of technology development missions for early space station satellite servicing, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The testbed role of an early manned space station in the context of a satellite servicing evolutionary development and flight demonstration technology plan which results in a satellite servicing operational capability is defined. A satellite servicing technology development mission (a set of missions) to be performed on an early manned space station is conceptually defined.
A Convective Coordinate Approach to Continuum Mechanics with Application to Electrodynamics
2013-01-01
3. Differential Operators in Curvilinear Spaces; 3.1 The Covariant...the particles in an arbitrary (perhaps initial or even fictitious) configuration, and a set of spatial coordinates that fixes locations in space (that...of field quantities defined in such spaces. 2.1 The Background Cartesian System: Before defining the physical coordinate systems at the heart of this
A validation study of a stochastic model of human interaction
NASA Astrophysics Data System (ADS)
Burchfield, Mitchel Talmadge
The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, N ∫_{-∞}^{∞} φ(χ,τ) Ψ(τ) dτ, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time, and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research, based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and the normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data. The model explained a high degree of the variance in each probability distribution. The correlation between predicted and observed probabilities ranged from a low of 0.955 to a high value of 0.998, indicating that humans behave in psychological space as fermions behave in momentum space.
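The Fermi-Dirac fitting step lends itself to a compact illustration: nonlinear regression of the two distribution parameters on occupancy data. The sketch below uses invented synthetic data, a generic (mu, kT) parameterization, and arbitrary starting values; it is not the dissertation's actual dataset or scaling.

```python
import numpy as np
from scipy.optimize import curve_fit

def fermi_dirac(x, mu, kT):
    """Mean occupancy of a state at 'position' x in attitude space."""
    return 1.0 / (np.exp((x - mu) / kT) + 1.0)

rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 40)                       # attitude-scale positions
y = fermi_dirac(x, mu=0.4, kT=0.6) + rng.normal(0, 0.02, x.size)

(mu_hat, kT_hat), cov = curve_fit(fermi_dirac, x, y, p0=(0.0, 1.0))
se = np.sqrt(np.diag(cov))                       # standard errors of estimates
r = np.corrcoef(y, fermi_dirac(x, mu_hat, kT_hat))[0, 1]
print(f"mu = {mu_hat:.3f} +/- {se[0]:.3f}, "
      f"kT = {kT_hat:.3f} +/- {se[1]:.3f}, r = {r:.3f}")
```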
Calibration Laboratory Capabilities Listing as of April 2009
NASA Technical Reports Server (NTRS)
Kennedy, Gary W.
2009-01-01
This document reviews the Calibration Laboratory capabilities of various NASA centers (i.e., Glenn Research Center and Plum Brook Test Facility, Kennedy Space Center, Marshall Space Flight Center, Stennis Space Center, and White Sands Test Facility). Some of the parameters reported are: alternating current, direct current, dimensional, mass, force, torque, pressure and vacuum, safety, and thermodynamics parameters. Some centers reported other parameters.
Craddock, Helen L; Youngson, Callum C; Manogue, Michael; Blance, Andrew
2007-01-01
One of the barriers to restoring an edentulous space may be the supraeruption of an unopposed tooth to occupy some or all of the space needed for prosthetic replacement. The aim of this study was to determine the extent and type of supraeruption associated with unopposed posterior teeth and to investigate the relationship between these and oral and patient factors. Diagnostic casts of 100 patients with an unopposed posterior tooth and of 100 control patients were scanned and analyzed to record the extent of supraeruption, together with other clinical parameters. The type of eruption present was defined for each subject as Periodontal Growth, Active Eruption, or Relative Wear. Generalized Linear Models were developed to examine associations between the extent and type of supraeruption and patient or dental factors. The extent of supraeruption for an individual was modeled to show association between the degree of supraeruption and clinical parameters. Three models were produced to show associations between each type of supraeruption and clinical parameters. The mean supraeruption for subjects was 1.68 mm (SD 0.79, range 0 to 3.99 mm) and for controls, 0.24 mm (SD 0.39, range 0 to 1.46 mm). The extent of supraeruption was statistically greater in maxillary unopposed teeth than in mandibular unopposed teeth. Supraeruption was found in 92% of subjects' unopposed teeth. A Generalized Linear Model could be produced to demonstrate that the clinical parameters associated with supraeruption are periodontal growth, attachment loss, and the lingual movement of the tooth distal to the extraction site. Three types of supraeruption, which may be present singly, or in combination, can be identified. Active eruption has an association with attachment loss. Periodontal growth has an inverse association with attachment loss, is more prevalent in younger patients, in the maxilla, in premolars, and in females. Relative wear has an association with increasing age and is more prevalent in unopposed mandibular teeth.
Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong
2018-05-01
Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and the j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis: a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI located further from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay to a smaller degree than that of imaging task II, since the major frequency component of task I was perpendicular to that of imaging task II, and because imaging task III did not have a strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and the shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work.
A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. © 2018 American Association of Physicists in Medicine.
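A sketch of the NPW detectability metric named above, using the standard frequency-domain form d'^2 = [Σ |W|^2 MTF^2]^2 / Σ |W|^2 MTF^2 NPS, with W the task function. The Gaussian MTF, the anisotropic NPS, and the bar-pattern task below are placeholders, not the measured, shift-variant properties from the study; the directional NPS is enough to show why two orthogonal bar tasks score differently.

```python
import numpy as np

def npw_dprime(task_fft2, mtf2d, nps2d):
    """Non-prewhitening observer detectability from 2-D frequency maps
    (arbitrary units; the frequency-bin area is absorbed into constants)."""
    s2 = np.abs(task_fft2) ** 2 * mtf2d ** 2
    return np.sum(s2) / np.sqrt(np.sum(s2 * nps2d))

n = 128
f = np.fft.fftfreq(n)                       # cycles/pixel
fx, fy = np.meshgrid(f, f)
rho = np.hypot(fx, fy)
mtf = np.exp(-(rho / 0.25) ** 2)            # placeholder isotropic MTF
nps = 1e-6 * (0.2 + 2.0 * np.abs(fx) + 0.5 * np.abs(fy))  # anisotropic placeholder NPS

task = np.zeros((n, n)); task[:, 60:68] = 1.0   # vertical bars (hypothetical task I)
W1 = np.fft.fft2(task - task.mean())
W2 = np.fft.fft2((task - task.mean()).T)        # same pattern rotated (task II)
print("d' (task I): ", npw_dprime(W1, mtf, nps))
print("d' (task II):", npw_dprime(W2, mtf, nps))
```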
Automation life-cycle cost model
NASA Technical Reports Server (NTRS)
Gathmann, Thomas P.; Reeves, Arlinda J.; Cline, Rick; Henrion, Max; Ruokangas, Corinne
1992-01-01
The problem domain being addressed by this contractual effort can be summarized by the following list: Automation and Robotics (A&R) technologies appear to be viable alternatives to current, manual operations; life-cycle cost models are typically judged with suspicion due to implicit assumptions and little associated documentation; and uncertainty is a reality for increasingly complex problems, yet few models explicitly account for its effect on the solution space. The objectives for this effort range from the near-term (1-2 years) to far-term (3-5 years). In the near-term, the envisioned capabilities of the modeling tool are annotated. In addition, a framework is defined and developed in the Decision Modelling System (DEMOS) environment. Our approach is summarized as follows: assess desirable capabilities (structured into near- and far-term); identify useful existing models/data; identify parameters for utility analysis; define the tool framework; encode a scenario thread for model validation; and provide a transition path for tool development. This report contains all relevant technical progress made on this contractual effort.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konno, Kohkichi, E-mail: kohkichi@tomakomai-ct.ac.jp; Nagasawa, Tomoaki, E-mail: nagasawa@tomakomai-ct.ac.jp; Takahashi, Rohta, E-mail: takahashi@tomakomai-ct.ac.jp
We consider the scattering of a quantum particle by two independent, successive parity-invariant point interactions in one dimension. The parameter space for the two point interactions is given by the direct product of two tori, which is described by four parameters. By investigating the effects of the two point interactions on the transmission probability of a plane wave, we obtain the conditions on the parameter space under which perfect resonant transmission occurs. The resonance conditions are found to be described by symmetric and anti-symmetric relations between the parameters.
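For the special case of two delta-function interactions of equal strength (one parity-invariant point on each torus of boundary conditions), the transmission probability can be computed with 2x2 transfer matrices. A sketch with hbar = m = 1 and illustrative strengths and spacing; for this symmetric case the perfect-transmission condition works out to tan(k*d) = -k/g, which the numerical peaks reproduce.

```python
import numpy as np

def delta_transfer(k, g, x0):
    """Transfer matrix of V(x) = g*delta(x - x0), with hbar = m = 1."""
    gam = g / k
    ph = np.exp(2j * k * x0)
    return np.array([[1 - 1j * gam, -1j * gam / ph],
                     [1j * gam * ph, 1 + 1j * gam]])

def transmission(k, g1, g2, d):
    """Two deltas at 0 and d; T = |1/M22|^2 since det(M) = 1."""
    M = delta_transfer(k, g2, d) @ delta_transfer(k, g1, 0.0)
    return 1.0 / abs(M[1, 1]) ** 2

ks = np.linspace(0.05, 6.0, 20001)
T = np.array([transmission(k, 2.0, 2.0, np.pi) for k in ks])
hits = ks[T > 0.999]
# Peaks cluster at the roots of tan(k*d) = -k/g (equal-strength case).
print("near-perfect transmission around k ≈", np.unique(np.round(hits, 1)))
```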
Mapping an operator's perception of a parameter space
NASA Technical Reports Server (NTRS)
Pew, R. W.; Jagacinski, R. J.
1972-01-01
Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.
Pisano, Roberto; Fissore, Davide; Barresi, Antonello A; Brayard, Philippe; Chouvenc, Pierre; Woinet, Bertrand
2013-02-01
This paper shows how to optimize the primary drying phase, for both product quality and drying time, of a parenteral formulation via design space. A non-steady state model, parameterized with experimentally determined heat and mass transfer coefficients, is used to define the design space when the heat transfer coefficient varies with the position of the vial in the array. The calculations recognize both equipment and product constraints, and also take into account model parameter uncertainty. Examples are given of cycles designed for the same formulation, but varying the freezing conditions and the freeze-dryer scale. These are then compared in terms of drying time. Furthermore, the impact of inter-vial variability on design space, and therefore on the optimized cycle, is addressed. In this regard, a simplified method is presented for the cycle design, which reduces the experimental effort required for system qualification. The use of mathematical modeling is demonstrated to be very effective not only for cycle development, but also for solving problems of process transfer. This study showed that inter-vial variability remains significant when vials are loaded on plastic trays, and how inter-vial variability can be taken into account during process design.
Extraterrestrial processing and manufacturing of large space systems. Volume 3: Executive summary
NASA Technical Reports Server (NTRS)
Miller, R. H.; Smith, D. B. S.
1979-01-01
Facilities and equipment are defined for refining, to commercial grade, lunar material that is delivered to a 'space manufacturing facility' in beneficiated, primary-processed quality. The manufacturing facilities and the equipment for producing elements of large space systems from these materials are defined, and programmatic assessments of the concepts are provided. In-space production processes of solar cells (by vapor deposition) and arrays, structures and joints, conduits, waveguides, RF equipment, radiators, wire cables, converters, and others are described.
Tokamak with liquid metal toroidal field coil
Ohkawa, Tihiro; Schaffer, Michael J.
1981-01-01
Tokamak apparatus includes a pressure vessel for defining a reservoir and confining liquid therein. A toroidal liner disposed within the pressure vessel defines a toroidal space within the liner. Liquid metal fills the reservoir outside said liner. Electric current is passed through the liquid metal over a conductive path linking the toroidal space to produce a toroidal magnetic field within the toroidal space about the major axis thereof. Toroidal plasma is developed within the toroidal space about the major axis thereof.
Software-Defined Radio for Space-to-Space Communications
NASA Technical Reports Server (NTRS)
Fisher, Ken; Jih, Cindy; Moore, Michael S.; Price, Jeremy C.; Abbott, Ben A.; Fritz, Justin A.
2011-01-01
A paper describes the Space-to-Space Communications System (SSCS) Software-Defined Radio (SDR) research project to determine the most appropriate method for creating flexible and reconfigurable radios to implement wireless communications channels for space vehicles so that fewer radios are required, and commonality in hardware and software architecture can be leveraged for future missions. The ability to reconfigure the SDR through software enables one radio platform to be reconfigured to interoperate with many different waveforms. This means a reduction in the number of physical radio platforms necessary to support a space mission's communication requirements, thus decreasing the total size, weight, and power needed for a mission.
Reachable Sets for Multiple Asteroid Sample Return Missions
2005-12-01
reduce the number of feasible asteroid targets. Reachable sets are defined in a reduced classical orbital element space. The boundary of this reduced space is obtained by extremizing a family of...aliasing problems. Other coordinate elements, such as equinoctial elements, can provide a set of singularity-free slowly changing variables, but
Prophylactic surgery prior to extended-duration space flight: Is the benefit worth the risk?
Ball, Chad G.; Kirkpatrick, Andrew W.; Williams, David R.; Jones, Jeffrey A.; Polk, J.D.; Vanderploeg, James M.; Talamini, Mark A.; Campbell, Mark R.; Broderick, Timothy J.
2012-01-01
This article explores the potential benefits and defined risks associated with prophylactic surgical procedures for astronauts before extended-duration space flight. This includes, but is not limited to, appendectomy and cholecystectomy. Furthermore, treatment during space flight, the potential impact of an acute illness on a defined mission, and the ethical issues surrounding this concept are debated in detail. PMID:22564516
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although on the one hand, digital pathology and new bioimaging technologies find their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as, genetic algorithms or coordinate descents, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often result in a local maximum. PMID:23766941
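A toy version of the parameter-fitting problem described above: a hypothetical two-parameter pipeline (Gaussian smoothing followed by a threshold) tuned by coordinate descent on a Dice-score objective. The synthetic image and parameter grids are invented; with a poor starting point this kind of search can stall on a local performance maximum, which is exactly the failure mode the visual exploration of the parameter space is meant to expose.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
truth = np.zeros((64, 64), bool); truth[20:40, 15:45] = True  # synthetic "cell"
image = truth * 1.0 + rng.normal(0, 0.4, truth.shape)

def dice(params):
    sigma, thresh = params
    seg = gaussian_filter(image, sigma) > thresh
    inter = np.logical_and(seg, truth).sum()
    return 2 * inter / (seg.sum() + truth.sum() + 1e-9)

sigmas = np.linspace(0.5, 5, 10); threshs = np.linspace(0.1, 0.9, 17)
p = [sigmas[0], threshs[0]]                       # deliberately poor start
for sweep in range(5):                            # coordinate-descent sweeps
    p[0] = max(sigmas, key=lambda s: dice((s, p[1])))   # optimize sigma
    p[1] = max(threshs, key=lambda t: dice((p[0], t)))  # optimize threshold
print(f"sigma = {p[0]:.2f}, threshold = {p[1]:.2f}, Dice = {dice(p):.3f}")
```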
Parameterised post-Newtonian expansion in screened regions
NASA Astrophysics Data System (ADS)
McManus, Ryan; Lombriser, Lucas; Peñarrubia, Jorge
2017-12-01
The parameterised post-Newtonian (PPN) formalism has enabled stringent tests of static weak-field gravity in a theory-independent manner. Here we incorporate screening mechanisms of modified gravity theories into the framework by introducing an effective gravitational coupling and defining the PPN parameters as functions of position. To determine these functions we develop a general method for efficiently performing the post-Newtonian expansion in screened regimes. For illustration, we derive all the PPN functions for a cubic galileon and a chameleon model. We also analyse the Shapiro time delay effect for these two models and find no deviations from General Relativity insofar as the signal path and the perturbing mass reside in a screened region of space.
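A toy numerical version of the Shapiro-delay analysis with a position-dependent coupling: the delay integrand is weighted by (1 + gamma(r)), and gamma(r) here is a hypothetical step profile standing in for a screened/unscreened transition, not either model's derived PPN function. In GR gamma = 1 everywhere and the integral reduces to the textbook result.

```python
import numpy as np

G, c, M = 6.674e-11, 2.998e8, 1.989e30            # SI units; one solar mass

def gamma_eff(r, r_screen=1.5e11):
    """Hypothetical profile: GR value (1) inside the screened region,
    a constant deviation outside; purely illustrative."""
    return np.where(r < r_screen, 1.0, 0.8)

def shapiro_delay(b, x_emit, x_recv, n=200_000):
    """Integrate (1 + gamma(r)) * G*M / (c^3 * r) along a straight ray
    with impact parameter b, from path coordinate x_emit to x_recv."""
    x = np.linspace(x_emit, x_recv, n)
    r = np.hypot(b, x)
    integrand = (1.0 + gamma_eff(r)) * G * M / (c**3 * r)
    return np.sum(integrand) * (x[1] - x[0])

dt = shapiro_delay(b=7e8, x_emit=-1.5e11, x_recv=1.5e11)   # Sun-grazing ray
print(f"one-way delay: {dt * 1e6:.1f} microseconds")        # order 100 us
```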
Synthesis and analysis of precise spaceborne laser ranging systems, volume 1. [link analysis
NASA Technical Reports Server (NTRS)
Paddon, E. A.
1977-01-01
Measurement accuracy goals of 2 cm rms range estimation error and 0.003 cm/sec rms range rate estimation error, with no more than 1 cm (range) static bias error, are requirements for laser measurement systems to be used in planned space-based earth physics investigations. Constraints and parameters were defined for links between a high altitude transmit/receive satellite (HATRS) and one of three targets: a passive low altitude target satellite (LATS), an active low altitude target, and a ground-based target, as well as for operations with a primary transmit/receive terminal intended to be carried as a shuttle payload in conjunction with the Spacelab program.
Mission specification for three generic mission classes
NASA Technical Reports Server (NTRS)
1979-01-01
Mission specifications for three generic mission classes are generated to provide a baseline for definition and analysis of data acquisition platform system concepts. The mission specifications define compatible groupings of sensors that satisfy specific earth resources and environmental mission objectives. The driving force behind the definition of sensor groupings is mission need; platform and space transportation system constraints are of secondary importance. The three generic mission classes are: (1) low earth orbit sun-synchronous; (2) geosynchronous; and (3) non-sun-synchronous, nongeosynchronous. These missions are chosen to provide a variety of sensor complements and implementation concepts. Each mission specification relates mission categories, mission objectives, measured parameters, and candidate sensors to orbits and coverage, operations compatibility, and platform fleet size.
Registration of cortical surfaces using sulcal landmarks for group analysis of MEG data
Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.
2010-01-01
We present a method to register individual cortical surfaces to a surface-based brain atlas or canonical template using labeled sulcal curves as landmark constraints. To map one cortex smoothly onto another, we minimize a thin-plate spline energy defined on the surface by solving the associated partial differential equations (PDEs). By using covariant derivatives in solving these PDEs, we compute the bending energy with respect to the intrinsic geometry of the 3D surface rather than evaluating it in the flattened metric of the 2D parameter space. This covariant approach greatly reduces the confounding effects of the surface parameterization on the resulting registration. PMID:20824115
Spin-density wave state in simple hexagonal graphite
NASA Astrophysics Data System (ADS)
Mosoyan, K. S.; Rozhkov, A. V.; Sboychakov, A. O.; Rakhmanov, A. L.
2018-02-01
Simple hexagonal graphite, also known as AA graphite, is a metastable configuration of graphite. Using the tight-binding approximation, it is easy to show that AA graphite is a metal with a well-defined Fermi surface. The Fermi surface consists of two sheets, each shaped like a rugby ball. One sheet corresponds to electron states, the other to hole states. The Fermi surface demonstrates good nesting: a suitable translation in the reciprocal space superposes one sheet onto the other. In the presence of electron-electron repulsion, a nested Fermi surface is unstable with respect to spin-density-wave ordering. This instability is studied using the mean-field theory at zero temperature, and the spin-density-wave order parameter is evaluated.
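The nesting-driven instability can be illustrated with the simplest T = 0 mean-field gap equation for a perfectly nested electron-hole pair of bands with a constant density of states nu over a cutoff W: 1 = U*nu * integral from 0 to W of d(eps)/sqrt(eps^2 + Delta^2), giving Delta = W/sinh(1/(U*nu)). This is a one-band caricature, not the paper's calculation, and the cutoff and couplings below are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

W = 1.0                      # band cutoff (arbitrary energy units, assumption)

def gap_residual(delta, U_nu):
    """Self-consistency residual of the T = 0 gap equation."""
    integral, _ = quad(lambda e: 1.0 / np.sqrt(e**2 + delta**2), 0.0, W)
    return U_nu * integral - 1.0

for U_nu in (0.2, 0.3, 0.4):                     # dimensionless coupling
    delta = brentq(gap_residual, 1e-12, W, args=(U_nu,))
    print(f"U*nu = {U_nu}: Delta = {delta:.4g} "
          f"(closed form {W / np.sinh(1.0 / U_nu):.4g})")
```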
Effect of hydrogen on the strength and microstructure of selected ceramics
NASA Technical Reports Server (NTRS)
Herbell, Thomas P.; Eckel, Andrew J.; Hull, David R.; Misra, Ajay K.
1990-01-01
Ceramics in monolithic form and as composite constituents in the form of fibers, matrices, and coatings are currently being considered for a variety of high-temperature applications in aeronautics and space. Many of these applications involve exposure to a hydrogen-containing environment. The compatibility of selected ceramics in gaseous high-temperature hydrogen is assessed. Environmental stability regimes for the long term use of ceramic materials are defined by the parameters of temperature, pressure, and moisture content. Thermodynamically predicted reactions between hydrogen and several monolithic ceramics are compared with actual performance in a controlled environment. Morphology of hydrogen attack and the corresponding strength degradation is reported for silicon carbide, silicon nitride, alumina, magnesia, and mullite.
NASA Technical Reports Server (NTRS)
Carney, Kelly; Melis, Matthew; Fasanella, Edwin L.; Lyle, Karen H.; Gabrys, Jonathan
2004-01-01
Upon the commencement of the analytical effort to characterize the impact dynamics and damage of the Space Shuttle Columbia leading edge due to External Tank insulating foam, the necessity of creating analytical descriptions of these materials became evident. To that end, material models were developed of the leading edge thermal protection system, Reinforced Carbon-Carbon (RCC), and a low density polyurethane foam, BX-250. Challenges in modeling the RCC include its extreme brittleness, the differing behavior in compression and tension, and the anisotropic fabric layup. These effects were successfully included in LS-DYNA Material Model 58, *MAT_LAMINATED_COMPOSITE_FABRIC. The differing compression and tension behavior was modeled using the available damage parameters. Each fabric layer was given an integration point in the shell element, and was allowed to fail independently. Comparisons were made to static test data and coupon ballistic impact tests before being utilized in the full scale analysis. The foam's properties were typical of elastic automotive foams, and LS-DYNA Material Model 83, *MAT_FU_CHANG_FOAM, was successfully used to model its behavior. Material parameters defined included strain rate dependent stress-strain curves for both loading and un-loading, and for both compression and tension. This model was formulated with static test data and strain rate dependent test data, and was compared to ballistic impact tests on load-cell instrumented aluminum plates. These models were subsequently utilized in analysis of the Shuttle leading edge full scale ballistic impact tests, and are currently being used in the Return to Flight Space Shuttle re-certification effort.
Using cystoscopy to segment bladder tumors with a multivariate approach in different color spaces.
Freitas, Nuno R; Vieira, Pedro M; Lima, Estevao; Lima, Carlos S
2017-07-01
Nowadays the diagnosis of bladder lesions relies upon cystoscopy examination and depends on the interpreter's experience. The state of the art in bladder tumor identification is based on 3D reconstruction, using CT images (virtual cystoscopy) or images where the structures are enhanced with the use of pigmentation, but none uses white-light cystoscopy images. An initial attempt to automatically identify tumoral tissue was already developed by the authors, and this paper develops that idea. Traditional cystoscopy image processing has huge potential to improve early tumor detection and allows more effective treatment. This paper describes a multivariate approach to the segmentation of bladder cystoscopy images, which will be used to automatically detect tumors and improve physician diagnosis. Each region can be assumed to follow a normal distribution with specific parameters, leading to the assumption that the distribution of intensities is a Gaussian Mixture Model (GMM). Regions of high-grade and low-grade tumors usually appear with higher intensity than normal regions. This paper proposes a Maximum a Posteriori (MAP) approach based on pixel intensities read simultaneously in different color channels from the RGB, HSV and CIELab color spaces. The Expectation-Maximization (EM) algorithm is used to estimate the best multivariate GMM parameters. Experimental results show that the proposed method segments bladder tumors into two classes more efficiently in RGB, even in cases where the tumor shape is not well defined. Results also show that the elimination of component L from the CIELab color space does not allow definition of the tumor shape.
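A minimal sketch of the two-class multivariate GMM/EM idea described above, using scikit-learn on a synthetic three-channel image: each pixel is a feature vector built from several color channels, EM fits the mixture, and the MAP class labels give the segmentation. The channel construction and the rule for picking the tumor component (higher mean intensity, as stated in the abstract) are simplifications, not the paper's exact channel selection.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
h, w = 80, 80
truth = np.zeros((h, w), bool); truth[30:60, 25:55] = True   # "tumor" region
img = np.empty((h, w, 3))                    # 3 channels (hypothetical mix)
img[..., 0] = np.where(truth, 0.7, 0.4) + rng.normal(0, 0.05, (h, w))
img[..., 1] = np.where(truth, 0.6, 0.5) + rng.normal(0, 0.05, (h, w))
img[..., 2] = np.where(truth, 0.5, 0.3) + rng.normal(0, 0.05, (h, w))

X = img.reshape(-1, 3)                       # one multivariate sample per pixel
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)       # EM parameter estimation
labels = gmm.predict(X).reshape(h, w)        # MAP assignment under the GMM

# Take the higher-intensity component as the tumor class, as in the abstract.
tumor_id = np.argmax(gmm.means_[:, 0])
accuracy = np.mean((labels == tumor_id) == truth)
print(f"pixel agreement with ground truth: {accuracy:.3f}")
```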
Moving overlapping grids with adaptive mesh refinement for high-speed reactive and non-reactive flow
NASA Astrophysics Data System (ADS)
Henshaw, William D.; Schwendeman, Donald W.
2006-08-01
We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows that demonstrate the use and accuracy of the numerical approach.
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
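The local, infinitesimal version of this diagnostic is the Fisher-information spectrum, which the paper's 'multiscale sloppiness' generalizes to finite scales. A sketch on the classic redundant model y = a*b*x, where only the product a*b is identifiable; the data grid and parameter values are arbitrary.

```python
import numpy as np

x = np.linspace(0.1, 1.0, 20)
a, b = 2.0, 3.0
# Sensitivities w.r.t. log-parameters: dy/dlog(a) = dy/dlog(b) = a*b*x.
J = np.stack([a * b * x, a * b * x], axis=1)
fim = J.T @ J                                   # unit noise variance assumed
eigvals, eigvecs = np.linalg.eigh(fim)
print("FIM eigenvalues:", np.round(eigvals, 6))
print("null direction (log a, log b):", np.round(eigvecs[:, 0], 3))
# The ~0 eigenvalue with direction proportional to (1, -1) says that
# raising log(a) while lowering log(b) leaves predictions unchanged:
# a structural unidentifiability. This local test is exactly what the
# abstract argues is insufficient once uncertainty is non-negligible.
```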
NASA Astrophysics Data System (ADS)
Salvatore, Gerard Micheal
The conceptual foundations for a deterministic quantum mechanics are presented with the Socratic method. The theory is attacked and weaknesses elucidated. These are compared against those of convention. Directions for future research are proposed.
DEFINING THE CHEMICAL SPACE OF PUBLIC GENOMIC DATA (S)
The current project aims to chemically index the genomics content of public genomic databases to make these data accessible in relation to other publicly available, chemically-indexed toxicological information. By defining the chemical space of public genomic data, it is possibl...
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
A fast Bayesian approach to discrete object detection in astronomical data sets - PowellSnakes I
NASA Astrophysics Data System (ADS)
Carvalho, Pedro; Rocha, Graça; Hobson, M. P.
2009-03-01
A new fast Bayesian approach is introduced for the detection of discrete objects immersed in a diffuse background. This new method, called PowellSnakes, speeds up traditional Bayesian techniques by (i) replacing the standard form of the likelihood for the parameters characterizing the discrete objects by an alternative exact form that is much quicker to evaluate; (ii) using a simultaneous multiple minimization code based on Powell's direction set algorithm to locate rapidly the local maxima in the posterior; and (iii) deciding whether each located posterior peak corresponds to a real object by performing a Bayesian model selection using an approximate evidence value based on a local Gaussian approximation to the peak. The construction of this Gaussian approximation also provides the covariance matrix of the uncertainties in the derived parameter values for the object in question. This new approach provides a speed-up in performance by a factor of approximately 100 as compared to existing Bayesian source extraction methods that use Markov chain Monte Carlo sampling to explore the parameter space, such as that presented by Hobson & McLachlan. The method can be implemented in either real or Fourier space. In the case of objects embedded in a homogeneous random field, working in Fourier space provides a further speed-up that takes advantage of the fact that the correlation matrix of the background is circulant. We illustrate the capabilities of the method by applying it to some simplified toy models. Furthermore, PowellSnakes has the advantage of consistently defining the threshold for acceptance/rejection based on priors, which cannot be said of the frequentist methods. We present here the first implementation of this technique (version I). Further improvements to this implementation are currently under investigation and will be published shortly. The application of the method to realistic simulated Planck observations will be presented in a forthcoming publication.
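To make step (ii) concrete, the sketch below uses Powell's derivative-free direction-set minimizer (as implemented in SciPy) to locate the peak of a toy two-dimensional posterior from a grid of starting points. The toy posterior and the start grid are assumptions for demonstration; this is not the PowellSnakes likelihood.

```python
# Illustrative use of Powell's direction-set method to locate posterior
# peaks from multiple starts, as in step (ii) of PowellSnakes. The single
# Gaussian "object" at (3, -2) is an assumed toy posterior.
import numpy as np
from scipy.optimize import minimize

def neg_log_post(xy):
    x, y = xy
    return 0.5 * ((x - 3.0) ** 2 + (y + 2.0) ** 2)

starts = [(x0, y0) for x0 in (-5.0, 0.0, 5.0) for y0 in (-5.0, 0.0, 5.0)]
peaks = []
for s in starts:
    res = minimize(neg_log_post, s, method="Powell")  # derivative-free
    if res.success:
        peaks.append(res.x)

# Collapse near-identical peaks found from different starting points.
print(np.unique(np.round(peaks, 3), axis=0))  # -> approximately [[3., -2.]]
```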
NASA Astrophysics Data System (ADS)
Ozheredov, V. A.; Chibisov, S. M.; Blagonravov, M. L.; Khodorovich, N. A.; Demurov, E. A.; Goryachev, V. A.; Kharlitskaya, E. V.; Eremina, I. S.; Meladze, Z. A.
2017-05-01
There are many references in the literature to a connection between space weather and the state of the human organism. The search for external factors influencing humans is a multi-factor problem, and it is well known that humans are meteo-sensitive. The direct problem of finding the Earth weather conditions under which space weather manifests itself most strongly is discussed in the present work for the first time in heliobiology. From a formal point of view, this problem requires identifying a subset (a magnetobiotropic region) in the three-dimensional space of Earth weather parameters (pressure, temperature, and humidity) corresponding to the days when the human body is most sensitive to geomagnetic field variations and reacts with a statistically significant increase (or decrease) of a particular physiological parameter. This formulation defines an optimization problem whose solution is not possible without powerful metaheuristic search methods. Using the differential evolution algorithm, we prove the existence of magnetobiotropic regions in the Earth weather parameters that exhibit magneto-sensitivity of the systolic blood pressure, diastolic blood pressure, and heart rate of healthy young subjects for three weather areas (combinations of atmospheric temperature, pressure, and humidity). The maximum value of the correlation confidence for the measurements attributable to days whose weather conditions fall into each of the three magnetobiotropic areas is of the order of 0.006, which is almost 10 times smaller than the confidence level of 0.05 accepted in many heliobiological studies.
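The kind of metaheuristic search described above can be sketched as follows: differential evolution optimizes the bounds of a candidate weather box in (pressure, temperature, humidity). The synthetic data, the box parametrization, and the objective (a correlation p-value inside the box) are all hypothetical stand-ins for the study's actual criterion.

```python
# Sketch: differential evolution over the bounds of a weather box,
# minimizing the p-value of a geomagnetic-physiological correlation
# restricted to days falling inside the box. All data here are synthetic.
import numpy as np
from scipy.stats import pearsonr
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
weather = rng.uniform([980, -10, 20], [1040, 35, 100], size=(500, 3))
geomag = rng.normal(size=500)    # daily geomagnetic index (synthetic)
physio = rng.normal(size=500)    # daily physiological parameter (synthetic)

def objective(b):
    """p-value of the correlation inside the box [b0,b1]x[b2,b3]x[b4,b5];
    smaller means a more 'magnetobiotropic' region."""
    lo, hi = b[0::2], b[1::2]
    mask = np.all((weather >= lo) & (weather <= hi), axis=1)
    if mask.sum() < 30:          # require enough days inside the box
        return 1.0
    return pearsonr(geomag[mask], physio[mask])[1]

bounds = [(980, 1040), (980, 1040), (-10, 35), (-10, 35), (20, 100), (20, 100)]
result = differential_evolution(objective, bounds, seed=1, tol=1e-3)
print(result.x, result.fun)
```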
Form drag in rivers due to small-scale natural topographic features: 2. Irregular sequences
Kean, J.W.; Smith, J.D.
2006-01-01
The size, shape, and spacing of small-scale topographic features found on the boundaries of natural streams, rivers, and floodplains can be quite variable. Consequently, a procedure for determining the form drag on irregular sequences of different-sized topographic features is essential for calculating near-boundary flows and sediment transport. A method for carrying out such calculations is developed in this paper. This method builds on the work of Kean and Smith (2006), which describes the flow field for the simpler case of a regular sequence of identical topographic features. Both approaches model topographic features as two-dimensional elements with Gaussian-shaped cross sections defined in terms of three parameters. Field measurements of bank topography are used to show that (1) the magnitude of these shape parameters can vary greatly between adjacent topographic features and (2) the variability of these shape parameters follows a lognormal distribution. Simulations using an irregular set of topographic roughness elements show that the drag on an individual element is primarily controlled by the size and shape of the feature immediately upstream and that the spatial average of the boundary shear stress over a large set of randomly ordered elements is relatively insensitive to the sequence of the elements. In addition, a method is given to transform the topography of irregular surfaces into an equivalently rough surface of regularly spaced, identical topographic elements. The methods described in this paper can be used to improve predictions of flow resistance in rivers as well as quantify bank roughness.
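The element representation described above can be sketched directly: each bank feature is a bump with a Gaussian-shaped cross section controlled by three shape parameters drawn from lognormal distributions. The parameter names (height, width, spacing) and numerical values below are illustrative assumptions, not the paper's field measurements.

```python
# Sketch: an irregular sequence of Gaussian-shaped roughness elements whose
# three shape parameters (height H, width sigma, spacing gap) are drawn
# from lognormal distributions, per the variability reported above.
import numpy as np

rng = np.random.default_rng(42)
n = 20                                                        # element count
H = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=n)       # heights (m)
sig = rng.lognormal(mean=np.log(0.30), sigma=0.4, size=n)     # widths (m)
gap = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)      # spacings (m)

centers = np.cumsum(gap)                   # streamwise element centers
x = np.linspace(0.0, centers[-1] + 3.0, 4000)

# Boundary elevation: superposition of the Gaussian-shaped cross sections.
z = sum(h * np.exp(-0.5 * ((x - c) / s) ** 2)
        for h, s, c in zip(H, sig, centers))
print(f"mean height {H.mean():.3f} m, mean spacing {gap.mean():.2f} m")
```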
On parametric Gevrey asymptotics for some nonlinear initial value Cauchy problems
NASA Astrophysics Data System (ADS)
Lastra, A.; Malek, S.
2015-11-01
We study a nonlinear initial value Cauchy problem depending upon a complex perturbation parameter ɛ with vanishing initial data at complex time t = 0 and whose coefficients depend analytically on (ɛ, t) near the origin in ℂ² and are bounded holomorphic on some horizontal strip in ℂ with respect to the space variable. This problem is assumed to be non-Kowalevskian in time t; therefore, analytic solutions at t = 0 cannot be expected in general. Nevertheless, we are able to construct a family of actual holomorphic solutions defined on a common bounded open sector with vertex at 0 in time and on the given strip above in space, when the complex parameter ɛ belongs to a suitably chosen set of open bounded sectors whose union forms a covering of some neighborhood Ω of 0 in ℂ*. These solutions are achieved by means of Laplace and Fourier inverse transforms of some common ɛ-dependent function on ℂ × ℝ, analytic near the origin and with exponential growth on some unbounded sectors with appropriate bisecting directions in the first variable and exponential decay in the second, when the perturbation parameter belongs to Ω. Moreover, these solutions satisfy the remarkable property that the difference between any two of them is exponentially flat for some integer order with respect to ɛ. With the help of the classical Ramis-Sibuya theorem, we obtain the existence of a formal series (generally divergent) in ɛ which is the common Gevrey asymptotic expansion of the actual solutions constructed above.
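For readers unfamiliar with the Ramis-Sibuya framework, the display below is a schematic statement of the exponential flatness condition invoked above; the constants C, M and the integer order k stand in for those determined by the problem, so this is a sketch rather than the paper's precise estimate.

```latex
% Schematic flatness estimate on overlapping sectors E_i, E_j of the covering:
\[
  \sup_{(t,z)} \bigl| u_i(t,z,\epsilon) - u_j(t,z,\epsilon) \bigr|
  \;\le\; C \, \exp\!\left( -\frac{M}{|\epsilon|^{k}} \right),
  \qquad \epsilon \in \mathcal{E}_i \cap \mathcal{E}_j .
\]
% The Ramis-Sibuya theorem then yields a single formal series
% $\hat{u}(\epsilon)$ that is the common Gevrey asymptotic expansion,
% of order $1/k$, of every actual solution $u_i$.
```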
NASA Astrophysics Data System (ADS)
Jia, Bing
2014-03-01
A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces using the Hindmarsh-Rose (HR) neuron model in many recent studies, and it can interpret almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced, presenting different period-adding bifurcation processes with chaos as one parameter was changed with the other fixed at different levels. In the biological experiments, different period-adding bifurcation scenarios with chaos, obtained by decreasing the extra-cellular calcium concentration, were observed from some neural pacemakers at different levels of extra-cellular 4-aminopyridine concentration and from other pacemakers at different levels of extra-cellular caesium concentration. Using the nonlinear time series analysis method, the deterministic dynamics of the experimental chaotic firings were investigated. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence for the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces in the HR neuron model. The results also reveal relationships between different firing patterns in two-dimensional parameter spaces.
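A minimal simulation of the HR model referenced above is sketched here. The parameter values are the standard textbook choices, and the decision to sweep the injected current I with the slow time-scale r fixed is an illustrative assumption; the paper's parameter planes may differ.

```python
# Minimal sketch of the Hindmarsh-Rose neuron model. Sweeping I with r fixed
# traces one line of a two-dimensional (I, r) parameter space; counting
# spikes per window is a crude probe of the firing pattern.
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, u, I, r, a=1.0, b=3.0, c=1.0, d=5.0, s=4.0, xr=-1.6):
    x, y, z = u
    return [y - a * x**3 + b * x**2 - z + I,   # membrane potential
            c - d * x**2 - y,                  # fast recovery variable
            r * (s * (x - xr) - z)]            # slow adaptation current

for I in (1.5, 2.0, 2.5, 3.0):                 # sweep one parameter
    sol = solve_ivp(hindmarsh_rose, (0, 2000), [-1.6, 0.0, 1.0],
                    args=(I, 0.006), max_step=0.1)
    x = sol.y[0][sol.t > 500]                  # discard the transient
    # Count spikes as upward crossings of x = 1.
    spikes = np.sum((x[:-1] < 1.0) & (x[1:] >= 1.0))
    print(f"I = {I}: {spikes} spikes")
```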
SCA Waveform Development for Space Telemetry
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.; Kifle, Multi; Hall, C. Steve; Quinn, Todd M.
2004-01-01
The NASA Glenn Research Center is investigating and developing suitable reconfigurable radio architectures for future NASA missions. This effort is examining software-based open architectures for space-based transceivers, as well as common hardware platform architectures. The Joint Tactical Radio System's (JTRS) Software Communications Architecture (SCA) is a candidate for the software approach, but may need modifications or adaptations for use in space. An in-house SCA-compliant waveform development focuses on increasing understanding of software defined radio architectures and, more specifically, the JTRS SCA. Space requirements put a premium on size, mass, and power, so this waveform development effort is key to evaluating tradeoffs with the SCA for space applications. Existing NASA telemetry links, as well as Space Exploration Initiative scenarios, are the basis for defining the waveform requirements. Modeling and simulations are being developed to determine the signal-processing requirements associated with a waveform and the corresponding mission-specific computational burden. Implementation of the waveform on a laboratory software defined radio platform is proceeding in an iterative fashion. Parallel top-down and bottom-up design approaches are employed.
Parameter-space metric of semicoherent searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Pletsch, Holger J.
2010-08-01
Continuous gravitational-wave (CW) signals such as those emitted by spinning neutron stars are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for previously unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical “semicoherent” search strategies divide the data into segments much shorter than one year, which are analyzed coherently; detection statistics from different segments are then combined incoherently. To perform the incoherent combination optimally, an understanding of the underlying parameter-space structure is requisite. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. Of the search parameters (sky position, frequency, and frequency derivatives), only the metric resolution in the frequency derivatives is found to increase significantly with the number of segments.
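The segment-combination effect noted in the final sentence can be reproduced with a toy calculation: averaging per-segment phase metrics over N fixed-length segments sharpens the frequency-derivative resolution but leaves the frequency resolution unchanged. The simple two-parameter phase model and uniform segment layout below are assumptions; this is not the paper's analytical semicoherent metric.

```python
# Toy numerical check: for phi(t) = 2*pi*(f*t + 0.5*fdot*t^2), the average
# of per-segment phase metrics grows with N only in the fdot-fdot component.
import numpy as np

def coherent_metric(t):
    """Phase metric g_ij = <dphi_i dphi_j> - <dphi_i><dphi_j> over times t."""
    d = np.stack([2.0 * np.pi * t, np.pi * t**2])   # dphi/df, dphi/dfdot
    dc = d - d.mean(axis=1, keepdims=True)
    return dc @ dc.T / t.size

def semicoherent_metric(T_seg, N, samples=2001):
    """Average of per-segment coherent metrics, segments of length T_seg."""
    g = np.zeros((2, 2))
    for k in range(N):
        t = np.linspace(k * T_seg, (k + 1) * T_seg, samples)
        g += coherent_metric(t)
    return g / N

for N in (1, 4, 16, 64):
    g = semicoherent_metric(T_seg=1.0, N=N)
    # Resolution scales like 1/sqrt(g_ii): the frequency entry stays flat
    # while the frequency-derivative entry grows with N.
    print(N, np.sqrt(g[0, 0]), np.sqrt(g[1, 1]))
```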
Evolution of a Reconfigurable Processing Platform for a Next Generation Space Software Defined Radio
NASA Technical Reports Server (NTRS)
Kacpura, Thomas J.; Downey, Joseph A.; Anderson, Keffery R.; Baldwin, Keith
2014-01-01
The National Aeronautics and Space Administration (NASA)/Harris Ka-Band Software Defined Radio (SDR) is the first fully reprogrammable, space-qualified SDR operating in the Ka-Band frequency range. Providing exceptionally higher data communication rates than previously possible, this SDR offers in-orbit reconfiguration, multi-waveform operation, and fast deployment due to its highly modular hardware and software architecture. Currently in operation on the International Space Station (ISS), this new paradigm of reconfigurable technology is enabling experimenters to investigate navigation and networking in the space environment. The modular SDR and the NASA-developed Space Telecommunications Radio System (STRS) architecture standard are the basis for Harris' reusable digital signal processing space platform, trademarked as AppSTAR. Two new space radio products have already resulted: a synthetic aperture radar payload and an Automatic Dependent Surveillance-Broadcast (ADS-B) receiver. In addition, Harris is currently developing many new products similar to the Ka-Band software defined radio for other applications. For NASA's next-generation flight Ka-Band radio development, leveraging these advancements could lead to a more robust and more capable software defined radio. The space environment has special considerations, different from terrestrial applications, that must be taken into account for any system operated in space. Each space mission also has unique requirements, which can make these systems expensive and limited in reuse. Space systems put a premium on size, weight, and power. A key trade is the amount of reconfigurability in a space system: the more reconfigurable the hardware platform, the easier it is to adapt the platform to the next mission, which reduces non-recurring engineering costs; however, more reconfigurable platforms often use more spacecraft resources. Software has similar considerations to hardware. Having an architecture standard promotes reuse of software and firmware. Space platforms have limited processor capability, which makes the trade on the amount of flexibility paramount.
On parametrized cold dense matter equation-of-state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-07-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrized dense matter equations of state. In particular, we generalize and examine two inference paradigms from the literature: (i) direct posterior equation-of-state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective while the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilizing archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation-of-state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
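The alternative approach advocated above, repurposing archival posterior constraints as an approximative likelihood, can be sketched as follows: archival mass-radius posterior samples are turned into a density estimate that then weights equation-of-state parameters through a mapping to exterior parameters. Both the linear toy mapping eos_to_exterior() and the synthetic samples are hypothetical, standing in for a real interior-to-exterior (TOV-style) solver and real archival data.

```python
# Sketch: use an archival (M, R) posterior, via a kernel density estimate,
# as an approximate likelihood over EOS parameters. Everything here is a
# synthetic stand-in for real archival constraints and a real solver.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Hypothetical archival posterior samples: mass (Msun) and radius (km).
archival = np.vstack([rng.normal(1.4, 0.1, 5000),
                      rng.normal(11.5, 0.8, 5000)])
approx_like = gaussian_kde(archival)        # density estimate over (M, R)

def eos_to_exterior(theta):
    """Toy stand-in for solving stellar structure with EOS parameters theta."""
    stiffness, transition = theta
    M = 1.0 + 0.3 * stiffness
    R = 9.0 + 2.0 * stiffness - 0.5 * transition
    return np.array([M, R])

# Evaluate the approximate likelihood on a small grid of EOS parameters;
# multiplying by an EOS-space prior would give an (unnormalized) posterior.
for stiffness in (0.5, 1.0, 1.5):
    for transition in (0.0, 1.0):
        L = approx_like(eos_to_exterior([stiffness, transition]))[0]
        print(stiffness, transition, L)
```

The design point this illustrates is the one the abstract stresses: the archival information enters as a likelihood factor in EOS space, so the EOS-space prior is stated explicitly rather than inherited implicitly through a transformation of an exterior-parameter posterior.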
On parametrised cold dense matter equation of state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-04-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrised dense matter equations of state. In particular we generalise and examine two inference paradigms from the literature: (i) direct posterior equation of state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective whilst the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilising archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation of state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.