NASA Technical Reports Server (NTRS)
Vontiesenhausen, G. F.
1977-01-01
A program implementation model is presented which covers the in-space construction of certain large space systems from extraterrestrial materials. The model includes descriptions of major program elements and subelements and their operational requirements and technology readiness requirements. It provides a structure for future analysis and development.
Definition of ground test for verification of large space structure control
NASA Technical Reports Server (NTRS)
Doane, G. B., III; Glaese, J. R.; Tollison, D. K.; Howsman, T. G.; Curtis, S. (Editor); Banks, B.
1984-01-01
Control theory and design, dynamic system modelling, and simulation of test scenarios are the main ideas discussed. The overall effort is the achievement at Marshall Space Flight Center of a successful ground test experiment of a large space structure. A simplified planar model of ground test verification was developed, and the elimination from that model of the uncontrollable rigid body modes was examined. The hardware/software aspects of computation speed were also studied.
Modeling, Analysis, and Optimization Issues for Large Space Structures
NASA Technical Reports Server (NTRS)
Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)
1983-01-01
Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.
State-space reduction and equivalence class sampling for a molecular self-assembly model.
Packwood, Daniel M; Han, Patrick; Hitosugi, Taro
2016-07-01
Direct simulation of a model with a large state space will generate enormous volumes of data, much of which is not relevant to the questions under study. In this paper, we consider a molecular self-assembly model as a typical example of a large state-space model, and present a method for selectively retrieving 'target information' from this model. This method partitions the state space into equivalence classes, as identified by an appropriate equivalence relation. The set of equivalence classes H, which serves as a reduced state space, contains none of the superfluous information of the original model. After construction and characterization of a Markov chain with state space H, the target information is efficiently retrieved via Markov chain Monte Carlo sampling. This approach represents a new breed of simulation techniques which are highly optimized for studying molecular self-assembly and, moreover, serves as a valuable guideline for analysis of other large state-space models.
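The equivalence-class idea can be sketched with a toy stand-in for the self-assembly model (the n-site lattice, the assumed energy function, and all parameter values below are illustrative, not from the paper): partition the 2^n configurations by occupancy count, then run a Metropolis chain directly on the reduced space H whose stationary weights fold in the class sizes.

```python
import random, math

random.seed(0)

# Toy large state space: n sites, each empty (0) or occupied (1). Assume
# (hypothetically) the energy depends only on the occupancy count k, so
# "same k" is an equivalence relation and H = {0, 1, ..., n}.
n = 10
beta = 1.0

def energy(k):                 # assumed energy: favours occupied sites
    return -0.5 * k

# Size of each equivalence class: C(n, k) configurations share occupancy k.
class_size = [math.comb(n, k) for k in range(n + 1)]

def log_weight(k):
    # Stationary weight of class k = class_size[k] * exp(-beta * energy(k))
    return math.log(class_size[k]) - beta * energy(k)

# Metropolis chain on the 11-state reduced space instead of 2**10 configs.
k, samples = n // 2, []
for _ in range(20000):
    kp = k + random.choice([-1, 1])
    if 0 <= kp <= n and math.log(random.random()) < log_weight(kp) - log_weight(k):
        k = kp
    samples.append(k)

mcmc_mean = sum(samples[5000:]) / len(samples[5000:])

# Exact answer by summing over the classes, for comparison.
Z = sum(math.exp(log_weight(j)) for j in range(n + 1))
exact_mean = sum(j * math.exp(log_weight(j)) for j in range(n + 1)) / Z
print(round(mcmc_mean, 3), round(exact_mean, 3))
```

Because the chain lives on H, the "target information" (here the mean occupancy) is obtained without ever enumerating the full configuration space.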
Recent literature on structural modeling, identification, and analysis
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.
1990-01-01
The literature on the mathematical modeling of large space structures is first reviewed, with attention given to continuum models, model order reduction, substructuring, and computational techniques. System identification and mode verification are then discussed with reference to the verification of mathematical models of large space structures. In connection with analysis, the paper surveys recent research on eigensolvers and dynamic response solvers for large-order finite-element-based models.
Modeling space-time correlations of velocity fluctuations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2018-07-01
An analytical model for the streamwise velocity space-time correlations in turbulent flows is derived and applied to the special case of velocity fluctuations in large wind farms. The model is based on the Kraichnan-Tennekes random sweeping hypothesis, capturing the decorrelation in time while including a mean wind velocity in the streamwise direction. In the resulting model, the streamwise velocity space-time correlation is expressed as a convolution of the pure space correlation with an analytical temporal decorrelation kernel. Hence, the spatio-temporal structure of velocity fluctuations in wind farms can be derived from the spatial correlations only. We then explore the applicability of the model to predict spatio-temporal correlations in turbulent flows in wind farms. Comparisons of the model with data from a large eddy simulation of flow in a large, spatially periodic wind farm are performed, where needed model parameters such as spatial and temporal integral scales and spatial correlations are determined from the large eddy simulation. Good agreement is obtained between the model and large eddy simulation data showing that spatial data may be used to model the full temporal structure of fluctuations in wind farms.
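The convolution structure described above can be sketched numerically. The Gaussian spatial correlation and the parameter values (L, U, sigma_v) below are assumed for illustration, not taken from the paper; with a Gaussian spatial correlation the convolution has a closed form, which checks the numerics.

```python
import numpy as np

L = 1.0        # spatial correlation length (assumed)
U = 8.0        # mean streamwise advection speed, m/s (assumed)
sigma_v = 1.5  # r.m.s. random sweeping velocity, m/s (assumed)

def R_s(r):
    """Model spatial correlation (Gaussian, for illustration)."""
    return np.exp(-r ** 2 / (2 * L ** 2))

def R_spacetime(r, tau):
    """Spatial correlation convolved with the Gaussian sweeping kernel."""
    if tau == 0:
        return float(R_s(r))
    sig = sigma_v * tau
    s = np.linspace(-6 * sig, 6 * sig, 2001)
    kernel = np.exp(-s ** 2 / (2 * sig ** 2))
    kernel /= kernel.sum()                 # discrete probability weights
    return float(np.sum(R_s(r - U * tau - s) * kernel))

# The correlation peak rides along with the mean wind (r = U * tau) and
# decays in time through the widening sweeping kernel.
tau = 0.05
peak = R_spacetime(U * tau, tau)
exact = float(np.sqrt(L ** 2 / (L ** 2 + (sigma_v * tau) ** 2)))
print(round(peak, 4), round(exact, 4))
```

As in the model, only the spatial correlation R_s and two scalar parameters (U, sigma_v) are needed to produce the full space-time correlation.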
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ~1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
The Space Station as a Construction Base for Large Space Structures
NASA Technical Reports Server (NTRS)
Gates, R. M.
1985-01-01
The feasibility of using the Space Station as a construction site for large space structures is examined. An overview is presented of the results of a program entitled Definition of Technology Development Missions (TDM's) for Early Space Stations - Large Space Structures. The definition of LSS technology development missions must be responsive to the needs of future space missions which require large space structures. Long range plans for space were assembled by reviewing Space System Technology Models (SSTM) and other published sources. Those missions which will use large space structures were reviewed to determine the objectives which must be demonstrated by technology development missions. The three TDM's defined during this study are: (1) a construction storage/hangar facility; (2) a passive microwave radiometer; and (3) a precision optical system.
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
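A hypothetical single-variable CER illustrates the qualitative findings (power-law aperture scaling, 50% cost reduction per 17 years). The coefficient c0, the exponent, and the reference year below are assumed for illustration, not the paper's fitted values.

```python
def telescope_cost(aperture_m, year, c0=100.0, exponent=1.3, ref_year=2000):
    """Hypothetical CER: cost in $M. c0, exponent, ref_year are assumed."""
    halvings = (year - ref_year) / 17.0    # 50% cost reduction per 17 years
    return c0 * aperture_m ** exponent * 0.5 ** halvings

small = telescope_cost(1.0, 2017)
large = telescope_cost(4.0, 2017)
# An exponent below 2 means cost grows slower than collecting area (~ D^2),
# so the larger telescope is cheaper per square meter of aperture.
print(round(small, 1), round(large, 1), large / 16 < small)
```

The same functional form extends naturally to a multi-variable CER by multiplying in further power-law terms (e.g. mass, wavelength), which is the spirit of the multi-variable models cited above.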
On the accuracy of modelling the dynamics of large space structures
NASA Technical Reports Server (NTRS)
Diarra, C. M.; Bainum, P. M.
1985-01-01
Proposed space missions will require large-scale, lightweight, space-based structural systems. Large space structure technology (LSST) systems will have to accommodate (among others): ocean data systems; electronic mail systems; large multibeam antenna systems; and, space based solar power systems. The structures are to be delivered into orbit by the space shuttle. Because of their inherent size, modelling techniques and scaling algorithms must be developed so that system performance can be predicted accurately prior to launch and assembly. When the size and weight-to-area ratio of proposed LSST systems dictate that the entire system be considered flexible, there are two basic modeling methods which can be used. The first is a continuum approach, a mathematical formulation for predicting the motion of a general orbiting flexible body, in which elastic deformations are considered small compared with characteristic body dimensions. This approach is based on an a priori knowledge of the frequencies and shape functions of all modes included within the system model. Alternatively, finite element techniques can be used to model the entire structure as a system of lumped masses connected by a series of (restoring) springs and possibly dampers. In addition, a computational algorithm was developed to evaluate the coefficients of the various coupling terms in the equations of motion as applied to the finite element model of the Hoop/Column.
NASA Technical Reports Server (NTRS)
Joshi, S. M.; Groom, N. J.
1980-01-01
A finite element structural model of a 30.48 m x 30.48 m x 2.54 mm completely free aluminum plate is described and modal frequencies and mode shape data for the first 44 modes are presented. An explanation of the procedure for using the data is also presented. The model should prove useful for the investigation of controller design approaches for large flexible space structures.
Shape control of large space structures
NASA Technical Reports Server (NTRS)
Hagan, M. T.
1982-01-01
A survey has been conducted to determine the types of control strategies which have been proposed for controlling the vibrations in large space structures. From this survey several representative control strategies were singled out for detailed analyses. The application of these strategies to a simplified model of a large space structure has been simulated. These simulations demonstrate the implementation of the control algorithms and provide a basis for a preliminary comparison of their suitability for large space structure control.
Learning in the model space for cognitive fault diagnosis.
Chen, Huanhuan; Tino, Peter; Rodan, Ali; Yao, Xin
2014-01-01
The emergence of large sensor networks has facilitated the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or unformulated. In this paper, we develop an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a sliding window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using a one-class learning algorithm. The framework enables us to construct a fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network confirm the effectiveness of the proposed framework.
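A toy illustration of the model-space idea, with simple AR(1) fits standing in for the paper's fitted models, a sliding window over the signal, and an assumed distance threshold for the one-class rule (none of the numbers are from the paper):

```python
import random
random.seed(1)

def ar1_coeff(window):
    """Least-squares AR(1) fit x[t] ~ a * x[t-1]; the fitted a is the 'model'."""
    num = sum(window[t] * window[t - 1] for t in range(1, len(window)))
    den = sum(x * x for x in window[:-1])
    return num / den

def simulate(a, n, noise=0.1):
    x, out = 0.5, []
    for _ in range(n):
        x = a * x + random.gauss(0.0, noise)
        out.append(x)
    return out

# Healthy dynamics (a = 0.9) followed by a fault that changes them (a = 0.2).
signal = simulate(0.9, 400) + simulate(0.2, 400)

# Fit one small model per sliding-window segment: diagnosis happens in the
# space of fitted coefficients, not in the raw signal space.
win = 50
coeffs = [ar1_coeff(signal[i:i + win]) for i in range(0, len(signal) - win, win)]

# One-class rule learned from the early (assumed healthy) windows: flag any
# window whose model is far, in model space, from the healthy centre.
center = sum(coeffs[:4]) / 4
flags = [abs(c - center) > 0.35 for c in coeffs]
print(flags)
```

Here the model distance is simply the absolute difference of AR(1) coefficients; the paper develops a principled pairwise distance between fitted models, but the detection logic is analogous.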
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
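A minimal single-objective GA in the spirit of the multi-mode "hills" model problem. The test function, population size, rates, and operators below are illustrative choices, not those of the study:

```python
import math
import random
random.seed(0)

def hills(x):
    """Multi-modal test function: two modes per gene, best mode near 0.75."""
    return sum(math.sin(2 * math.pi * xi) ** 2 * (0.5 + 0.5 * (xi > 0.5))
               for xi in x)

n_genes, pop_size = 4, 40
pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]

for _ in range(60):
    scored = sorted(pop, key=hills, reverse=True)
    parents = scored[: pop_size // 2]        # truncation selection (elitist)
    pop = parents[:]
    while len(pop) < pop_size:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, n_genes)   # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:            # single-gene mutation
            child[random.randrange(n_genes)] = random.random()
        pop.append(child)

best = max(pop, key=hills)
print(round(hills(best), 3))
```

Because the best parents survive each generation, the population's best fitness is monotone non-decreasing, which is the reliability property the abstract emphasizes.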
Active control of large space structures: An introduction and overview
NASA Technical Reports Server (NTRS)
Doane, G. B., III; Tollison, D. K.; Waites, H. B.
1985-01-01
An overview of the large space structure (LSS) control system design problem is presented. The LSS is defined as a class of system, and LSS modeling techniques are discussed. Model truncation, control system objectives, current control law design techniques, and particular problem areas are discussed.
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
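The orthogonal-decomposition property can be sketched numerically: with fewer discrete observations than error degrees of freedom, the minimum-norm least-squares estimate recovers only the observable component of the model error, and the unobservable remainder is orthogonal to it. The observation operator A and the error vector below are illustrative stand-ins, not structural data.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 2, 4                        # 2 observation points, 4 model-error modes
A = rng.standard_normal((m, n))    # illustrative observation operator
e_true = np.array([0.5, -1.0, 0.25, 0.8])
y = A @ e_true                     # noise-free discrete measurements

# Minimum-norm least-squares estimate of the model error
e_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = e_true - e_hat

# e_hat lies in the row space of A, the residual in its null space: the two
# components are orthogonal, and the estimate reproduces the measurements.
print(bool(np.allclose(A @ e_hat, y)), round(float(e_hat @ residual), 10))
```

This mirrors the paper's decomposition of an arbitrary model error into two orthogonal components, of which only one can be recovered from the measurements.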
Modelling of Tethered Space-Web Structures
NASA Astrophysics Data System (ADS)
McKenzie, D. J.; Cartmell, M. P.
Large structures in space are an essential milestone in the path of many projects, from solar power collectors to space stations. In space, as on Earth, these large projects may be split up into more manageable sections, dividing the task into multiple replicable parts. Specially constructed spider robots could assemble these structures piece by piece over a membrane or space-web, giving a method for building a structure while on orbit. The modelling and applications of these space-webs are discussed, along with the derivation of the equations of motion of the structure. The presentation of some preliminary results from the solution of these equations will show that space-webs can take a variety of different forms, and give some guidelines for configuring the space-web system.
Shape determination and control for large space structures
NASA Technical Reports Server (NTRS)
Weeks, C. J.
1981-01-01
An integral operator approach is used to derive solutions to static shape determination and control problems associated with large space structures. Problem assumptions include a linear self-adjoint system model, observations and control forces at discrete points, and performance criteria for the comparison of estimates or control forms. Results are illustrated by simulations in the one-dimensional case with a flexible beam model, and in the multidimensional case with a finite element model of a large space antenna. Modal expansions for terms in the solution algorithms are presented, using modes from the static or the associated dynamic model. These expansions provide approximate solutions in the event that a closed-form analytical solution to the system boundary value problem is not available.
NASA Technical Reports Server (NTRS)
Smith, Suzanne Weaver; Beattie, Christopher A.
1991-01-01
On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches have an identification technique to determine structural characteristics from the measurements of the structure response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.
Solar EUV irradiance for space weather applications
NASA Astrophysics Data System (ADS)
Viereck, R. A.
2015-12-01
Solar EUV irradiance is an important driver of space weather models. Large changes in EUV and x-ray irradiances create large variability in the ionosphere and thermosphere. Proxies, such as the F10.7 cm radio flux, have provided reasonable estimates of the EUV flux, but as the space weather models become more accurate and the demands of the customers become more stringent, proxies are no longer adequate. Furthermore, proxies are often provided only on a daily basis, while shorter time scales are becoming important. Also, there is a growing need for multi-day forecasts of solar EUV irradiance to drive space weather forecast models. In this presentation we will describe the needs and requirements for solar EUV irradiance information from the space weather modeler's perspective. We will then translate these requirements into solar observational requirements such as spectral resolution and irradiance accuracy. We will also describe the activities at NOAA to provide long-term solar EUV irradiance observations and derived products that are needed for real-time space weather modeling.
Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn
NASA Astrophysics Data System (ADS)
Gargano, A.; Coraggio, L.; Itaco, N.
2017-09-01
This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which aims to reduce the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space, and then, by analyzing the effective single particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given and the merit of the double-step truncation method is illustrated by discussing a few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.
NASA Technical Reports Server (NTRS)
Buchanan, H. J.
1983-01-01
Work performed in Large Space Structures Controls research and development program at Marshall Space Flight Center is described. Studies to develop a multilevel control approach which supports a modular or building block approach to the buildup of space platforms are discussed. A concept has been developed and tested in three-axis computer simulation utilizing a five-body model of a basic space platform module. Analytical efforts have continued to focus on extension of the basic theory and subsequent application. Consideration is also given to specifications to evaluate several algorithms for controlling the shape of Large Space Structures.
NASA Technical Reports Server (NTRS)
1979-01-01
The development of large space structure technology is discussed. A detailed thermal analysis of a model space-fabricated 1-meter beam is presented. Alternative thermal coatings are evaluated, and deflections, stresses, and stiffness variations resulting from flight orientations and solar conditions are predicted.
Analysis and Ground Testing for Validation of the Inflatable Sunshield in Space (ISIS) Experiment
NASA Technical Reports Server (NTRS)
Lienard, Sebastien; Johnston, John; Adams, Mike; Stanley, Diane; Alfano, Jean-Pierre; Romanacci, Paolo
2000-01-01
The Next Generation Space Telescope (NGST) design requires a large sunshield to protect the large aperture mirror and instrument module from constant solar exposure at its L2 orbit. The structural dynamics of the sunshield must be modeled in order to predict disturbances to the observatory attitude control system and gauge effects on the line of sight jitter. Models of large, non-linear membrane systems are not well understood and have not been successfully demonstrated. To answer questions about sunshield dynamic behavior and demonstrate controlled deployment, the NGST project is flying a Pathfinder experiment, the Inflatable Sunshield in Space (ISIS). This paper discusses in detail the modeling and ground-testing efforts performed at the Goddard Space Flight Center to: validate analytical tools for characterizing the dynamic behavior of the deployed sunshield, qualify the experiment for the Space Shuttle, and verify the functionality of the system. Included in the discussion will be test parameters, test setups, problems encountered, and test results.
Control technology development
NASA Astrophysics Data System (ADS)
Schaechter, D. B.
1982-03-01
The main objectives of the control technology development task are given in the slide below. The first is to develop control design techniques based on flexible structural models, rather than simple rigid-body models. Since large space structures are distributed parameter systems, a new degree of freedom, that of sensor/actuator placement, may be exercised for improving control system performance. Another characteristic of large space structures is numerous oscillatory modes within the control bandwidth. Reduced-order controller design models must be developed which produce stable closed-loop systems when combined with the full-order system. Since the date of an actual large-space-structure flight is rapidly approaching, it is vitally important that theoretical developments are tested in actual hardware. Experimental verification is a vital counterpart of all current theoretical developments.
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1975-01-01
An optimum hypothetical organizational structure was studied for a large earth-orbiting, multidisciplinary research and applications space base manned by a crew of technologists. Because such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than with the empirical testing of the model. The essential finding of this research was that a four-level project type total matrix model will optimize the efficiency and effectiveness of space base technologists.
NASA/Howard University Large Space Structures Institute
NASA Technical Reports Server (NTRS)
Broome, T. H., Jr.
1984-01-01
Basic research on the engineering behavior of large space structures is presented. Methods of structural analysis, control, and optimization of large flexible systems are examined. Topics of investigation include the Load Correction Method (LCM) modeling technique, stabilization of flexible bodies by feedback control, mathematical refinement of analysis equations, optimization of the design of structural components, deployment dynamics, and the use of microprocessors in attitude and shape control of large space structures. Information on key personnel, budgeting, support plans and conferences is included.
Interactive computer graphics and its role in control system design of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.
1985-01-01
This paper attempts to show the relevance of interactive computer graphics in the design of control systems to maintain the attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model (dynamics modeling, modal analysis, and control system design methodology), are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures such as free-free beams and free-free plates are used to demonstrate the complexity of the control system design and the effectiveness of the interactive computer graphics.
Hurdles to Overcome to Model Carrington Class Events
NASA Astrophysics Data System (ADS)
Engel, M.; Henderson, M. G.; Jordanova, V. K.; Morley, S.
2017-12-01
Large geomagnetic storms pose a threat to both space and ground based infrastructure. In order to help mitigate that threat a better understanding of the specifics of these storms is required. Various computer models are being used around the world to analyze the magnetospheric environment, however they are largely inadequate for analyzing the large and extreme storm time environments. Here we report on the first steps towards expanding and robustifying the RAM-SCB inner magnetospheric model, used in conjunction with BATS-R-US and the Space Weather Modeling Framework, in order to simulate storms with Dst < -400. These results will then be used to help expand our modelling capabilities towards including Carrington-class events.
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follows a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both the researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
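A minimal sketch of a deterministically constructed simple-cycle reservoir on a delayed-recall task. The deterministic input-sign rule (a golden-ratio sequence, standing in for any fixed aperiodic pattern) and all parameter values are illustrative assumptions:

```python
import numpy as np

# Simple cycle topology: unit i feeds unit i+1 (mod N) with one fixed
# weight r; input weights share one absolute value v with deterministic,
# aperiodic signs (an assumed rule, in the spirit of the construction).
N, r, v = 50, 0.9, 0.2
W = np.zeros((N, N))
for i in range(N):
    W[(i + 1) % N, i] = r

phi = (5 ** 0.5 - 1) / 2
signs = np.array([1.0 if (i * phi) % 1.0 < 0.5 else -1.0 for i in range(N)])
w_in = v * signs

def run_reservoir(u):
    """Drive the cycle reservoir with a scalar input sequence u."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + w_in * ut)
        states[t] = x
    return states

# Train only the linear readout, ESN-style: predict the input 5 steps back.
rng = np.random.default_rng(0)
u = rng.uniform(-0.5, 0.5, 1200)
X = run_reservoir(u)
delay = 5
Xtr, ytr = X[200:1000], u[200 - delay:1000 - delay]
w_out, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
pred = X[1000:] @ w_out
target = u[1000 - delay:1200 - delay]
nmse = float(np.mean((pred - target) ** 2) / np.var(target))
print(round(nmse, 4))
```

No randomized reservoir construction is involved: the cycle weight, input scale, and sign pattern fully determine the model, which is the point of the deterministic construction.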
Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei
Dytrych, T.; Maris, P.; Launey, K. D.; ...
2016-06-22
We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU3-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly-parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.
Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dytrych, T.; Maris, Pieter; Launey, K. D.
2016-06-09
We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly-parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.
NASA Technical Reports Server (NTRS)
1971-01-01
The optimal allocation of resources to the national space program over an extended time period requires the solution of a large combinatorial problem in which the program elements are interdependent. The computer model uses an accelerated search technique to solve this problem. The model contains a large number of options selectable by the user to provide flexible input and a broad range of output for use in sensitivity analyses of all entering elements. Examples of these options are budget smoothing under varied appropriation levels, entry of inflation and discount effects, and probabilistic output which provides quantified degrees of certainty that program costs will remain within planned budget. Criteria and related analytic procedures were established for identifying potential new space program directions. Used in combination with the optimal resource allocation model, new space applications can be analyzed in realistic perspective, including the advantage gained from existing space program plant and on-going programs such as the space transportation system.
Two Mathematical Models of Nonlinear Vibrations
NASA Technical Reports Server (NTRS)
Brugarolas, Paul; Bayard, David; Spanos, John; Breckenridge, William
2007-01-01
Two innovative mathematical models of nonlinear vibrations, and methods of applying them, have been conceived as byproducts of an effort to develop a Kalman filter for highly precise estimation of bending motions of a large truss structure deployed in outer space from a space-shuttle payload bay. These models are also applicable to modeling and analysis of vibrations in other engineering disciplines, on Earth as well as in outer space.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted pareto fronts and a degradation in efficiency for problems with convoluted pareto fronts. The most difficult problems --multi-mode search spaces with a large number of genes and convoluted pareto fronts-- require a large number of function evaluations for GA convergence, but always converge.
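The Pareto-ranking step at the heart of such a multi-objective GA can be sketched as follows. The dominance test is the standard one for minimization; the random cloud of two-objective points is an illustrative stand-in for a population:

```python
import random
random.seed(0)

def dominates(p, q):
    """p dominates q (minimization): no worse in all objectives, better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    """The non-dominated subset of a finite set of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(random.random(), random.random()) for _ in range(200)]
front = pareto_front(pts)

# Dominance is transitive, so every dominated point is dominated by some
# member of the front: the front summarizes the whole trade-off surface.
covered = all(any(dominates(f, p) for f in front)
              for p in pts if p not in front)
print(len(front), covered)
```

Selection schemes such as the binning algorithm mentioned above operate on exactly this ranking, rewarding membership in (or proximity to) the current non-dominated set.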
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takemasa, Yuichi; Togari, Satoshi; Arai, Yoshinobu
1996-11-01
Vertical temperature differences tend to be great in a large indoor space such as an atrium, and it is important to predict variations of vertical temperature distribution in the early stage of the design. The authors previously developed and reported on a new simplified unsteady-state calculation model for predicting vertical temperature distribution in a large space. In this paper, this model is applied to predicting the vertical temperature distribution in an existing low-rise atrium that has a skylight and is affected by transmitted solar radiation. Detailed calculation procedures that use the model are presented with all the boundary conditions, and analytical simulations are carried out for the cooling condition. Calculated values are compared with measured results. The results of the comparison demonstrate that the calculation model can be applied to the design of a large space. The effects of occupied-zone cooling are also discussed and compared with those of all-zone cooling.
Preliminary results on the dynamics of large and flexible space structures in Halo orbits
NASA Astrophysics Data System (ADS)
Colagrossi, Andrea; Lavagna, Michèle
2017-05-01
The global exploration roadmap suggests, among other ambitious future space programmes, a possible manned outpost in lunar vicinity, to support surface operations and further astronaut training for longer and deeper space missions and transfers. In particular, a Lagrangian point orbit location in the Earth-Moon system is suggested for a manned cis-lunar infrastructure, a proposal which opens an interesting field of study from the astrodynamics perspective. The literature offers a wide body of research on orbital dynamics under the Three-Body Problem modelling approach, while fewer studies include the attitude dynamics modelling as well. However, whenever a large space structure (ISS-like) is considered, not only should the coupled orbit-attitude dynamics be modelled to run more accurate analyses, but the structural flexibility should be included too. The paper, starting from the well-known Circular Restricted Three-Body Problem formulation, presents some preliminary results obtained by adding a coupled orbit-attitude dynamical model and the effects due to the large structure flexibility. In addition, the most relevant perturbing phenomena, such as the Solar Radiation Pressure (SRP) and the fourth-body (Sun) gravity, are included in the model as well. A multi-body approach has been preferred to represent possible configurations of the large cis-lunar infrastructure: interconnected simple structural elements, such as beams, rods or lumped masses linked by springs, build up the space segment. To better investigate the relevance of the flexibility effects, the lumped parameters approach is compared with a distributed parameters semi-analytical technique. A sensitivity analysis of system dynamics, with respect to different configurations and mechanical properties of the extended structure, is also presented, in order to highlight drivers for the lunar outpost design.
Furthermore, a case study for a large and flexible space structure in Halo orbits around one of the Earth-Moon collinear Lagrangian points, L1 or L2, is discussed to point out some relevant outcomes for the potential implementation of such a mission.
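The Circular Restricted Three-Body Problem formulation that the paper starts from can be sketched in a few lines. Below is a minimal illustrative implementation of the planar CR3BP equations of motion in the rotating (synodic) frame with normalized units; the mass parameter value is the approximate Earth-Moon ratio, and the check at the triangular point L4 (an exact equilibrium of these equations) is purely a sanity test, not part of the paper's model:

```python
import numpy as np

# Planar CR3BP in the rotating frame, normalized units:
# distances scaled by the primary separation, mu = m2 / (m1 + m2).
def crtbp_derivatives(state, mu):
    """state = [x, y, vx, vy]; returns d(state)/dt in the rotating frame."""
    x, y, vx, vy = state
    r1 = np.hypot(x + mu, y)       # distance to the larger primary
    r2 = np.hypot(x - 1 + mu, y)   # distance to the smaller primary
    ax = 2*vy + x - (1 - mu)*(x + mu)/r1**3 - mu*(x - 1 + mu)/r2**3
    ay = -2*vx + y - (1 - mu)*y/r1**3 - mu*y/r2**3
    return np.array([vx, vy, ax, ay])

mu = 0.01215  # approximate Earth-Moon mass ratio
# The triangular point L4 is an equilibrium: all derivatives vanish there.
l4 = np.array([0.5 - mu, np.sqrt(3)/2, 0.0, 0.0])
print(crtbp_derivatives(l4, mu))  # ~ [0, 0, 0, 0]
```

A coupled orbit-attitude or flexible-structure model, as studied in the paper, augments this state vector with attitude and modal coordinates while keeping the same gravitational terms.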
Mathematical modeling and simulation of the space shuttle imaging radar antennas
NASA Technical Reports Server (NTRS)
Campbell, R. W.; Melick, K. E.; Coffey, E. L., III
1978-01-01
Simulations of space shuttle synthetic aperture radar antennas under the influence of space environmental conditions were carried out at L, C, and X-band. Mathematical difficulties in modeling large, non-planar array antennas are discussed, and an approximate modeling technique is presented. Results for several antenna error conditions are illustrated in far-field profile patterns, earth surface footprint contours, and summary graphs.
Emulating a flexible space structure: Modeling
NASA Technical Reports Server (NTRS)
Waites, H. B.; Rice, S. C.; Jones, V. L.
1988-01-01
Control Dynamics, in conjunction with Marshall Space Flight Center, has participated in the modeling and testing of Flexible Space Structures. Through the series of configurations tested and the many techniques used for collecting, analyzing, and modeling the data, many valuable insights have been gained and important lessons learned. This paper discusses the background of the Large Space Structure program, Control Dynamics' involvement in testing and modeling of the configurations (especially the Active Control Technique Evaluation for Spacecraft (ACES) configuration), the results from these two processes, and insights gained from this work.
Citygml and the Streets of New York - a Proposal for Detailed Street Space Modelling
NASA Astrophysics Data System (ADS)
Beil, C.; Kolbe, T. H.
2017-10-01
Three-dimensional semantic city models are increasingly used for the analysis of large urban areas. Until now the focus has mostly been on buildings. Nonetheless many applications could also benefit from detailed models of public street space for further analysis. However, there are only few guidelines for representing roads within city models. Therefore, related standards dealing with street modelling are examined and discussed. Nearly all street representations are based on linear abstractions. However, there are many use cases that require or would benefit from the detailed geometrical and semantic representation of street space. A variety of potential applications for detailed street space models are presented. Subsequently, based on related standards as well as on user requirements, a concept for a CityGML-compliant representation of street space in multiple levels of detail is developed. In the course of this process, the CityGML Transportation model of the currently valid OGC standard CityGML2.0 is examined to discover possibilities for further developments. Moreover, a number of improvements are presented. Finally, based on open data sources, the proposed concept is implemented within a semantic 3D city model of New York City generating a detailed 3D street space model for the entire city. As a result, 11 thematic classes, such as roadbeds, sidewalks or traffic islands are generated and enriched with a large number of thematic attributes.
Exploring theory space with Monte Carlo reweighting
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
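The essence of event reweighting is that a sample generated once under a benchmark model with density p_old can be re-used to estimate expectations under a new model with density p_new, by weighting each event with w = p_new/p_old. A minimal illustrative sketch (the densities here are toy 1-D Gaussians, not physics models):

```python
import random, math

def gauss_pdf(x, mu, sigma):
    """Normal probability density, used as a stand-in for a model density."""
    return math.exp(-0.5*((x - mu)/sigma)**2) / (sigma*math.sqrt(2*math.pi))

random.seed(0)
# "Events" simulated once under the benchmark model N(0, 1).
events = [random.gauss(0.0, 1.0) for _ in range(200_000)]

# Reweight to an alternative model N(0.5, 1) without re-simulating.
weights = [gauss_pdf(x, 0.5, 1.0) / gauss_pdf(x, 0.0, 1.0) for x in events]
mean_new = sum(w*x for w, x in zip(weights, events)) / sum(weights)
print(round(mean_new, 2))  # should be close to the new model's mean, 0.5
```

In practice the density ratio is evaluated from the matrix elements of the two theory points rather than from closed-form distributions, but the weighting step is the same.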
Subgrid-scale models for large-eddy simulation of rotating turbulent flows
NASA Astrophysics Data System (ADS)
Silvis, Maurits; Trias, Xavier; Abkar, Mahdi; Bae, Hyunji Jane; Lozano-Duran, Adrian; Verstappen, Roel
2016-11-01
This paper discusses subgrid models for large-eddy simulation of anisotropic flows using anisotropic grids. In particular, we are looking into ways to model not only the subgrid dissipation, but also transport processes, since these are expected to play an important role in rotating turbulent flows. We therefore consider subgrid-scale models of the form τ = -2ν_t S + μ_t (SΩ - ΩS), where the eddy viscosity ν_t is given by the minimum-dissipation model, μ_t represents a transport coefficient, S is the symmetric part of the velocity gradient, and Ω the skew-symmetric part. To incorporate the effect of mesh anisotropy, the filter length is taken in such a way that it minimizes the difference between the turbulent stress in physical and computational space, where the physical space is covered by an anisotropic mesh and the computational space is isotropic. The resulting model is successfully tested for rotating homogeneous isotropic turbulence and rotating plane-channel flows. The research was largely carried out during the CTR SP 2016. M.S. and R.V. acknowledge the financial support to attend this Summer Program.
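Evaluating the model form τ = -2ν_t S + μ_t (SΩ - ΩS) from a velocity-gradient tensor is a small tensor-algebra exercise. The sketch below is illustrative only: the coefficient values are placeholders (in the paper ν_t comes from the minimum-dissipation model), and the velocity gradient is random. It also verifies a structural property of the model: the commutator SΩ - ΩS is symmetric, so the modeled stress tensor is symmetric as a subgrid stress must be.

```python
import numpy as np

def sgs_stress(G, nu_t, mu_t):
    """tau = -2*nu_t*S + mu_t*(S@Omega - Omega@S) from velocity gradient G."""
    S = 0.5*(G + G.T)        # symmetric part (strain-rate tensor)
    Omega = 0.5*(G - G.T)    # skew-symmetric part (rotation-rate tensor)
    return -2*nu_t*S + mu_t*(S @ Omega - Omega @ S)

rng = np.random.default_rng(1)
G = rng.standard_normal((3, 3))          # illustrative velocity gradient
tau = sgs_stress(G, nu_t=0.05, mu_t=0.01)  # placeholder coefficients

# (S@Omega - Omega@S) is symmetric, so tau is a valid symmetric stress.
print(np.allclose(tau, tau.T))  # True
```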
Research on the control of large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1983-01-01
The research effort on the control of large space structures at the University of Houston has concentrated on the mathematical theory of finite-element models; identification of the mass, damping, and stiffness matrix; assignment of damping to structures; and decoupling of structure dynamics. The objective of the work has been and will continue to be the development of efficient numerical algorithms for analysis, control, and identification of large space structures. The major consideration in the development of the algorithms has been the large number of equations that must be handled by the algorithm as well as sensitivity of the algorithms to numerical errors.
Scientific Instrument Package for the large space telescope (SIP)
NASA Technical Reports Server (NTRS)
1972-01-01
The feasibility of a scientific instrument package (SIP) that will satisfy the requirements of the large space telescope was established. A reference configuration serving as a study model and data which will aid in the trade-off studies leading to the final design configuration are reported.
An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1973-01-01
The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.
Proceedings of the Workshop on Identification and Control of Flexible Space Structures, volume 1
NASA Technical Reports Server (NTRS)
Rodriguez, G. (Editor)
1985-01-01
Identification and control of flexible space structures were studied. The application of the most advanced modeling, estimation, identification, and control methodologies to flexible space structures was discussed. The following general areas were covered: space platforms, antennas, and flight experiments; control/structure interactions, including modeling, integrated design and optimization, control and stabilization, and shape control; control technology; control of space stations; and large antenna control, dynamics and control experiments, and control/structure interaction experiments.
NASA Technical Reports Server (NTRS)
Rodriguez, G. (Editor)
1983-01-01
Two general themes in the control of large space structures are addressed: control theory for distributed parameter systems and distributed control for systems requiring spatially-distributed multipoint sensing and actuation. Topics include modeling and control, stabilization, and estimation and identification.
Dynamic analysis of space structures including elastic, multibody, and control behavior
NASA Technical Reports Server (NTRS)
Pinson, Larry; Soosaar, Keto
1989-01-01
The problem is to develop analysis methods, modeling strategies, and simulation tools to predict with assurance the on-orbit performance and integrity of large complex space structures that cannot be verified on the ground. The problem must incorporate large reliable structural models, multi-body flexible dynamics, multi-tier controller interaction, environmental models including 1g and atmosphere, various on-board disturbances, and linkage to mission-level performance codes. All areas are in serious need of work, but the weakest link is multi-body flexible dynamics.
Boberg, P R; Tylka, A J; Adams, J H; Beahm, L P; Fluckiger, E O; Kleis, T; Kobel, E
1996-01-01
The large solar energetic particle (SEP) events and simultaneous large geomagnetic disturbances observed during October 1989 posed a significant, rapidly evolving space radiation hazard. Using data from the GOES-7, NOAA-10, IMP-8 and LDEF satellites, we determined the geomagnetic transmission, heavy ion fluences, mean Fe ionic charge state, and effective radiation hazard observed in low Earth orbit (LEO) for these SEPs. We modeled the geomagnetic transmission by tracing particles through the combination of the internal International Geomagnetic Reference Field (IGRF) and the Tsyganenko (1989) magnetospheric field models, extending the modeling to large geomagnetic disturbances. We used our results to assess the radiation hazard such very large SEP events would pose in the anticipated 52 degrees inclination space station orbit.
An optimal beam alignment method for large-scale distributed space surveillance radar system
NASA Astrophysics Data System (ADS)
Huang, Jian; Wang, Dongya; Xia, Shuangzhi
2018-06-01
Large-scale distributed space surveillance radar is an important ground-based system for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally aligning the narrow Transmitting/Receiving (T/R) beams over such a large volume poses a considerable technical challenge in the space surveillance area. Based on the common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using direction angles, which can visually describe and assess the beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location, and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of the novel method, which can significantly support the construction of LEO space debris surveillance equipment.
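The coordinate-transformation step underlying any such beam-pointing scheme is the rotation of a site-to-target vector from an Earth-fixed frame into the site's local East-North-Up (ENU) frame, from which azimuth and elevation direction angles are read off. The sketch below is illustrative geometry only, not the paper's algorithm: it assumes a spherical Earth, and the site coordinates are arbitrary example values.

```python
import numpy as np

R_E = 6371.0  # mean Earth radius, km (spherical-Earth assumption)

def site_ecef(lat_deg, lon_deg):
    """Earth-centered position of a surface site (spherical Earth)."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return R_E*np.array([np.cos(lat)*np.cos(lon),
                         np.cos(lat)*np.sin(lon),
                         np.sin(lat)])

def enu_matrix(lat_deg, lon_deg):
    """Rows are the local East, North, Up unit vectors in the ECEF frame."""
    lat, lon = np.radians([lat_deg, lon_deg])
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.array([-np.sin(lat)*np.cos(lon), -np.sin(lat)*np.sin(lon), np.cos(lat)])
    up = np.array([np.cos(lat)*np.cos(lon), np.cos(lat)*np.sin(lon), np.sin(lat)])
    return np.vstack([east, north, up])

def az_el(lat_deg, lon_deg, target_ecef):
    """Azimuth/elevation direction angles from a site to a target."""
    los = target_ecef - site_ecef(lat_deg, lon_deg)
    e, n, u = enu_matrix(lat_deg, lon_deg) @ los
    az = np.degrees(np.arctan2(e, n)) % 360.0
    el = np.degrees(np.arcsin(np.clip(u/np.linalg.norm(los), -1.0, 1.0)))
    return az, el

# Sanity check: a target 1000 km directly above a site sits at 90 deg elevation.
site_lat, site_lon = 33.0, -106.5  # example site coordinates
target = site_ecef(site_lat, site_lon) * (R_E + 1000.0)/R_E
print(az_el(site_lat, site_lon, target)[1])  # ~90.0
```

Aligning transmit and receive beams from two widely separated sites then amounts to solving for the direction angles at each site that place both boresights on the same point of the debris orbit.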
The impact of galaxy formation on satellite kinematics and redshift-space distortions
NASA Astrophysics Data System (ADS)
Orsi, Álvaro A.; Angulo, Raúl E.
2018-04-01
Galaxy surveys aim to map the large-scale structure of the Universe and use redshift-space distortions to constrain deviations from general relativity and probe the existence of massive neutrinos. However, the amount of information that can be extracted is limited by the accuracy of theoretical models used to analyse the data. Here, by using the L-Galaxies semi-analytical model run over the Millennium-XXL N-body simulation, we assess the impact of galaxy formation on satellite kinematics and the theoretical modelling of redshift-space distortions. We show that different galaxy selection criteria lead to noticeable differences in the radial distributions and velocity structure of satellite galaxies. Specifically, whereas samples of stellar mass selected galaxies feature satellites that roughly follow the dark matter, emission line satellite galaxies are located preferentially in the outskirts of haloes and display net infall velocities. We demonstrate that capturing these differences is crucial for modelling the multipoles of the correlation function in redshift space, even on large scales. In particular, we show how modelling small-scale velocities with a single Gaussian distribution leads to a poor description of the measured clustering. In contrast, we propose a parametrization that is flexible enough to model the satellite kinematics and that leads to an accurate description of the correlation function down to sub-Mpc scales. We anticipate that our model will be a necessary ingredient in improved theoretical descriptions of redshift-space distortions, which together could result in significantly tighter cosmological constraints and a more optimal exploitation of future large data sets.
The curious case of large-N expansions on a (pseudo)sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polyakov, Alexander M.; Saleem, Zain H.; Stokes, James
2015-02-03
We elucidate the large-N dynamics of one-dimensional sigma models with spherical and hyperbolic target spaces and find a duality between the Lagrange multiplier and the angular momentum. In the hyperbolic model we propose a new class of operators based on the irreducible representations of hyperbolic space. We also uncover unexpected zero modes which lead to the double scaling of the 1/N expansion and explore these modes using Gelfand-Dikiy equations.
Ionizing Radiation Environments and Exposure Risks
NASA Astrophysics Data System (ADS)
Kim, M. H. Y.
2015-12-01
Space radiation environments for historically large solar particle events (SPE) and galactic cosmic rays (GCR) are simulated to characterize exposures to radio-sensitive organs for missions to low-Earth orbit (LEO), the moon, a near-Earth asteroid, and Mars. Primary and secondary particles for SPE and GCR are transported through the respective atmospheres of Earth or Mars, the space vehicle, and the astronaut's body tissues using NASA's HZETRN/QMSFRG computer code. Space radiation protection methods, which are derived largely from ground-based methods recommended by the National Council on Radiation Protection and Measurements (NCRP) or the International Commission on Radiological Protection (ICRP), are built on the principles of risk justification, limitation, and ALARA (as low as reasonably achievable). However, because of the large uncertainties in high charge and energy (HZE) particle radiobiology and the small population of space crews, NASA has developed distinct methods to implement a space radiation protection program. For fatal cancer risks, which have been considered the dominant risk for GCR, the NASA Space Cancer Risk (NSCR) model has been developed from recommendations by the NCRP and has undergone external review by the National Research Council (NRC), the NCRP, and through peer-reviewed publications. The NSCR model uses GCR environmental models, particle transport codes describing the GCR modification by atomic and nuclear interactions in atmospheric shielding coupled with spacecraft and tissue shielding, and NASA-defined quality factors for solid cancer and leukemia risk estimates for HZE particles. By implementing the NSCR model, exposure risks under various heliospheric conditions are assessed for the radiation environments of various mission classes to understand architectures and strategies of human exploration missions and ultimately to contribute to the optimization of radiation safety and the well-being of space crewmembers participating in long-term space missions.
Optimal control of large space structures via generalized inverse matrix
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Fang, Xiaowen
1987-01-01
Independent Modal Space Control (IMSC) is a control scheme that decouples the space structure into n independent second-order subsystems according to n controlled modes and controls each mode independently. It is well known that IMSC eliminates the control and observation spillover caused when the conventional coupled modal control scheme is employed. The independent control of each mode requires that the number of actuators be equal to the number of modelled modes, which is very high for a faithful modeling of large space structures. A control scheme is proposed that allows one to use a reduced number of actuators to control all modelled modes suboptimally. In particular, the method of generalized inverse matrices is employed to implement the actuators such that the eigenvalues of the closed-loop system are as close as possible to those specified by the optimal IMSC. A computer simulation of the proposed control scheme on a simply supported beam is given.
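The key numerical step is easy to illustrate: with fewer actuators (m) than modelled modes (n), the modal forces prescribed by IMSC cannot all be realized exactly, so a least-squares actuator command is obtained from the Moore-Penrose generalized inverse of the modal influence matrix. The sketch below is not the authors' formulation; the matrix and force vector are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
n_modes, m_actuators = 6, 3                  # more modes than actuators
B = rng.standard_normal((n_modes, m_actuators))  # modal influence matrix
f_desired = rng.standard_normal(n_modes)         # IMSC-prescribed modal forces

u = np.linalg.pinv(B) @ f_desired   # suboptimal actuator commands
f_realized = B @ u                  # modal forces actually applied

# The generalized inverse yields the least-squares-optimal command:
# it coincides with the solution from an explicit least-squares solve.
u_lstsq = np.linalg.lstsq(B, f_desired, rcond=None)[0]
print(np.allclose(u, u_lstsq))  # True
print(np.linalg.norm(f_desired - f_realized))  # minimal achievable residual
```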
Identification of flexible structures by frequency-domain observability range context
NASA Astrophysics Data System (ADS)
Hopkins, M. A.
2013-04-01
The well-known frequency-domain observability range space extraction (FORSE) algorithm provides a powerful multivariable system-identification tool with inherent flexibility, to create state-space models from frequency-response data (FRD). This paper presents a method of using FORSE to create "context models" of a lightly damped system, from which models of individual resonant modes can be extracted. Further, it shows how to combine the extracted models of many individual modes into one large state-space model. Using this method, the author has created very high-order state-space models that accurately match measured FRD over very broad bandwidths, i.e., resonant peaks spread across five orders of magnitude of frequency bandwidth.
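The combination step described above amounts to block-diagonal stacking of the modal state-space models: the combined transfer function is then the sum of the individual modal transfer functions. The sketch below uses illustrative second-order modes, not identified FORSE models, and verifies that additivity numerically at one frequency.

```python
import numpy as np

def modal_model(omega, zeta):
    """A single lightly damped mode: x' = A x + B u, y = C x."""
    A = np.array([[0.0, 1.0], [-omega**2, -2*zeta*omega]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    return A, B, C

def combine(modes):
    """Stack modal models block-diagonally into one large state-space model."""
    n = 2*len(modes)
    A = np.zeros((n, n)); B = np.zeros((n, 1)); C = np.zeros((1, n))
    for i, (Ai, Bi, Ci) in enumerate(modes):
        s = slice(2*i, 2*i + 2)
        A[s, s] = Ai; B[s] = Bi; C[:, s] = Ci
    return A, B, C

modes = [modal_model(w, 0.01) for w in (1.0, 10.0, 100.0)]
A, B, C = combine(modes)

# Frequency response of the combined model equals the sum of modal responses.
s = 1j*5.0
H_big = (C @ np.linalg.solve(s*np.eye(6) - A, B))[0, 0]
H_sum = sum((Ci @ np.linalg.solve(s*np.eye(2) - Ai, Bi))[0, 0]
            for Ai, Bi, Ci in modes)
print(np.isclose(H_big, H_sum))  # True
```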
Models Archive and ModelWeb at NSSDC
NASA Astrophysics Data System (ADS)
Bilitza, D.; Papitashvili, N.; King, J. H.
2002-05-01
In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters, from the atmosphere to the ionosphere, the magnetosphere, and the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their usages and users. In response to a growing need in the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF), and the AP-8/AE-8 models for radiation belt protons and electrons. User accesses to both systems have been steadily increasing over recent years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 to the models archive.
Large-scale shell-model study of the Sn isotopes
NASA Astrophysics Data System (ADS)
Osnes, Eivind; Engeland, Torgeir; Hjorth-Jensen, Morten
2015-05-01
We summarize the results of an extensive study of the structure of the Sn isotopes using a large shell-model space and effective interactions evaluated from realistic two-nucleon potentials. For a fuller account, see ref. [1].
Efficient development and processing of thermal math models of very large space truss structures
NASA Technical Reports Server (NTRS)
Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.
1993-01-01
As the spacecraft moves along its orbit, the truss members are subjected to direct and reflected solar, albedo, and planetary infrared (IR) heating, as well as IR heating and shadowing from other spacecraft components. This is a transient process with continuously changing heating loads and shadowing effects. The resulting nonuniform temperature distribution may cause nonuniform thermal expansion, deflection, and stress in the truss elements, truss warping, and thermal distortions. There are three challenges in the thermal-structural analysis of large truss structures: the first is the development of the thermal and structural math models, the second is model processing, and the third is the data transfer between the models. All three tasks require considerable time and computer resources because of the very large number of components involved. To address these challenges, a series of techniques for automated thermal math modeling and efficient processing of very large space truss structures was developed. In the process, the finite element and finite difference methods are interfaced. A very substantial reduction in the quantity of computations was achieved while assuring the desired accuracy of the results. The techniques are illustrated on the thermal analysis of a segment of the Space Station main truss.
NASA Astrophysics Data System (ADS)
Valach, F.; Revallo, M.; Hejda, P.; Bochníček, J.
2010-12-01
Our modern society, with its advanced technology, is becoming increasingly vulnerable to disturbances of the Earth's environment originating in explosive processes on the Sun. Coronal mass ejections (CMEs), blasted into interplanetary space as gigantic clouds of ionized gas, can hit Earth within a few hours or days and cause, among other effects, geomagnetic storms, perhaps the best known manifestation of solar wind interaction with Earth's magnetosphere. Solar energetic particles (SEP), accelerated to near-relativistic energies during large solar storms, arrive at the Earth's orbit within minutes and pose a serious risk to astronauts traveling through interplanetary space. These and many other threats are the reason why experts pay increasing attention to space weather and its predictability. Research on space weather typically requires examining a large number of parameters which are interrelated in a complex non-linear way. One way to cope with such a task is to use an artificial neural network, a tool originally developed for artificial intelligence, for space weather modeling. In our contribution, we focus on practical aspects of applying neural networks to modeling and forecasting selected space weather parameters.
NASA Astrophysics Data System (ADS)
Shobe, Charles M.; Tucker, Gregory E.; Barnhart, Katherine R.
2017-12-01
Models of landscape evolution by river erosion are often either transport-limited (sediment is always available but may or may not be transportable) or detachment-limited (sediment must be detached from the bed but is then always transportable). While several models incorporate elements of, or transition between, transport-limited and detachment-limited behavior, most require that either sediment or bedrock, but not both, are eroded at any given time. Modeling landscape evolution over large spatial and temporal scales requires a model that can (1) transition freely between transport-limited and detachment-limited behavior, (2) simultaneously treat sediment transport and bedrock erosion, and (3) run in 2-D over large grids and be coupled with other surface process models. We present SPACE (stream power with alluvium conservation and entrainment) 1.0, a new model for simultaneous evolution of an alluvium layer and a bedrock bed based on conservation of sediment mass both on the bed and in the water column. The model treats sediment transport and bedrock erosion simultaneously, embracing the reality that many rivers (even those commonly defined as bedrock rivers) flow over a partially alluviated bed. SPACE improves on previous models of bedrock-alluvial rivers by explicitly calculating sediment erosion and deposition rather than relying on a flux-divergence (Exner) approach. The SPACE model is a component of the Landlab modeling toolkit, a Python-language library used to create models of Earth surface processes. Landlab allows efficient coupling between the SPACE model and components simulating basin hydrology, hillslope evolution, weathering, lithospheric flexure, and other surface processes. Here, we first derive the governing equations of the SPACE model from existing sediment transport and bedrock erosion formulations and explore the behavior of local analytical solutions for sediment flux and alluvium thickness. We derive steady-state analytical solutions for channel slope, alluvium thickness, and sediment flux, and show that SPACE matches predicted behavior in detachment-limited, transport-limited, and mixed conditions. We provide an example of landscape evolution modeling in which SPACE is coupled with hillslope diffusion, and demonstrate that SPACE provides an effective framework for simultaneously modeling 2-D sediment transport and bedrock erosion.
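A key mechanism that lets a model of this kind transition freely between transport-limited and detachment-limited behavior is the bed-cover effect: stream power erodes bedrock only where alluvium is thin. The sketch below is a simplified illustration of that partition, not the SPACE model itself; the coefficients K_sed, K_rock, the roughness scale H_star, and the discharge-slope product q*S are placeholder values.

```python
import math

def erosion_rates(H, q, S, K_sed=0.01, K_rock=0.001, H_star=1.0):
    """Partition stream power between alluvium entrainment and bedrock
    erosion using sediment thickness H and a bed-roughness scale H_star."""
    cover = math.exp(-H / H_star)            # fraction of bedrock exposed
    E_sed = K_sed * q * S * (1.0 - cover)    # alluvium entrainment rate
    E_rock = K_rock * q * S * cover          # bedrock erosion rate
    return E_sed, E_rock

# Thick alluvium shields the bed; a bare bed takes the full bedrock erosion.
thick = erosion_rates(H=10.0, q=1.0, S=0.01)
bare = erosion_rates(H=0.0, q=1.0, S=0.01)
print(thick)  # mostly sediment entrainment
print(bare)   # bedrock erosion only
```

Coupled with mass conservation for the alluvium layer, this kind of partition produces the smooth transition between end-member behaviors described in the abstract.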
Environmentally-induced discharge transient coupling to spacecraft
NASA Technical Reports Server (NTRS)
Viswanathan, R.; Barbay, G.; Stevens, N. J.
1985-01-01
The Hughes SCREENS (Space Craft Response to Environments of Space) technique was applied to generic spin-stabilized and 3-axis-stabilized spacecraft models. It involved NASCAP modeling for surface charging and lumped-element modeling for transient coupling into a spacecraft. A differential voltage between the antenna and the spun shelf of approx. 400 V and a current of 12 A resulted from a discharge at the antenna for the spinner, and approx. 3 kV and 0.3 A from a discharge at the solar panels for the 3-axis-stabilized spacecraft. A typical interface circuit response was analyzed to show that the transients would couple into the spacecraft system through ground points, which are most vulnerable. A compilation and review was performed on 15 years of available data on electron and ion current collection phenomena. Empirical models were developed to match the data and compared with flight data from the PIX-1 and PIX-2 missions. It was found that large space power systems would float negative and discharge if operated at or above 300 V. Several recommendations are given to improve the models and to apply them to large space systems.
A k-space method for large-scale models of wave propagation in tissue.
Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C
2001-03-01
Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) ≤ c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere.
Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
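The homogeneous-medium exactness of the k-space temporal iteration can be illustrated with a minimal sketch (illustrative only, not the authors' implementation): for constant sound speed and zero initial velocity, each Fourier mode of the pressure field evolves as cos(c0|k|t), so a single FFT-based step of any size reproduces the d'Alembert solution.

```python
import numpy as np

def kspace_propagate(p0, c0, dx, t):
    """Propagate an initial pressure field p0 (zero initial velocity)
    through a homogeneous medium of sound speed c0 for a time t.
    Each Fourier mode of the wave equation evolves as cos(c0*|k|*t),
    so this single step is exact regardless of the size of t."""
    k = 2.0 * np.pi * np.fft.fftfreq(p0.size, d=dx)
    return np.real(np.fft.ifft(np.fft.fft(p0) * np.cos(c0 * np.abs(k) * t)))

# d'Alembert check: the field splits into two half-amplitude pulses.
n, dx, c0 = 512, 1e-3, 1500.0            # grid points, spacing (m), speed (m/s)
x = (np.arange(n) - n // 2) * dx
p0 = np.exp(-(x / 5e-3) ** 2)            # Gaussian initial condition
t = 50 * dx / c0                         # time to travel 50 grid cells
p = kspace_propagate(p0, c0, dx, t)
exact = 0.5 * (np.exp(-((x - c0 * t) / 5e-3) ** 2)
               + np.exp(-((x + c0 * t) / 5e-3) ** 2))
print(np.max(np.abs(p - exact)))         # error at machine-precision level
```

For inhomogeneous media the paper's method replaces this exact factor with a k-t space propagator plus a scattering source term; the sketch only demonstrates the homogeneous limiting case.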
New Models and Methods for the Electroweak Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Linda
2017-09-26
This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak-scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and in annihilation in space.
Accomplishments include creating new tools for the analysis of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states such as pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac gaugino models.
Population Coding of Visual Space: Modeling
Lehky, Sidney R.; Sereno, Anne B.
2011-01-01
We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
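The unlabeled-population approach described above can be sketched with a toy model (all parameters hypothetical, not from the study): Gaussian receptive fields respond to a grid of stimuli, and classical multidimensional scaling applied to response-space distances recovers the relative spatial layout without any RF labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 neurons with 2-D Gaussian receptive fields respond
# to a 6x6 grid of stimulus positions; the decoder sees only firing rates,
# never the RF centers (unlabeled, "intrinsic" coding).
centers = rng.uniform(-1, 1, size=(200, 2))
sigma = 0.8                                      # large RF diameter
gx, gy = np.meshgrid(np.linspace(-1, 1, 6), np.linspace(-1, 1, 6))
stimuli = np.column_stack([gx.ravel(), gy.ravel()])
d2 = ((stimuli[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
R = np.exp(-d2 / (2 * sigma ** 2))               # population responses

# Classical MDS: double-center the squared response-space distances and
# embed on the top-2 eigenvectors (the low-dimensional manifold).
D2 = ((R[:, None, :] - R[None, :, :]) ** 2).sum(-1)
n = len(D2)
J = np.eye(n) - 1.0 / n
w, V = np.linalg.eigh(-0.5 * J @ D2 @ J)         # eigenvalues ascending
coords = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))

# Intrinsic codes recover relative, not absolute, positions: compare the
# distance structure of the embedding with physical stimulus distances.
iu = np.triu_indices(n, 1)
de = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)[iu]
dp = np.linalg.norm(stimuli[:, None] - stimuli[None, :], axis=-1)[iu]
print(np.corrcoef(de, dp)[0, 1])                 # high for large RFs
```

Shrinking `sigma` in this sketch degrades the correlation, mirroring the paper's finding that small RFs distort the recovered representation.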
NASA Astrophysics Data System (ADS)
Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.
2017-11-01
The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radio telescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure, since mesh structures of that size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m mirror is now being developed in Russia in the frame of the "SPECTR-R" program. The external dimensions of the telescope exceed the size of existing thermal-vacuum chambers used to verify SRT reflecting-surface accuracy under the action of space environment factors, so numerical simulation becomes the basis for accepting the chosen designs. Such modeling should be supported by experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting-surface deviations of a large deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space). A finite-element model and a complex of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors, taking into account deviation correction by the spacecraft orientation system. Modeling results are presented for two modes of SRT operation (orientation with respect to the Sun).
Space Shuttle wind tunnel testing program
NASA Technical Reports Server (NTRS)
Whitnah, A. M.; Hillje, E. R.
1984-01-01
A major phase of the Space Shuttle Vehicle (SSV) Development Program was the acquisition of data through the space shuttle wind tunnel testing program. It became obvious that the large number of configuration/environment combinations would necessitate an extremely large wind tunnel testing program. To make the most efficient use of available test facilities and to assist the prime contractor for orbiter design and space shuttle vehicle integration, a unique management plan was devised for the design and development phase. The space shuttle program is reviewed together with the evolutionary development of the shuttle configuration. The wind tunnel testing rationale, the associated test program management plan, and its overall results are reviewed. Information is given for the various facilities and models used within this program. A unique posttest documentation procedure and a summary of the types of tests per discipline, per facility, and per model are presented, with a detailed listing of the posttest documentation.
The isentropic quantum drift-diffusion model in two or three space dimensions
NASA Astrophysics Data System (ADS)
Chen, Xiuqing
2009-05-01
We investigate the isentropic quantum drift-diffusion model, a fourth-order parabolic system, in space dimensions d = 2, 3. First, we establish the existence of global weak solutions for large initial data and periodic boundary conditions. Then we show the semiclassical limit by delicate interpolation estimates and a compactness argument.
Status of DSMT research program
NASA Technical Reports Server (NTRS)
Mcgowan, Paul E.; Javeed, Mehzad; Edighoffer, Harold H.
1991-01-01
The status of the Dynamic Scale Model Technology (DSMT) research program is presented. DSMT is developing scale model technology for large space structures as part of the Controls-Structures Interaction (CSI) program at NASA Langley Research Center (LaRC). Under DSMT, a hybrid-scale structural dynamics model of Space Station Freedom was developed. Space Station Freedom was selected as the focus structure for DSMT since the station represents the first opportunity to obtain flight data on a complex, three-dimensional space structure. Included is an overview of DSMT, covering the development of the space station scale model and the resulting hardware. Scaling technology was developed for this model to achieve a ground test article that existing test facilities can accommodate while employing realistically scaled hardware. The model was designed and fabricated by the Lockheed Missiles and Space Co. and is assembled at LaRC for dynamic testing. Results from ground tests and analyses of the various model components are presented, along with plans for future subassembly and mated-model tests. Finally, utilization of the scale model for enhancing analysis verification of the full-scale space station is also considered.
NASA Technical Reports Server (NTRS)
Brooks, George W.
1985-01-01
The options for the design, construction, and testing of a dynamic model of the space station were evaluated. Since the definition of the space station structure is still evolving, the Initial Operating Capacity (IOC) reference configuration was used as the general guideline. The results of the studies treat: general considerations of the need for and use of a dynamic model; factors which deal with the model design and construction; and a proposed system for supporting the dynamic model in the planned Large Spacecraft Laboratory.
1996-06-20
Engineers at one of MSFC's vacuum chambers begin testing a microthruster model. The purpose of these tests is to collect sufficient data to enable NASA to develop microthrusters that can move the Space Shuttle, a future space station, or any other space vehicle with the least expenditure of energy. When something is sent into outer space, the force pulling it back toward Earth (gravity) is very small, so only a very small force is needed to move very large objects; in space, a force equal to the weight of a paperclip can move an object as large as a car. Microthrusters are used to produce these small forces.
Improved dynamic analysis method using load-dependent Ritz vectors
NASA Technical Reports Server (NTRS)
Escobedo-Torres, J.; Ricles, J. M.
1993-01-01
The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered; it is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
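A minimal sketch of load-dependent Ritz vector generation is shown below (a toy dense-matrix version; production structural codes use sparse factorizations and full reorthogonalization). Because the first Ritz vector spans K⁻¹f, the reduced basis reproduces the static response to the load pattern exactly, which is the key advantage over truncated eigenvector bases.

```python
import numpy as np

def load_dependent_ritz(K, M, f, r):
    """Generate r load-dependent Ritz vectors for (K, M) and spatial
    load pattern f: x1 = K^-1 f, x_{i+1} = K^-1 M x_i, with
    M-orthonormalization at each step."""
    X = np.zeros((K.shape[0], r))
    Kinv = np.linalg.inv(K)          # stand-in for a sparse factorization
    x = Kinv @ f
    for i in range(r):
        for j in range(i):           # M-orthogonalize against previous vectors
            x = x - (X[:, j] @ (M @ x)) * X[:, j]
        X[:, i] = x / np.sqrt(x @ (M @ x))
        x = Kinv @ (M @ X[:, i])
    return X

# Tiny spring-mass chain: compare reduced static response with a full solve.
n = 20
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness
M = np.eye(n)                                          # lumped mass
f = np.zeros(n); f[-1] = 1.0                           # tip load
X = load_dependent_ritz(K, M, f, 5)
u_full = np.linalg.solve(K, f)
u_red = X @ np.linalg.solve(X.T @ K @ X, X.T @ f)
print(np.linalg.norm(u_full - u_red) / np.linalg.norm(u_full))
```

The same reduced basis (X.T @ M @ X, X.T @ K @ X) would then be used for the transient dynamic solve in place of a truncated modal basis.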
NASA Astrophysics Data System (ADS)
Owolabi, Kolade M.
2018-03-01
In this work, we are concerned with the solution of space-fractional reaction-diffusion equations with the Riemann-Liouville space-fractional derivative in high dimensions. We approximate the Riemann-Liouville derivative with the Fourier transform method and advance the resulting system in time with any time-stepping solver. In the numerical experiments, we expect a travelling wave to arise from the given initial condition on the computational domain (-∞, ∞), which we truncate at a large but finite value L; L must be chosen large enough to give the waves sufficient room to propagate. Experimental results in high dimensions on space-fractional reaction-diffusion models with applications to biological models (the Fisher and Allen-Cahn equations) are considered. Simulation results reveal that fractional reaction-diffusion equations can give rise to a range of physical phenomena not seen in the integer-order cases. As a result, many meaningful and practical situations are found to be better modelled with the concepts of fractional calculus.
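The Fourier approach described above can be sketched in one dimension for the Fisher equation (parameters illustrative; the paper treats higher dimensions and more general solvers): the fractional Laplacian is applied through its Fourier symbol -|k|^α, and time stepping here is plain explicit Euler.

```python
import numpy as np

# Space-fractional Fisher equation u_t = -d*(-Δ)^{α/2} u + u(1 - u),
# with the fractional Laplacian evaluated spectrally via the symbol -|k|^α.
n, L, alpha, d, dt = 256, 40.0, 1.5, 0.1, 1e-3
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
symbol = -d * np.abs(k) ** alpha

u = np.exp(-x ** 2)                              # smooth initial hump
for _ in range(2000):                            # integrate to t = 2
    lap = np.real(np.fft.ifft(symbol * np.fft.fft(u)))
    u = u + dt * (lap + u * (1.0 - u))           # explicit Euler step

print(u.min(), u.max())    # front spreads while u stays between 0 and 1
```

The heavy algebraic tails of the fractional Laplacian make the front spread faster than in the classical (α = 2) case, which is the kind of qualitative difference the abstract refers to.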
A Hierarchical Framework for State-Space Matrix Inference and Clustering.
Zuo, Chandler; Chen, Kailei; Hewitt, Kyle J; Bresnick, Emery H; Keleş, Sündüz
2016-09-01
In recent years, a large number of genomic and epigenomic studies have been focusing on the integrative analysis of multiple experimental datasets measured over a large number of observational units. The objectives of such studies include not only inferring a hidden state of activity for each unit over individual experiments, but also detecting highly associated clusters of units based on their inferred states. Although there are a number of methods tailored for specific datasets, there is currently no state-of-the-art modeling framework for this general class of problems. In this paper, we develop the MBASIC (Matrix Based Analysis for State-space Inference and Clustering) framework. MBASIC consists of two parts: state-space mapping and state-space clustering. In state-space mapping, it maps observations onto a finite state space, representing the activation states of units across conditions. In state-space clustering, MBASIC incorporates a finite mixture model to cluster the units based on their inferred state-space profiles across all conditions. Both the state-space mapping and clustering can be simultaneously estimated through an Expectation-Maximization algorithm. MBASIC flexibly adapts to a large number of parametric distributions for the observed data, as well as to heterogeneity in replicate experiments. It allows for imposing structural assumptions on each cluster, and enables model selection using information criteria. In our data-driven simulation studies, MBASIC showed significant accuracy in recovering both the underlying state-space variables and clustering structures. We applied MBASIC to two genome research problems using large numbers of datasets from the ENCODE project. The first application grouped genes based on transcription factor occupancy profiles of their promoter regions in two different cell types.
The second application focused on identifying groups of loci that are similar to a GATA2 binding site that is functional at its endogenous locus by utilizing transcription factor occupancy data and illustrated applicability of MBASIC in a wide variety of problems. In both studies, MBASIC showed higher levels of raw data fidelity than analyzing these data with a two-step approach using ENCODE results on transcription factor occupancy data.
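The state-space clustering half of such a framework can be illustrated with a toy EM fit of a Bernoulli mixture over binary state profiles (a simplified analogue only; MBASIC itself jointly estimates the mapping and clustering with richer distributions).

```python
import numpy as np

def em_bernoulli_mixture(S, k, iters=200, seed=0):
    """EM for clustering binary state profiles S (units x conditions)
    into k groups: a toy analogue of state-space clustering."""
    rng = np.random.default_rng(seed)
    n, m = S.shape
    pi = np.full(k, 1.0 / k)                    # mixing proportions
    theta = rng.uniform(0.3, 0.7, size=(k, m))  # per-cluster state probabilities
    for _ in range(iters):
        # E-step: responsibilities from Bernoulli log-likelihoods
        ll = S @ np.log(theta).T + (1 - S) @ np.log(1 - theta).T + np.log(pi)
        ll -= ll.max(axis=1, keepdims=True)
        r = np.exp(ll)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing proportions and state probabilities
        pi = r.mean(axis=0)
        theta = np.clip((r.T @ S) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return r.argmax(axis=1), theta

# Two planted clusters of activation profiles across 12 conditions.
rng = np.random.default_rng(1)
S = np.vstack([rng.random((60, 12)) < 0.9,      # mostly-active units
               rng.random((60, 12)) < 0.1]).astype(float)   # mostly-inactive
labels, _ = em_bernoulli_mixture(S, k=2)
print(labels[:60].mean(), labels[60:].mean())   # near-pure assignments
```

Model selection over k, as in the paper, would compare information criteria computed from the converged log-likelihoods.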
NASA Technical Reports Server (NTRS)
Curreri, Peter A.; Detweiler, Michael
2010-01-01
Creating large space habitats by launching all materials from Earth is prohibitively expensive. Using space resources and space-based labor to build space solar power satellites can yield extraordinary profits after a few decades. The economic viability of this program depends on the use of space resources and space labor. To maximize the return on investment, the early use of high-density bolo habitats is required; other shapes do not allow for the small initial scale required for a quick population increase in space. This study found that 5 man-year, or 384-person, high-density bolo habitats would be the most economically feasible for a program started in 2010, turning a profit by year 24 of the program, putting over 45,000 people into space, and creating a large system of space infrastructure for the further exploration and development of space.
An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung
2011-01-01
In this paper, we describe an approach to integrating a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.
Space-Time Controls on Carbon Sequestration Over Large-Scale Amazon Basin
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Cooper, Harry J.; Gu, Jiujing; Grose, Andrew; Norman, John; daRocha, Humberto R.; Starr, David O. (Technical Monitor)
2002-01-01
A major research focus of the LBA Ecology Program is an assessment of the carbon budget and the carbon-sequestering capacity of the large-scale forest-pasture system that dominates the Amazonian landscape, and of its time-space heterogeneity as manifest in carbon fluxes across the large-scale Amazon basin ecosystem. Quantification of these processes requires a combination of in situ measurements, remotely sensed measurements from space, and a realistically forced hydrometeorological model coupled to a carbon assimilation model, capable of simulating details within the surface energy and water budgets along with the principal modes of photosynthesis and respiration. Here we describe the results of an investigation concerning the space-time controls of carbon sources and sinks distributed over the large-scale Amazon basin. The results are derived from a carbon-water-energy budget retrieval system for the basin, which uses a coupled carbon assimilation-hydrometeorological model as an integrating system, forced by both in situ meteorological measurements and remotely sensed radiation fluxes and precipitation retrieved from a combination of GOES, SSM/I, TOMS, and TRMM satellite measurements. A brief discussion is given concerning validation of (a) retrieved surface radiation fluxes and precipitation, based on 30-min averaged surface measurements taken at Ji-Parana in Rondonia and Manaus in Amazonas, and (b) modeled carbon fluxes, based on tower CO2 flux measurements taken at Reserva Jaru, Manaus, and Fazenda Nossa Senhora. The space-time controls on carbon sequestration are partitioned into sets of factors classified by: (1) above-canopy meteorology, (2) incoming surface radiation, (3) precipitation interception, and (4) indigenous stomatal processes varied over the different land covers of pristine rainforest, partially and fully logged rainforests, and pasture lands.
These are the principal meteorological, thermodynamical, hydrological, and biophysical control paths which perturb net carbon fluxes and sequestration, produce time-space switching of carbon sources and sinks, undergo modulation through atmospheric boundary layer feedbacks, and respond to any discontinuous intervention on the landscape itself, such as human conversion of rainforest to pasture or selective/clearcut logging operations.
Smirnova, Olga A; Cucinotta, Francis A
2018-02-01
A recently developed biologically motivated dynamical model for the assessment of the excess relative risk (ERR) for radiogenic leukemia among acutely/continuously irradiated humans (Smirnova, 2015, 2017) is applied to estimate the ERR for radiogenic leukemia among astronauts engaged in long-term interplanetary space missions. Numerous scenarios of space radiation exposure during space missions are used in the modeling studies. The dependence of the ERR for leukemia among astronauts on several mission parameters is examined, including the dose-equivalent rates of galactic cosmic rays (GCR) and large solar particle events (SPEs), the number of large SPEs, the time interval between SPEs, mission duration, the degree of astronauts' additional shielding during SPEs, the degree of their additional 12-hour daily shielding, and the total mission dose equivalent. The estimates of ERR for radiogenic leukemia among astronauts obtained in the framework of the developed dynamical model for various scenarios of space radiation exposure are compared with the corresponding results computed by the commonly used linear model. It is revealed that the developed dynamical model, along with the linear model, can be applied to estimate ERR for radiogenic leukemia among astronauts engaged in long-term interplanetary space missions within the range of applicability of the latter. In turn, the developed dynamical model is capable of predicting the ERR for leukemia among astronauts for irradiation regimes beyond the applicability range of the linear model, such as emergency cases. As a supplement to estimations of cancer incidence and death (REIC and REID) (Cucinotta et al., 2013, 2017), the developed dynamical model for the assessment of the ERR for leukemia can be employed in the pre-mission design phase for, e.g., the optimization of the regimes of astronauts' additional shielding in the course of interplanetary space missions.
The developed model can also be used on the phase of the real-time responses during the space mission to make the decisions on the operational application of appropriate countermeasures to minimize the risks of occurrences of leukemia, especially, for emergency cases. Copyright © 2017 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.
Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.
Onorante, Luca; Raftery, Adrian E
2016-01-01
Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
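A stylized sketch of the idea follows (illustrative only: the paper's DMA uses Kalman-filter recursions over time-varying-parameter models, and its dynamic Occam's window lets dropped models re-enter; here models are updated by simple LMS and pruning is one-way).

```python
import numpy as np

def dma_occam(y, X_list, lam=0.99, window=1e-4):
    """Dynamic model averaging over candidate forecast models, with
    forgetting factor lam on log model probabilities and an Occam's-window
    cut dropping models far below the current best."""
    k = len(X_list)
    active = list(range(k))
    logp = np.zeros(k)                          # log model probabilities
    beta = [np.zeros(X.shape[1]) for X in X_list]
    preds = []
    for t in range(len(y)):
        # combined one-step forecast from the currently active models
        w = np.exp(logp[active] - logp[active].max())
        w /= w.sum()
        preds.append(sum(wi * X_list[j][t] @ beta[j] for wi, j in zip(w, active)))
        for j in active:
            e = y[t] - X_list[j][t] @ beta[j]
            logp[j] = lam * logp[j] - 0.5 * e * e   # forgetting + predictive fit
            beta[j] += 0.1 * e * X_list[j][t]       # simple LMS coefficient update
        # Occam's window: keep only models close to the best one
        best = logp[active].max()
        active = [j for j in active if logp[j] - best > np.log(window)]
    return np.array(preds), active

# Demo: the data-generating model uses both regressors; the window
# should prune the one-regressor model.
rng = np.random.default_rng(0)
T = 400
X1 = rng.normal(size=(T, 1))
X2 = np.column_stack([X1[:, 0], rng.normal(size=T)])
y = X2[:, 0] - 2.0 * X2[:, 1] + 0.1 * rng.normal(size=T)
preds, active = dma_occam(y, [X1, X2])
print(active)
```

The forgetting factor `lam` plays the role of DMA's discounting of past predictive performance, so the active set can track a generating model that changes over time.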
Experimental Test of Coupled Wave Model of Large Coils
1985-06-01
Recent data from Time Domain Pulse Reflectometry experiments on a three-turn coil in the form of a race track corroborate the theory of the coupled wave model for large coils. Reference: Gabriel, "Coupled Wave Model for Large Magnet Coils", NASA Contractor Report 3332, National Aeronautics and Space Administration, Washington, DC.
Fifty years of human space travel: implications for bone and calcium research
USDA-ARS?s Scientific Manuscript database
Calcium and bone metabolism remain key concerns for space travelers, and ground-based models of space flight have provided a vast literature to complement the smaller set of reports from flight studies. Increased bone resorption and largely unchanged bone formation result in the loss of calcium and ...
Model verification of large structural systems. [space shuttle model response
NASA Technical Reports Server (NTRS)
Lee, L. T.; Hasselman, T. K.
1978-01-01
A computer program for the application of parameter identification to the structural dynamic models of the space shuttle and other large models with hundreds of degrees of freedom is described. Finite element, dynamic, analytic, and modal models are used to represent the structural system. The interface with math models is such that output from any structural analysis program applied to any structural configuration can be used directly. Processed data from either sine-sweep tests or resonant dwell tests are directly usable. The program uses measured modal data to condition the prior analytic model so as to improve the frequency match between model and test. A Bayesian estimator generates an improved analytical model, and a linear estimator is used in an iterative fashion on highly nonlinear equations. Mass and stiffness scaling parameters are generated for an improved finite element model, and the optimum set of parameters is obtained in one step.
Transforming community access to space science models
NASA Astrophysics Data System (ADS)
MacNeice, Peter; Hesse, Michael; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti
2012-04-01
Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.
Propagative selection of tilted array patterns in directional solidification
NASA Astrophysics Data System (ADS)
Song, Younggil; Akamatsu, Silvère; Bottin-Rousseau, Sabine; Karma, Alain
2018-05-01
We investigate the dynamics of tilted cellular/dendritic array patterns that form during directional solidification of a binary alloy when a preferred-growth crystal axis is misoriented with respect to the temperature gradient. In situ experimental observations and phase-field simulations in thin samples reveal the existence of a propagative source-sink mechanism of array spacing selection that operates on larger space and time scales than the competitive growth at play during the initial solidification transient. For tilted arrays, tertiary branching at the diverging edge of the sample acts as a source of new cells with a spacing that can be significantly larger than the initial average spacing. A spatial domain of large spacing then invades the sample propagatively. It thus yields a uniform spacing everywhere, selected independently of the initial conditions, except in a small region near the converging edge of the sample, which acts as a sink of cells. We propose a discrete geometrical model that describes the large-scale evolution of the spatial spacing profile based on the local dependence of the cell drift velocity on the spacing. We also derive a nonlinear advection equation that predicts the invasion velocity of the large-spacing domain, and sheds light on the fundamental nature of this process. The models also account for more complex spacing modulations produced by an irregular dynamics at the source, in good quantitative agreement with both phase-field simulations and experiments. This basic knowledge provides a theoretical basis to improve the processing of single crystals or textured polycrystals for advanced materials.
West, Amanda; Kumar, Sunil; Jarnevich, Catherine S.
2016-01-01
Regional analysis of large wildfire potential given climate change scenarios is crucial to understanding the areas most at risk in the future, yet wildfire models are not often developed and tested at this spatial scale. We fit three historical climate suitability models for large wildfires (i.e., ≥ 400 ha) in Colorado and Wyoming using topography and decadal climate averages corresponding to wildfire occurrence at the same temporal scale. The historical models classified points of known large wildfire occurrence with high accuracies. Using a novel approach in wildfire modeling, we applied the historical models to independent climate and wildfire datasets, and the resulting sensitivities were 0.75, 0.81, and 0.83 for Maxent, Generalized Linear, and Multivariate Adaptive Regression Splines models, respectively. We projected the historical models into future climate space using data from 15 global circulation models and two representative concentration pathway scenarios. Maps from these geospatial analyses can be used to evaluate the changing spatial distribution of climate suitability of large wildfires in these states. April relative humidity was the most important covariate in all models, providing insight into the climate space of large wildfires in this region. These methods incorporate monthly and seasonal climate averages at a spatial resolution relevant to land management (i.e., 1 km2) and provide a tool that can be modified for other regions of North America, or adapted for other parts of the world.
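The fit-then-validate workflow can be sketched with a toy logistic (GLM-style) suitability model on synthetic covariates (everything below is illustrative: covariates, coefficients, and data are invented, not the study's).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n):
    """Synthetic presence/absence data from two stand-in climate covariates
    (e.g. a humidity anomaly and elevation); coefficients are invented."""
    X = rng.normal(size=(n, 2))
    p = 1.0 / (1.0 + np.exp(-(-0.5 - 2.0 * X[:, 0] + 1.0 * X[:, 1])))
    return X, (rng.random(n) < p).astype(float)

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Logistic regression by gradient descent on the mean log-loss."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

Xtr, ytr = simulate(2000)
Xte, yte = simulate(2000)                  # independent evaluation set
w = fit_logistic(Xtr, ytr)
p = 1.0 / (1.0 + np.exp(-np.column_stack([np.ones(len(Xte)), Xte]) @ w))
# Sensitivity: the fraction of true occurrences the model flags as suitable.
sensitivity = ((p > 0.5) & (yte == 1)).sum() / (yte == 1).sum()
print(round(float(sensitivity), 2))
```

Projection into future climate space then amounts to evaluating the fitted model on covariate grids drawn from climate-model output rather than from the historical record.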
Space Generic Open Avionics Architecture (SGOAA): Overview
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1992-01-01
A space generic open avionics architecture created for NASA is described. It will serve as the basis for entities in spacecraft core avionics, capable of being tailored by NASA for future space program avionics ranging from small vehicles such as Moon ascent/descent vehicles to large ones such as Mars transfer vehicles or orbiting stations. The standard consists of: (1) a system architecture; (2) a generic processing hardware architecture; (3) a six class architecture interface model; (4) a system services functional subsystem architectural model; and (5) an operations control functional subsystem architectural model.
Thermal/structural design verification strategies for large space structures
NASA Technical Reports Server (NTRS)
Benton, David
1988-01-01
Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. Meeting these requirements demands a combination of analytical and testing methods, pursued through two approaches. The first is to limit thermal testing to sub-elements of the total system, tested only in a compact configuration (i.e., not fully deployed). The second is to use a simplified environment to correlate analytical models with test results; the correlated models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.
Characterizing Space Environments with Long-Term Space Plasma Archive Resources
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Miller, J. Scott; Diekmann, Anne M.; Parker, Linda N.
2009-01-01
A significant scientific benefit of establishing and maintaining long-term space plasma data archives is the ready access the archives afford to resources required for characterizing spacecraft design environments. Space systems must be capable of operating in the mean environments driven by climatology as well as the extremes that occur during individual space weather events. Long-term time series are necessary to obtain quantitative information on environment variability and extremes that characterize the mean and worst case environments that may be encountered during a mission. In addition, analysis of large data sets is important to scientific studies of flux limiting processes that provide a basis for establishing upper limits to environment specifications used in radiation or charging analyses. We present applications using data from existing archives and highlight their contributions to space environment models developed at Marshall Space Flight Center including the Chandra Radiation Model, ionospheric plasma variability models, and plasma models of the L2 space environment.
2D data-space cross-gradient joint inversion of MT, gravity and magnetic data
NASA Astrophysics Data System (ADS)
Pak, Yong-Chol; Li, Tonglin; Kim, Gang-Sop
2017-08-01
We have developed a data-space multiple cross-gradient joint inversion algorithm, validated it through synthetic tests, and applied it to magnetotelluric (MT), gravity and magnetic datasets acquired along a 95 km profile in the Benxi-Ji'an area of northeastern China. To begin, we discuss a generalized cross-gradient joint inversion for multiple datasets and model parameter sets, and formulate it in data space. The Lagrange multiplier required for the structural coupling in the data-space method is determined using an iterative solver to avoid calculation of the inverse matrix in solving the large system of equations. Next, using model-space and data-space methods, we inverted the synthetic data and field data. Based on our results, the joint inversion in data space not only delineates geological bodies more clearly than separate inversions, but also yields results nearly equal to those of the model-space method while consuming much less memory.
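The structural coupling referred to above is the cross-gradient function, which vanishes wherever two model gradients are parallel. As a hedged illustration (the toy grids and property fields below are invented for this sketch, not taken from the paper), a minimal numerical version in two dimensions is:

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Pointwise cross-gradient t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx)."""
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)   # axis 0 = z, axis 1 = x
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

z, x = np.mgrid[0:20, 0:30]
m1 = np.tanh((x - 15) / 3.0)            # a shared structural boundary
m2_aligned = 2.5 * m1 + 1.0             # same structure, different property values
m2_rotated = np.tanh((z - 10) / 3.0)    # structure rotated by 90 degrees

t_aligned = cross_gradient(m1, m2_aligned)   # vanishes: structures coincide
t_rotated = cross_gradient(m1, m2_rotated)   # nonzero: structures disagree
```

Driving the cross-gradient toward zero is what forces the MT, gravity and magnetic models to share structure without forcing any particular petrophysical relationship between them.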
Radiation health for a Mars mission
NASA Technical Reports Server (NTRS)
Robbins, Donald E.
1992-01-01
Uncertainties in risk assessments for exposure of a Mars mission crew to space radiation place limitations on mission design and operation. Large shielding penalties are imposed in order to obtain acceptable safety margins. Galactic cosmic rays (GCR) and solar particle events (SPE) are the major concern. A warning system and 'safe haven' are needed to protect the crew from large SPE which produce lethal doses. A model developed at NASA Johnson Space Center (JSC) to describe solar modulation of GCR intensities reduces that uncertainty to less than 10 percent. Radiation transport models used to design spacecraft shielding have large uncertainties in nuclear fragmentation cross sections for GCR which interact with spacecraft materials. Planned space measurements of linear energy transfer (LET) spectra behind various shielding thicknesses will reduce uncertainties in dose-versus-shielding thickness relationships to 5-10 percent. The largest remaining uncertainty is in the biological effects of space radiation. Data on the effects of energetic ions in humans are nonexistent. Experimental research on effects in animals and cells is needed to allow extrapolation to the risk of carcinogenesis in humans.
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to a large-scale tide. We then investigate the impact of a large-scale tide on estimates of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in parameter precision is removed if we can employ the prior on the rms tide amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained to an accuracy better than the CDM prediction, if effects up to larger wave numbers in the nonlinear regime can be included.
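For orientation, the baseline anisotropic signal on which such distance and redshift-space distortion estimates rest is, at linear order, the standard Kaiser formula (background material, not the tidal response function derived in this work):

$$ P_s(k, \mu) = \left(b + f\mu^2\right)^2 P_m(k), $$

where $b$ is the linear galaxy bias, $f = \mathrm{d}\ln D/\mathrm{d}\ln a$ is the linear growth rate, $\mu$ is the cosine of the angle between the wave vector and the line of sight, and $P_m(k)$ is the matter power spectrum. The large-scale tide adds further $\mu$-dependent distortions on top of this baseline.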
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, Jon D.
1990-01-01
Two topics are emphasized with respect to large space structures: quantifying uncertainty in frequency response using the fuzzy set method, and predicting on-orbit response using laboratory test data to refine an analytical model. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.
Adaptive control of large space structures using recursive lattice filters
NASA Technical Reports Server (NTRS)
Sundararajan, N.; Goglia, G. L.
1985-01-01
The use of recursive lattice filters for identification and adaptive control of large space structures is studied. Lattice filters were used to identify the structural dynamics model of the flexible structures. This identified model is then used for adaptive control. Before the identified model and control laws are integrated, the identified model is passed through a series of validation procedures, and only when the model passes these validation procedures is control engaged. This type of validation scheme prevents instability when the overall loop is closed. Another important area of research, namely robust controller synthesis, was investigated using frequency domain multivariable controller synthesis methods. The method uses the Linear Quadratic Gaussian/Loop Transfer Recovery (LQG/LTR) approach to ensure stability against unmodeled higher frequency modes and achieves the desired performance.
Preliminary design, analysis, and costing of a dynamic scale model of the NASA space station
NASA Technical Reports Server (NTRS)
Gronet, M. J.; Pinson, E. D.; Voqui, H. L.; Crawley, E. F.; Everman, M. R.
1987-01-01
The difficulty of testing the next generation of large flexible space structures on the ground places an emphasis on other means for validating predicted on-orbit dynamic behavior. Scale model technology represents one way of verifying analytical predictions with ground test data. This study investigates the preliminary design, scaling and cost trades for a Space Station dynamic scale model. The scaling of nonlinear joint behavior is studied from theoretical and practical points of view. Suspension system interaction trades are conducted for the ISS Dual Keel Configuration and Build-Up Stages suspended in the proposed NASA/LaRC Large Spacecraft Laboratory. Key issues addressed are scaling laws, replication vs. simulation of components, manufacturing, suspension interactions, joint behavior, damping, articulation capability, and cost. These issues are the subject of parametric trades versus the scale model factor. The results of these detailed analyses are used to recommend scale factors for four different scale model options, each with varying degrees of replication. Potential problems in constructing and testing the scale model are identified, and recommendations for further study are outlined.
Emulation: A fast stochastic Bayesian method to eliminate model space
NASA Astrophysics Data System (ADS)
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been the goal of geophysicists ever since the datasets first started to be produced. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, both kinds of schemes have proved prohibitively expensive, in both computing power and time, due to the normally very large model space which needs to be searched using forward model simulators which take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute, uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs with a computationally cheap function whose coefficients are fitted in terms of the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use the emulator to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10,000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset.
We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) data describing the same region), we can include in our modelling the uncertainties in the data measurements, in the relationships between the various physical parameters involved, and in the model representation, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties, and so the emulator is also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that our uncertainties from all sources (both data and model) can be fully evaluated.
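The emulator-screening loop described above can be sketched in miniature. Everything here is invented for illustration: a cheap 1-D "simulator" stands in for an expensive forward model, a polynomial stands in for the emulator, and the implausibility cutoff of 3 standard deviations follows common history-matching practice rather than this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):
    # stand-in for an expensive forward model (hypothetical, cheap on purpose)
    return np.sin(3.0 * theta) + 0.5 * theta

# 1) train a cheap emulator on a small number of simulator runs
design = np.linspace(0.0, 2.0, 16)
coeffs = np.polyfit(design, simulator(design), deg=7)

def emulate(t):
    return np.polyval(coeffs, t)

# 2) calibrate the emulator's error against the full simulator
dense = np.linspace(0.0, 2.0, 200)
emu_sd = np.abs(emulate(dense) - simulator(dense)).max()

# 3) screen a large candidate model space by implausibility
observed = simulator(0.7)          # synthetic "observed dataset"
obs_sd = 0.05                      # assumed observational uncertainty
candidates = rng.uniform(0.0, 2.0, 10_000)
implausibility = np.abs(observed - emulate(candidates)) / np.sqrt(emu_sd**2 + obs_sd**2)
plausible = candidates[implausibility < 3.0]
fraction_kept = plausible.size / candidates.size   # the bulk of model space is screened out
```

The surviving `plausible` set is what would then be handed to a deterministic inversion or MCMC run, at a fraction of the original search cost.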
Simulation study on dynamics model of two kinds of on-orbit soft-contact mechanism
NASA Astrophysics Data System (ADS)
Ye, X.; Dong, Z. H.; Yang, F.
2018-05-01
Addressing the problem that the operating conditions of a space manipulator are harsh and that the manipulator cannot bear a large collision momentum, this paper presents a new concept and technical method, namely soft-contact technology. Based on the ADAMS dynamics software, this paper compares and simulates two models of an on-orbit soft-contact mechanism: one based on a bionic model and one on an integrated double-joint model. The main purpose is to verify the path-planning ability and the momentum-buffering ability of mechanisms based on the two different design concepts. The simulation results show that both mechanism models provide the path-planning function before contact with the space target, as well as momentum buffering and controllability during the contact process.
New Paradigms for Ensuring the Enduring Viability of the Space Science Enterprise
NASA Astrophysics Data System (ADS)
Arenberg, Jonathan; Conti, Alberto
2018-01-01
Pursuing groundbreaking science in a highly cost- and funding-constrained environment presents new challenges to the development of future large space astrophysics missions. Within the conventional cost models for large observatories, executing a flagship "mission after next" appears to be unsustainable. Achieving our nation's space astrophysics ambitions requires new paradigms in system design, development, and manufacture. Implementation of this new paradigm requires that the space astrophysics community adopt new answers to a new set of questions. This poster will present our recent results on the origins of these new questions and the steps to their answers.
NASA Astrophysics Data System (ADS)
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state space, and to efficiently search a large, complex parameter space for behavioural parameter sets that produce predictions falling within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search for models whose predictions fall within this hyper-volume is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied first to find, and then to populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
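The interval-based screening idea can be made concrete with a deliberately tiny stand-in. The two-parameter "model", the signature definitions, and the interval limits below are all hypothetical (nothing here is PIHM or the Plynlimon data); the point is only the mechanic of keeping parameter sets whose signatures fall inside every behavioural interval:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy two-parameter "catchment model" (hypothetical, not PIHM)
def signatures(a, b):
    runoff_ratio = a / (a + b)     # stand-in for a discharge-based signature
    baseflow_index = np.exp(-b)    # stand-in for a relative groundwater level
    return runoff_ratio, baseflow_index

# behavioural intervals, as would come from data and perceptual understanding
limits = {"runoff_ratio": (0.3, 0.6), "baseflow_index": (0.2, 0.5)}

def behavioural_ok(a, b):
    r, g = signatures(a, b)
    return (limits["runoff_ratio"][0] <= r <= limits["runoff_ratio"][1]
            and limits["baseflow_index"][0] <= g <= limits["baseflow_index"][1])

params = rng.uniform(0.1, 3.0, size=(5000, 2))
behavioural = [(a, b) for a, b in params if behavioural_ok(a, b)]
# only parameter sets whose signatures fall inside every interval survive
```

In the paper's setting the surviving region is then populated efficiently by the Borg MOEA rather than by brute-force sampling as here.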
Parallel computing method for simulating hydrological processes of large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in large rivers. Hydrological process simulation based on physically based distributed models can give better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and therefore requires substantial computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Current parallel methods mostly parallelize in the space and time dimensions: they calculate the natural features of a distributed hydrological model in order, by grid cell (unit or sub-basin), from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the spatial and temporal runoff characteristics of distributed hydrological models with distributed data storage, in-memory databases, distributed computing, and parallel computing based on computing power units. The method is adaptable and extensible: it makes full use of the available computing and storage resources, and its efficiency improves nearly linearly as computing resources increase. It can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
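The upstream-to-downstream ordering described above amounts to a topological leveling of the sub-basin network: every sub-basin in a level depends only on earlier levels, so all members of one level can be computed in parallel. A minimal sketch (the basin names and edges are hypothetical, not from the paper):

```python
from collections import defaultdict, deque

# hypothetical sub-basin network: edges point downstream (upstream -> downstream)
edges = [("A", "C"), ("B", "C"), ("C", "E"), ("D", "E"), ("E", "F")]

def schedule_levels(edges):
    """Group sub-basins into levels; all basins within a level can run in parallel."""
    indeg = defaultdict(int)
    down = defaultdict(list)
    nodes = set()
    for u, v in edges:
        nodes.update((u, v))
        down[u].append(v)
        indeg[v] += 1
    frontier = deque(n for n in nodes if indeg[n] == 0)  # headwater basins
    levels = []
    while frontier:
        levels.append(sorted(frontier))
        nxt = deque()
        for u in frontier:
            for v in down[u]:
                indeg[v] -= 1
                if indeg[v] == 0:       # all upstream inputs are now available
                    nxt.append(v)
        frontier = nxt
    return levels

levels = schedule_levels(edges)
# → [['A', 'B', 'D'], ['C'], ['E'], ['F']]
```

Each level would then be dispatched to the available computing power units, with the level boundaries acting as synchronization points.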
An Overview of Quantitative Risk Assessment of Space Shuttle Propulsion Elements
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.
1998-01-01
Since the Space Shuttle Challenger accident in 1986, NASA has been working to incorporate quantitative risk assessment (QRA) in decisions concerning the Space Shuttle and other NASA projects. One current major NASA QRA study is the creation of a risk model for the overall Space Shuttle system. The model is intended to provide a tool to estimate Space Shuttle risk and to perform sensitivity analyses/trade studies, including the evaluation of upgrades. Marshall Space Flight Center (MSFC) is a part of the NASA team conducting the QRA study; MSFC responsibility involves modeling the propulsion elements of the Space Shuttle, namely: the External Tank (ET), the Solid Rocket Booster (SRB), the Reusable Solid Rocket Motor (RSRM), and the Space Shuttle Main Engine (SSME). This paper discusses the approach that MSFC has used to model its Space Shuttle elements, including insights obtained from this experience in modeling large scale, highly complex systems with a varying availability of success/failure data. Insights, which are applicable to any QRA study, pertain to organizing the modeling effort, obtaining customer buy-in, preparing documentation, and using varied modeling methods and data sources. Also provided is an overall evaluation of the study results, including the strengths and the limitations of the MSFC QRA approach and of QRA technology in general.
Zhang, Kechen
2016-01-01
The problem of how the hippocampus encodes both spatial and nonspatial information at the cellular network level remains largely unresolved. Spatial memory is widely modeled through the theoretical framework of attractor networks, but standard computational models can only represent spaces that are much smaller than the natural habitat of an animal. We propose that hippocampal networks are built on a basic unit called a “megamap,” or a cognitive attractor map in which place cells are flexibly recombined to represent a large space. Its inherent flexibility gives the megamap a huge representational capacity and enables the hippocampus to simultaneously represent multiple learned memories and naturally carry nonspatial information at no additional cost. On the other hand, the megamap is dynamically stable, because the underlying network of place cells robustly encodes any location in a large environment given a weak or incomplete input signal from the upstream entorhinal cortex. Our results suggest a general computational strategy by which a hippocampal network enjoys the stability of attractor dynamics without sacrificing the flexibility needed to represent a complex, changing world. PMID:27193320
Improving parallel I/O autotuning with performance modeling
Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...
2014-01-01
Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique on tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving a 54X I/O performance speedup.
Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle
NASA Technical Reports Server (NTRS)
Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat
1993-01-01
The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, and problems, as well as the methods, tools, and constraints that defined NASA's development of a 111 grid/16 million point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD-based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch-independent meshing. The resulting grid system has, on average, a four inch resolution along the surface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizuno, T
2004-09-03
Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (~10 MeV to 100 GeV) and the abundant components (protons, alphas, e−, e+, μ−, μ+ and gamma rays). It is expressed in analytic functions in which modulations due to solar activity and the Earth's geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high-energy astrophysical missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.
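The parameterized analytic form alluded to above can be illustrated, though not reproduced: the sketch below combines the standard force-field approximation for solar modulation with a smooth geomagnetic-cutoff factor. The LIS normalization and spectral index, the modulation potential, and the cutoff rigidity and steepness are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

M_P = 0.938  # proton rest energy [GeV]

def modulated_flux(E_kin, phi=0.65):
    """Force-field solar modulation of an illustrative LIS power law.
    E_kin in GeV, phi in GV; flux in arbitrary units."""
    E_lis = E_kin + phi                          # energy lost entering the heliosphere
    j_lis = 1.8e4 * E_lis ** -2.7                # illustrative local interstellar spectrum
    ratio = (E_kin * (E_kin + 2 * M_P)) / (E_lis * (E_lis + 2 * M_P))
    return j_lis * ratio

def geomagnetic_cutoff(E_kin, R_cut=4.5, r=12.0):
    """Smooth suppression below an assumed cutoff rigidity R_cut [GV]."""
    R = np.sqrt(E_kin * (E_kin + 2 * M_P))       # proton rigidity [GV]
    return 1.0 / (1.0 + (R / R_cut) ** -r)

E = np.logspace(-1, 2, 50)                        # 0.1 to 100 GeV
flux = modulated_flux(E) * geomagnetic_cutoff(E)  # spectrum at one assumed location
```

Varying `phi` sweeps the solar cycle and varying `R_cut` sweeps geomagnetic latitude, which is the sense in which the paper's model "parameterizes" both modulations.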
NASA Technical Reports Server (NTRS)
Noor, A. K.
1983-01-01
Advances in continuum modeling, progress in reduction methods, and analysis and modeling needs for large space structures are covered, with specific attention given to repetitive lattice trusses. As far as continuum modeling is concerned, an effective and verified analysis capability exists for linear thermoelastic stress, bifurcation buckling, and free vibration problems of repetitive lattices. However, application of continuum modeling to nonlinear analysis needs more development. Reduction methods are very effective for bifurcation buckling and static (steady-state) nonlinear analysis. However, more work is needed to realize their full potential for nonlinear dynamic and time-dependent problems. As far as analysis and modeling needs are concerned, three areas are identified: loads determination, modeling and nonclassical behavior characteristics, and computational algorithms. The impact of new advances in computer hardware, software, integrated analysis, CAD/CAM systems, and materials technology is also discussed.
Deriving Tools from Real-Time Runs: A New CCMC Support for SEC and AFWA
NASA Technical Reports Server (NTRS)
Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha
2007-01-01
The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities at the Space Environment Center, or at the Air Force Weather Agency.
Space construction base control system
NASA Technical Reports Server (NTRS)
1978-01-01
Aspects of an attitude control system were studied and developed for a large space base that is structurally flexible and whose mass properties change rather dramatically during its orbital lifetime. Topics of discussion include the following: (1) space base orbital pointing and maneuvering; (2) angular momentum sizing of actuators; (3) momentum desaturation selection and sizing; (4) multilevel control technique applied to configuration one; (5) one-dimensional model simulation; (6) N-body discrete coordinate simulation; (7) structural analysis math model formulation; and (8) discussion of control problems and control methods.
Dynamics of a neuron model in different two-dimensional parameter-spaces
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters, a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that periodic regions exist close to these chaotic regions, separated by the comb teeth, and that they organize themselves in period-adding bifurcation cascades.
On the apparent insignificance of the randomness of flexible joints on large space truss dynamics
NASA Technical Reports Server (NTRS)
Koch, R. M.; Klosner, J. M.
1993-01-01
Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors and structural imperfections, as well as nonlinear and/or nonconservative joint behavior. In addition, the necessity of precise pointing and positioning capability can require the consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes the use of a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply supported periodic large space trusses is considered. The formulation is a hybrid approach in terms of an extended Timoshenko beam continuum model, a Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.
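The finding that reasonable joint randomness barely moves the modal response can be reproduced in miniature by Monte Carlo sampling of a much simpler surrogate. The fixed-free spring-mass chain, unit masses, and 10% normal stiffness scatter below are illustrative assumptions standing in for the paper's Timoshenko-beam formulation, not its actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

def fundamental_freq(k):
    """First natural frequency of a fixed-free spring-mass chain (unit masses)."""
    n = k.size
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k[i]                          # spring to the previous mass (or wall)
        if i + 1 < n:
            K[i, i] += k[i + 1]                  # spring to the next mass
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return np.sqrt(np.linalg.eigvalsh(K)[0])     # eigvalsh sorts eigenvalues ascending

n_joints, k_mean, cov = 10, 100.0, 0.10          # 10% scatter in joint stiffness
samples = np.array([
    fundamental_freq(rng.normal(k_mean, cov * k_mean, n_joints))
    for _ in range(2000)
])
nominal = fundamental_freq(np.full(n_joints, k_mean))
rel_spread = samples.std() / samples.mean()
# the relative frequency scatter comes out well below the 10% stiffness scatter
```

The square-root dependence of frequency on stiffness, plus averaging over independent joints, is what damps the scatter, which is the qualitative mechanism behind the paper's conclusion.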
An Integrated, Optimization-Based Approach to the Design and Control of Large Space Structures.
1984-05-01
The investigators use a nonlinear beam model for the large motions, and a linear beam model to describe the small displacements as a perturbation around the large motion. The use of a quaternion avoids singularities which are often encountered in other attitude representations.
NASA Technical Reports Server (NTRS)
Wyman, C. L.; Griner, D. B.; Hurd, W. A.; Shelton, G. B.; Hunt, G. H.; Fannin, B. B.; Brealt, R. P.; Hawkins, C. A. (Inventor)
1978-01-01
An apparatus is described for measuring the effectiveness of stray-light-suppression light shields and baffle arrangements used in optical space experiments and large space telescopes. The light shield and baffle arrangement and a telescope model are contained in a vacuum chamber. A source of short, high-powered light pulses illuminates portions of the light shield and baffle arrangement, and a portion of this light reaches a photomultiplier tube by multipath scattering. The resulting signal is transferred to time-channel electronics timed by the firing of the high-energy light source, allowing time discrimination of the signal and thereby enabling light scattered and suppressed by the model to be distinguished from light scattered by the walls and holders around the apparatus.
NASA Astrophysics Data System (ADS)
Simpson, R.; Broussely, M.; Edwards, G.; Robinson, D.; Cozzani, A.; Casarosa, G.
2012-07-01
The National Physical Laboratory (NPL) and The European Space Research and Technology Centre (ESTEC) have performed for the first time successful surface temperature measurements using infrared thermal imaging in the ESTEC Large Space Simulator (LSS) under vacuum and with the Sun Simulator (SUSI) switched on during thermal qualification tests of the GAIA Deployable Sunshield Assembly (DSA). The thermal imager temperature measurements, with radiosity model corrections, show good agreement with thermocouple readings on well characterised regions of the spacecraft. In addition, the thermal imaging measurements identified potentially misleading thermocouple temperature readings and provided qualitative real-time observations of the thermal and spatial evolution of surface structure changes and heat dissipation during hot test loadings, which may yield additional thermal and physical measurement information through further research.
Increased intracranial pressure in mini-pigs exposed to simulated solar particle event radiation
NASA Astrophysics Data System (ADS)
Sanzari, Jenine K.; Muehlmatt, Amy; Savage, Alexandria; Lin, Liyong; Kennedy, Ann R.
2014-02-01
Changes in intracranial pressure (ICP) during space flight have stimulated an area of research in space medicine. It is widely speculated that elevations in ICP contribute to structural and functional ocular changes, including deterioration in vision, which is also observed during space flight. The aim of this study was to investigate changes in opening pressure (OP) occurring as a result of ionizing radiation exposure (at doses and dose-rates relevant to solar particle event radiation). We used a large animal model, the Yucatan mini-pig, and were able to obtain measurements over a 90 day period. This is the first investigation to show long term recordings of ICP in a large animal model without an invasive craniotomy procedure. Further, this is the first investigation reporting increased ICP after radiation exposure.
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Kelley, Gary W.
2012-01-01
The Department of Defense (DoD) defined System Operational Effectiveness (SOE) model provides an exceptional framework for an affordable approach to the development and operation of space launch vehicles and their supporting infrastructure. The SOE model provides a focal point from which to direct and measure technical effectiveness and process efficiencies of space launch vehicles. The application of the SOE model to a space launch vehicle's development and operation effort leads to very specific approaches and measures that require consideration during the design phase. This paper provides a mapping of the SOE model to the development of space launch vehicles for human exploration by addressing the SOE model key points of measurement including System Performance, System Availability, Technical Effectiveness, Process Efficiency, System Effectiveness, Life Cycle Cost, and Affordable Operational Effectiveness. In addition, the application of the SOE model to the launch vehicle development process is defined providing the unique aspects of space launch vehicle production and operations in lieu of the traditional broader SOE context that examines large quantities of fielded systems. The tailoring and application of the SOE model to space launch vehicles provides some key insights into the operational design drivers, capability phasing, and operational support systems.
The future of management: The NASA paradigm
NASA Technical Reports Server (NTRS)
Harris, Philip R.
1992-01-01
Prototypes of 21st-century management, especially for large-scale enterprises, may well be found within the aerospace industry. The space era inaugurated a number of projects of such scope and magnitude that another type of management had to be created to ensure successful achievement. The challenges will be not just technological and managerial, but also human and cultural in dimension. Futurists, students of management, and those concerned with technological administration would do well to review the literature of emerging space management for its wider implications. NASA offers a paradigm, or demonstrated model, of future trends in the field of management at large. More research is needed on issues of leadership for Earth-based projects in space and for space-based programs with managers based in space, and on the recognition that large-scale technical enterprises, such as those undertaken in space, require a new form of management. NASA and other responsible agencies are urged to study excellence in space macromanagement, including the necessary multidisciplinary skills. Two recommended targets are the application of general living systems theory and macromanagement concepts for space stations in the 1990s.
The Hantzsche-Wendt manifold in cosmic topology
NASA Astrophysics Data System (ADS)
Aurich, R.; Lustig, S.
2014-08-01
The Hantzsche-Wendt space is one of the 17 multiply connected spaces of the three-dimensional Euclidean space E^3. It is a compact and orientable manifold which can serve as a model for a spatially finite universe. Since it possesses far fewer matched back-to-back circle pairs on the cosmic microwave background (CMB) sky than the other compact flat spaces, it can escape detection by a search for matched circle pairs. The suppression of temperature correlations C(ϑ) on large angular scales of the CMB sky is studied. It is shown that the large-scale correlations are of the same order as for the three-torus topology but exhibit much larger variability. The Hantzsche-Wendt manifold provides a topological possibility with reduced large-angle correlations that can hide from searches for matched back-to-back circle pairs.
Towards a large-scale scalable adaptive heart model using shallow tree meshes
NASA Astrophysics Data System (ADS)
Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf
2015-10-01
Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.
Large craters on the meteoroid and space debris impact experiment
NASA Technical Reports Server (NTRS)
Humes, Donald H.
1991-01-01
The distribution around the Long Duration Exposure Facility (LDEF) of 532 large craters in the Al plates from the Meteoroid and Space Debris Impact Experiment (S0001) is discussed along with 74 additional large craters in Al plates donated to the Meteoroid and Debris Special Investigation Group by other LDEF experimenters. The craters are 0.5 mm in diameter and larger. Crater shape is discussed. The number of craters and their distribution around the spacecraft are compared with values predicted with models of the meteoroid environment and the manmade orbital debris environment.
NASA Technical Reports Server (NTRS)
Cleveland, Paul E.; Parrish, Keith A.
2005-01-01
A thorough and unique thermal verification and model validation plan has been developed for NASA's James Webb Space Telescope. The JWST observatory consists of a large deployed-aperture optical telescope passively cooled to below 50 Kelvin, along with a suite of several instruments passively and actively cooled to below 37 Kelvin and 7 Kelvin, respectively. Passive cooling to these extremely low temperatures is made feasible by the use of a large deployed high-efficiency sunshield and an orbit location at the L2 Lagrange point. Another enabling feature is the scale or size of the observatory, which allows for large radiator sizes that are compatible with the expected power dissipation of the instruments and large-format Mercury Cadmium Telluride (HgCdTe) detector arrays. This passive cooling concept is simple, reliable, and mission enabling when compared to the alternatives of mechanical coolers and stored cryogens. However, these same large-scale observatory features, which make passive cooling viable, also prevent the typical fully-deployed flight-configuration thermal balance test that is the keystone of most space missions' thermal verification plans. JWST is simply too large in its deployed configuration to be properly thermal balance tested in the facilities that currently exist. This reality, combined with a mission thermal concept having little to no flight heritage, has necessitated a unique and alternative approach to thermal system verification and model validation. This paper describes the thermal verification and model validation plan that has been developed for JWST. The plan relies on judicious use of cryogenic and thermal design margin, a completely independent thermal modeling cross check utilizing different analysis teams and software packages, and finally, a comprehensive set of thermal tests that occur at different levels of JWST assembly.
After a brief description of the JWST mission and thermal architecture, a detailed description of the three aspects of the thermal verification and model validation plan is presented.
A method for modeling contact dynamics for automated capture mechanisms
NASA Technical Reports Server (NTRS)
Williams, Philip J.
1991-01-01
Logicon Control Dynamics develops contact dynamics models for space-based docking and berthing vehicles. The models compute contact forces for the physical contact between mating capture-mechanism surfaces. Realistic simulation requires the proportionality constants used to calculate contact forces to approximate the surface stiffness of the contacting bodies. For rigid metallic bodies these constants become quite large, so small penetrations of surface boundaries can produce large contact forces.
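The penetration-based contact force described above is commonly realized as a spring-damper penalty model. A minimal sketch follows; the stiffness and damping coefficients are illustrative placeholders, not Logicon's actual values:

```python
def contact_force(penetration, penetration_rate, k=1.0e7, c=5.0e3):
    """Penalty-method normal contact force between mating surfaces.

    penetration      : surface overlap in metres (> 0 when in contact)
    penetration_rate : rate of change of penetration (m/s)
    k, c             : illustrative stiffness (N/m) and damping (N*s/m)
    """
    if penetration <= 0.0:
        return 0.0          # surfaces not touching: no force
    force = k * penetration + c * penetration_rate
    return max(force, 0.0)  # contact can only push, never pull

# With metallic-surface stiffness, even a 0.1 mm penetration yields ~1 kN
f = contact_force(1.0e-4, 0.0)
```

The large `k` demonstrates the abstract's point: stiff metallic interfaces turn tiny boundary penetrations into large forces, which in practice also forces small integration time steps.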
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability, owing to the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
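The contrast between a deterministic and a probabilistic power analysis can be illustrated with a simple Monte Carlo sketch. The array model, its parameter values, and the uncertainty spreads below are hypothetical stand-ins, not the SPACE model's equations:

```python
import random

def power_capability(area_m2, efficiency, degradation, insolation=1367.0):
    """Toy end-of-life solar-array power in watts (NOT the SPACE model)."""
    return area_m2 * insolation * efficiency * (1.0 - degradation)

def monte_carlo_power(n_samples=20000, seed=1):
    """Propagate input uncertainties into a power-capability distribution,
    instead of the single value a deterministic run would give."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        eff = rng.gauss(0.145, 0.005)   # cell efficiency (assumed spread)
        deg = rng.gauss(0.10, 0.02)     # lifetime degradation (assumed)
        samples.append(power_capability(100.0, eff, deg))
    samples.sort()
    mean = sum(samples) / n_samples
    p05 = samples[int(0.05 * n_samples)]   # 5th-percentile capability
    return mean, p05

mean_power, p05_power = monte_carlo_power()
```

The 5th-percentile value is the kind of figure a deterministic analysis cannot provide: a capability level the system exceeds with 95% confidence given the assumed input spreads.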
Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.
Dastmalchi, Pouya; Veronis, Georgios
2013-12-30
We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
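The aggressive space mapping iteration can be sketched in one dimension. The "fine" and "coarse" responses below are toy linear functions standing in for the FDFD simulation and the transmission-line model; the point is the structure of the loop, not the device physics:

```python
def solve(f, y, lo=0.0, hi=10.0, tol=1e-12):
    """Find x in [lo, hi] with f(x) = y for an increasing f (bisection)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def fine(x):
    """Stand-in for an expensive full-wave FDFD response (hypothetical)."""
    return 3.0 * x + 0.7

def coarse(x):
    """Cheap transmission-line-style surrogate of the same device."""
    return 3.0 * x

def aggressive_space_mapping(target, iters=5):
    x_c_star = solve(coarse, target)   # coarse-model optimal design
    x = x_c_star                       # initial guess for the fine design
    for _ in range(iters):
        # parameter extraction: coarse input reproducing the fine response
        z = solve(coarse, fine(x))
        x = x - (z - x_c_star)         # ASM update (identity mapping B = I)
    return x

x_opt = aggressive_space_mapping(9.0)  # fine(x) = 9 holds at x = 8.3/3
```

Each loop iteration costs exactly one fine-model evaluation, which mirrors the paper's observation that only a few full-wave simulations are needed before convergence.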
Guiding Conformation Space Search with an All-Atom Energy Potential
Brunette, TJ; Brock, Oliver
2009-01-01
The most significant impediment for protein structure prediction is the inadequacy of conformation space search. Conformation space is too large and the energy landscape too rugged for existing search methods to consistently find near-optimal minima. To alleviate this problem, we present model-based search, a novel conformation space search method. Model-based search uses highly accurate information obtained during search to build an approximate, partial model of the energy landscape. Model-based search aggregates information in the model as it progresses, and in turn uses this information to guide exploration towards regions most likely to contain a near-optimal minimum. We validate our method by predicting the structure of 32 proteins, ranging in length from 49 to 213 amino acids. Our results demonstrate that model-based search is more effective at finding low-energy conformations in high-dimensional conformation spaces than existing search methods. The reduction in energy translates into structure predictions of increased accuracy. PMID:18536015
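Model-based search can be caricatured in one dimension: evaluated conformations form an approximate model of the landscape, and new samples are biased toward its low-energy regions. Everything below (the toy landscape, the 70/30 exploit-explore split) is an illustrative assumption, not the authors' algorithm:

```python
import math
import random

def energy(x):
    """Toy rugged 1-D landscape standing in for an all-atom potential."""
    return 0.1 * (x - 3.0) ** 2 + math.sin(5.0 * x)

def model_based_search(n_rounds=200, seed=3):
    """Keep all evaluated samples as a crude landscape model and bias
    new samples toward the lowest-energy region found so far."""
    rng = random.Random(seed)
    samples = [(x, energy(x)) for x in
               (rng.uniform(-5.0, 10.0) for _ in range(5))]
    for _ in range(n_rounds):
        if rng.random() < 0.7:
            # exploit the model: perturb around the current best minimum
            x_best = min(samples, key=lambda s: s[1])[0]
            x = x_best + rng.gauss(0.0, 0.5)
        else:
            x = rng.uniform(-5.0, 10.0)   # explore unvisited regions
        samples.append((x, energy(x)))
    return min(samples, key=lambda s: s[1])

x_min, e_min = model_based_search()
```

The aggregation step (reusing every past evaluation to decide where to look next) is what distinguishes this family of methods from memoryless stochastic search.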
Time simulation of flutter with large stiffness changes
NASA Technical Reports Server (NTRS)
Karpel, Mordechay; Wieseman, Carol D.
1992-01-01
Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
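The simulation scheme, reduced to a single modal coordinate, can be sketched as follows. The numerical values and the amplitude-triggered decoupling rule are illustrative assumptions; the actual model carries many generalized coordinates and aerodynamic states:

```python
def simulate(k=4.0, c_unstable=-0.3, c_stable=0.4, trigger=2.0,
             dt=0.001, t_end=30.0):
    """Single-mode analogue of the paper's scheme: march a state-space
    model in time and switch only the damping coupling term when the
    tip-ballast decoupling mechanism fires (all numbers illustrative)."""
    q, qdot = 0.1, 0.0              # modal displacement and rate
    decoupled = False
    peak = 0.0
    for _ in range(int(t_end / dt)):
        c = c_stable if decoupled else c_unstable
        qddot = -c * qdot - k * q   # negative damping -> flutter growth
        qdot += qddot * dt          # semi-implicit Euler step
        q += qdot * dt
        peak = max(peak, abs(q))
        if not decoupled and abs(q) > trigger:
            decoupled = True        # decoupling mechanism stabilizes system
    return q, peak

q_final, peak = simulate()
```

Only the coupling coefficients change at the switching instant, while the state vector carries over, which is why the model size stays that of a single structural case.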
CSI related dynamics and control issues in space robotics
NASA Technical Reports Server (NTRS)
Schmitz, Eric; Ramey, Madison
1993-01-01
The research addressed includes: (1) CSI issues in space robotics; (2) control of elastic payloads, including a 1-DOF example and a 3-DOF harmonic-drive arm with an elastic beam; and (3) control of large space arms with elastic links, including testbed description, modeling, and experimental implementation of colocated PD and end-point tip-position controllers.
Managing Analysis Models in the Design Process
NASA Technical Reports Server (NTRS)
Briggs, Clark
2006-01-01
Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.
Phenomenological Modeling of Infrared Sources: Recent Advances
NASA Technical Reports Server (NTRS)
Leung, Chun Ming; Kwok, Sun (Editor)
1993-01-01
Infrared observations from planned space facilities (e.g., ISO (Infrared Space Observatory), SIRTF (Space Infrared Telescope Facility)) will yield a large and uniform sample of high-quality data from both photometric and spectroscopic measurements. To maximize the scientific returns of these space missions, complementary theoretical studies must be undertaken to interpret these observations. A crucial step in such studies is the construction of phenomenological models in which we parameterize the observed radiation characteristics in terms of the physical source properties. In the last decade, models with increasing degree of physical realism (in terms of grain properties, physical processes, and source geometry) have been constructed for infrared sources. Here we review current capabilities available in the phenomenological modeling of infrared sources and discuss briefly directions for future research in this area.
Structural Health Monitoring of Large Structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Bartkowicz, Theodore J.; Smith, Suzanne Weaver; Zimmerman, David C.
1994-01-01
This paper describes a damage detection and health monitoring method that was developed for large space structures using on-orbit modal identification. After evaluating several existing model refinement and model reduction/expansion techniques, a new approach was developed to identify the location and extent of structural damage with a limited number of measurements. A general area of structural damage is first identified and, subsequently, a specific damaged structural component is located. This approach takes advantage of two different model refinement methods (optimal-update and design sensitivity) and two different model size matching methods (model reduction and eigenvector expansion). Performance of the proposed damage detection approach was demonstrated with test data from two different laboratory truss structures. This space technology can also be applied to structural inspection of aircraft, offshore platforms, oil tankers, bridges, and buildings. In addition, its applications to model refinement will improve the design of structural systems such as automobiles and electronic packaging.
NASA Technical Reports Server (NTRS)
2002-01-01
Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (approximately 10 MeV to 100 GeV), and the abundant components (protons, alphas, e−, e+, μ−, μ+, and γ). It is expressed in analytic functions in which modulations due to solar activity and the Earth's geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high-energy astrophysics missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.
Preliminary Multi-Variable Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Hendrichs, Todd
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.
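A single-variable parametric cost model of this kind is typically a power law fitted in log-log space. A sketch follows; the telescope diameters and costs below are invented for illustration and are not Stahl and Hendrichs' dataset:

```python
import math

def fit_power_law(diameters_m, costs_musd):
    """Least-squares fit of cost = A * D**b in log-log space."""
    xs = [math.log(d) for d in diameters_m]
    ys = [math.log(c) for c in costs_musd]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return math.exp(a), b        # prefactor A and exponent b

# Hypothetical telescope data chosen so that cost grows slower than
# aperture area (D^2): cost per square metre falls for big telescopes.
D = [0.85, 2.4, 3.5, 6.5]
C = [200.0, 1500.0, 3000.0, 8800.0]
A, b = fit_power_law(D, C)
```

An exponent below 2 is exactly the "costs less per square meter of collecting aperture" finding: cost rises with diameter, but slower than collecting area does.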
NASA Astrophysics Data System (ADS)
Straus, D. M.
2006-12-01
The transitions between portions of the state space of the large-scale flow are studied from daily wintertime data over the Pacific-North America region, using the NCEP reanalysis data set (54 winters) and very large suites of hindcasts made with the COLA atmospheric GCM with observed SST (55 members for each of 18 winters). The partition of the large-scale state space is guided by cluster analysis, whose statistical significance and relationship to SST are reviewed (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). The global nature of the flow through state space is studied using Markov chains (Crommelin, 2004). In particular, the non-diffusive part of the flow is contrasted in nature (small data sample) and the AGCM (large data sample). The intrinsic error growth associated with different portions of the state space is studied through sets of identical-twin AGCM simulations. The goal is to obtain realistic estimates of predictability times for large-scale transitions that should be useful in long-range forecasting.
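The Markov-chain description of flow through a partitioned state space can be sketched as follows. The three-regime label sequence is a toy example, not the NCEP or COLA data; the antisymmetric part of the transition matrix is one simple proxy for the non-diffusive (preferred-circuit) component:

```python
def transition_matrix(labels, n_states):
    """Estimate Markov-chain transition probabilities from a sequence
    of daily cluster labels in 0..n_states-1."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(labels, labels[1:]):
        counts[a][b] += 1
    P = []
    for row in counts:
        total = sum(row)
        P.append([cnt / total if total else 0.0 for cnt in row])
    return P

def antisymmetric_part(P):
    """Non-diffusive component: vanishes for a purely diffusive walk."""
    n = len(P)
    return [[0.5 * (P[i][j] - P[j][i]) for j in range(n)] for i in range(n)]

# Toy 3-regime sequence with a preferred 0 -> 1 -> 2 -> 0 circuit
labels = [0, 1, 2, 0, 1, 2, 2, 0, 1, 0, 1, 2, 0, 1, 2, 0]
P = transition_matrix(labels, 3)
A_cyc = antisymmetric_part(P)
```

A positive `A_cyc[0][1]` flags a preferred direction of transit between regimes 0 and 1, the kind of signature that a small observational sample estimates far more noisily than a large hindcast ensemble.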
Fault-tolerant control of large space structures using the stable factorization approach
NASA Technical Reports Server (NTRS)
Razavi, H. C.; Mehra, R. K.; Vidyasagar, M.
1986-01-01
Large space structures are characterized by the following features: they are in general infinite-dimensional systems, and they have large numbers of undamped or lightly damped poles. Any attempt to apply linear control theory to large space structures must therefore take these features into account. Phase I consisted of an attempt to apply the recently developed Stable Factorization (SF) design philosophy to problems of large space structures, with particular attention to the aspects of robustness and fault tolerance. The final report on the Phase I effort consists of four sections, each devoted to one task. The first three sections report theoretical results, while the last consists of a design example. Significant results were obtained in all four tasks of the project. More specifically, an innovative approach to order reduction was obtained, stabilizing controller structures for plants with an infinite number of unstable poles were determined under some conditions, conditions for simultaneous stabilizability of an infinite number of plants were explored, and a fault-tolerant controller design was obtained that stabilizes a flexible structure model and is robust against one failure condition.
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
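The steepest-ascent baseline the authors improve on can be sketched for a toy three-spin system, where the model moments are computable by exact enumeration. The "true" fields and couplings below are invented so the fit has a known answer; real applications replace exact enumeration with Gibbs sampling:

```python
import itertools
import math

def ising_stats(h, J):
    """Exact moments <s_i> and <s_i s_j> of a small pairwise Ising model."""
    n = len(h)
    Z = 0.0
    m = [0.0] * n
    c = [[0.0] * n for _ in range(n)]
    for s in itertools.product((-1, 1), repeat=n):
        E = sum(h[i] * s[i] for i in range(n))
        E += sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        w = math.exp(E)
        Z += w
        for i in range(n):
            m[i] += w * s[i]
            for j in range(i + 1, n):
                c[i][j] += w * s[i] * s[j]
    return [x / Z for x in m], [[x / Z for x in row] for row in c]

def fit_maxent(m_data, c_data, lr=0.2, steps=2000):
    """Plain steepest ascent on the log-likelihood: the gradient is
    (data moments) - (model moments), zero when the moments match."""
    n = len(m_data)
    h = [0.0] * n
    J = [[0.0] * n for _ in range(n)]
    for _ in range(steps):
        m, c = ising_stats(h, J)
        for i in range(n):
            h[i] += lr * (m_data[i] - m[i])
            for j in range(i + 1, n):
                J[i][j] += lr * (c_data[i][j] - c[i][j])
    return h, J

# Generate target moments from known parameters, then recover them
h_true = [0.3, -0.2, 0.1]
J_true = [[0.0, 0.5, 0.0], [0.0, 0.0, -0.3], [0.0, 0.0, 0.0]]
m_data, c_data = ising_stats(h_true, J_true)
h_fit, J_fit = fit_maxent(m_data, c_data)
```

This uniform-step update is exactly what the paper argues is suboptimal: when the curvature (the covariance of the statistics) is inhomogeneous across parameter directions, a single learning rate converges slowly, motivating their rectified dynamics.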
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
We present a pedagogical systematic investigation of the accuracy of Eulerian and Lagrangian perturbation theories of large-scale structure. We show that significant differences exist between them, especially when trying to model the Baryon Acoustic Oscillations (BAO). We find that the best available model of the BAO in real space is the Zel'dovich Approximation (ZA), giving an accuracy of ≲3% at redshift z = 0 in modelling the matter 2-pt function around the acoustic peak. All corrections to the ZA around the BAO scale are perfectly perturbative in real space. Any attempt to achieve better precision requires calibrating the theory to simulations because of the need to renormalize those corrections. In contrast, theories which do not fully preserve the ZA as their solution receive O(1) corrections around the acoustic peak in real space at z = 0, and are thus of suspicious convergence at low redshift around the BAO. As an example, we find that a similar accuracy of 3% for the acoustic peak is achieved by Eulerian Standard Perturbation Theory (SPT) at linear order only at z ≈ 4. Thus even when SPT is perturbative, one needs to include loop corrections for z ≲ 4 in real space. In Fourier space, all models perform similarly and are controlled by the overdensity amplitude, thus recovering standard results. However, that comes at a price. Real space cleanly separates the BAO signal from non-linear dynamics. In contrast, Fourier space mixes signal from short mildly non-linear scales with the linear signal from the BAO to the level that non-linear contributions from short scales dominate. Therefore, one has little hope of constructing a systematic theory for the BAO in Fourier space.
Schüle, Steffen Andreas; Gabriel, Katharina M A; Bolte, Gabriele
2017-06-01
The environmental justice framework states that besides environmental burdens, resources too may be socially unequally distributed, both on the individual and on the neighbourhood level. This ecological study investigated whether neighbourhood socioeconomic position (SEP) was associated with neighbourhood public green space availability in a large German city with more than 1 million inhabitants. Two different measures were defined for green space availability. Firstly, the percentage of green space within neighbourhoods was calculated with the additional consideration of various buffers around the boundaries. Secondly, the percentage of green space was calculated based on various radii around the neighbourhood centroid. An index of neighbourhood SEP was calculated with principal component analysis. Log-gamma regression from the group of generalized linear models was applied in order to consider the non-normal distribution of the response variable. All models were adjusted for population density. Low neighbourhood SEP was associated with decreasing neighbourhood green space availability for 200 m up to 1000 m buffers around the neighbourhood boundaries. Low neighbourhood SEP was also associated with decreasing green space availability based on catchment areas measured from neighbourhood centroids with different radii (1000 m up to 3000 m); with an increasing radius, the strength of the associations decreased. Socially unequally distributed green space may amplify environmental health inequalities in an urban context. Thus, the identification of vulnerable neighbourhoods and population groups plays an important role for epidemiological research and healthy city planning. As a methodical aspect, log-gamma regression offers an adequate parametric modelling strategy for positively distributed environmental variables.
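Gamma regression with a log link, as used above, can be sketched via direct maximum likelihood. The five neighbourhood data points below are invented for illustration, and the simple gradient-ascent fit stands in for the iteratively reweighted least squares that GLM software actually uses:

```python
import math

def fit_log_gamma(xs, ys, lr=0.01, steps=20000):
    """Gamma GLM with a log link, E[y] = exp(b0 + b1*x), fitted by
    gradient ascent on the log-likelihood (the gamma shape parameter
    cancels from the score equations for the mean)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            mu = math.exp(b0 + b1 * x)
            r = y / mu - 1.0        # score residual for the log link
            g0 += r
            g1 += r * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical neighbourhoods: green-space share (%) falls as the
# socioeconomic deprivation index rises
sep_index = [-2.0, -1.0, 0.0, 1.0, 2.0]
green_pct = [30.0, 22.0, 16.0, 12.0, 9.0]
b0, b1 = fit_log_gamma(sep_index, green_pct)
```

A negative `b1` is the paper's qualitative result in miniature: each unit of deprivation multiplies expected green-space availability by exp(b1) < 1, while the gamma family respects the strictly positive, right-skewed response.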
Space Weather Research at the National Science Foundation
NASA Astrophysics Data System (ADS)
Moretto, T.
2015-12-01
There is growing recognition that the space environment can have substantial, deleterious impacts on society. Consequently, research enabling specification and forecasting of hazardous space effects has become of great importance and urgency. This research requires studying the entire Sun-Earth system to understand the coupling of regions all the way from the source of disturbances in the solar atmosphere to the Earth's upper atmosphere. The traditional, region-based structure of research programs in solar and space physics is ill-suited to fully support the change in research directions that the problem of space weather dictates. On the observational side, dense, distributed networks of observations are required to capture the full large-scale dynamics of the space environment. However, the cost of implementing these is typically prohibitive, especially for measurements in space. Thus, by necessity, the implementation of such new capabilities needs to build on creative and unconventional solutions. A particularly powerful idea is the utilization of new developments in data engineering and informatics research (big data). These new technologies make it possible to build systems that can collect and process huge amounts of noisy and inaccurate data and extract useful information from them. The shift in emphasis towards system-level science for geospace also necessitates the development of large-scale and multi-scale models. The development of large-scale models capable of capturing the global dynamics of the Earth's space environment requires investment in research team efforts that go beyond what can typically be funded under the traditional grants programs. This calls for effective interdisciplinary collaboration and efficient leveraging of resources both nationally and internationally. This presentation will provide an overview of current and planned initiatives, programs, and activities at the National Science Foundation pertaining to space weather research.
NASA Astrophysics Data System (ADS)
Jubb, Thomas; Kirk, Matthew; Lenz, Alexander
2017-12-01
We have considered a model of Dark Minimal Flavour Violation (DMFV), in which a triplet of dark matter particles couple to right-handed up-type quarks via a heavy colour-charged scalar mediator. By studying a large spectrum of possible constraints, and assessing the entire parameter space using a Markov Chain Monte Carlo (MCMC), we can place strong restrictions on the allowed parameter space for dark matter models of this type.
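An MCMC scan of a constrained parameter space can be sketched with a Metropolis-Hastings walk. The two-parameter "likelihood" below is a toy Gaussian stand-in (a preferred mass-coupling product plus an upper bound on the coupling), not the paper's actual constraint set:

```python
import math
import random

def log_likelihood(m_dm, coupling):
    """Toy stand-in for combined constraints: a preferred value for the
    product of mass and coupling, and a soft upper bound on the coupling."""
    relic = -0.5 * ((m_dm * coupling - 1.0) / 0.1) ** 2
    direct = -0.5 * (coupling / 0.8) ** 2
    return relic + direct

def mcmc_scan(n_steps=50000, seed=7):
    """Metropolis-Hastings walk over the (mass, coupling) plane."""
    rng = random.Random(seed)
    m, g = 1.0, 1.0
    ll = log_likelihood(m, g)
    chain = []
    for _ in range(n_steps):
        m_new = m + rng.gauss(0.0, 0.1)
        g_new = g + rng.gauss(0.0, 0.1)
        if m_new > 0.0 and g_new > 0.0:          # physical region only
            ll_new = log_likelihood(m_new, g_new)
            if math.log(rng.random()) < ll_new - ll:
                m, g, ll = m_new, g_new, ll_new  # accept proposal
        chain.append((m, g))
    return chain

chain = mcmc_scan()
```

The chain's density concentrates where all constraints are simultaneously satisfied, which is how an MCMC scan maps the allowed region without evaluating the likelihood on a full parameter grid.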
Biofidelic Human Activity Modeling and Simulation with Large Variability
2014-11-25
A systematic approach was developed for biofidelic human activity modeling and simulation by using body scan data and motion capture data to...replicate a human activity in 3D space. Since technologies for simultaneously capturing human motion and dynamic shapes are not yet ready for practical use, a...that can replicate a human activity in 3D space with the true shape and true motion of a human. Using this approach, a model library was built to
2016-11-01
space houses, etc.), and the unique weather environments that occur in the Urban Heat Island. A detailed urban terrain model was developed in a...affected by urban infrastructure (large buildings, roadways, densely space houses, etc.). A detailed urban terrain model was developed ERDC TR-15-5...different points in the model provided insight to complex propagation paths characteristic of urban environments.
Global fits of GUT-scale SUSY models with GAMBIT
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.
Laboratory Investigation of Space and Planetary Dust Grains
NASA Technical Reports Server (NTRS)
Spann, James
2005-01-01
Dust in space is ubiquitous and impacts diverse observed phenomena in various ways. Understanding the dominant mechanisms that control dust grain properties and their impact on surrounding environments is basic to improving our understanding of observed processes at work in space. There is a substantial body of work on the theory and modeling of dust in space and dusty plasmas. To substantiate and validate theory and models, laboratory investigations and space-borne observations have been conducted. Laboratory investigations are largely confined to an assembly of dust grains immersed in a plasma environment. Frequently the behaviors of these complex dusty plasmas in the laboratory have raised more questions than verified theories. Space-borne observations have helped us characterize planetary environments. The complex behavior of dust grains in space indicates the need to understand the microphysics of individual grains immersed in a plasma or space environment.
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Gates, R. M.; Straayer, J. W.
1975-01-01
The effect of localized structural damping on the excitability of higher-order large space telescope spacecraft modes is investigated. A preprocessor computer program is developed to incorporate Voigt structural joint damping models in a finite-element dynamic model. A postprocessor computer program is developed to select critical modes for low-frequency attitude control problems and for higher-frequency fine-stabilization problems. The selection is accomplished by ranking the flexible modes based on coefficients for rate gyro, position gyro, and optical sensor, and on image-plane motions due to sinusoidal or random PSD force and torque inputs.
NASA Technical Reports Server (NTRS)
Muheim, Danniella; Menzel, Michael; Mosier, Gary; Irish, Sandra; Maghami, Peiman; Mehalick, Kimberly; Parrish, Keith
2010-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2014. System-level verification of critical performance requirements will rely on integrated observatory models that predict the wavefront error accurately enough to verify that the allocated top-level wavefront error of 150 nm root-mean-squared (rms) through to the wavefront sensor focal plane is met. The assembled models themselves are complex and require the insight of technical experts to assess their ability to meet their objectives. This paper describes the systems engineering and modeling approach used on the JWST through the detailed design phase.
NASA Astrophysics Data System (ADS)
Colagrossi, Andrea; Lavagna, Michèle
2018-03-01
A space station in the vicinity of the Moon can be exploited as a gateway for future human and robotic exploration of the solar system. The natural location for a space system of this kind is near one of the Earth-Moon libration points. The study addresses the dynamics during rendezvous and docking operations with a very large space infrastructure in an EML2 Halo orbit. The model takes into account the coupling effects between the orbital and the attitude motion in a circular restricted three-body problem environment. The flexibility of the system is included, and the interaction between the modes of the structure and those related to the orbital motion is investigated. A lumped parameter technique is used to represent the flexible dynamics. The parameters of the space station are kept as generic as possible in order to delineate a global scenario of the mission. However, the developed model can be tuned and updated according to the information that will become available in the future, when the whole system is defined with a higher level of precision.
NASA Astrophysics Data System (ADS)
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain
2015-08-01
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. We focus on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
2011-12-11
CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model that was on display at the NASA Kennedy Space Center Visitor Complex in Florida nears the intersection of NASA Causeway and Kennedy Parkway. It is standard procedure for large payloads and equipment to travel against the normal flow of traffic under the supervision of a move crew when being transported on or off center property. The model is being moved from the visitor complex to NASA Kennedy Space Center's Launch Complex 39 turn basin. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis
Policy model for space economy infrastructure
NASA Astrophysics Data System (ADS)
Komerath, Narayanan; Nally, James; Zilin Tang, Elizabeth
2007-12-01
Extraterrestrial infrastructure is key to the development of a space economy. Means for accelerating transition from today's isolated projects to a broad-based economy are considered. A large system integration approach is proposed. The beginnings of an economic simulation model are presented, along with examples of how interactions and coordination bring down costs. A global organization focused on space infrastructure and economic expansion is proposed to plan, coordinate, fund and implement infrastructure construction. This entity also opens a way to raise low-cost capital and solve the legal and public policy issues of access to extraterrestrial resources.
An Integrated Analysis of the Physiological Effects of Space Flight: Executive Summary
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1985-01-01
A large array of models was applied in a unified manner to solve problems in space flight physiology. Mathematical simulation was used as an alternative way of looking at physiological systems and of maximizing the yield from previous space flight experiments. A medical data analysis system was created which consists of an automated data base, a computerized biostatistical and data analysis system, and a set of simulation models of physiological systems. Five basic models were employed: (1) a pulsatile cardiovascular model; (2) a respiratory model; (3) a thermoregulatory model; (4) a circulatory, fluid, and electrolyte balance model; and (5) an erythropoiesis regulatory model. Algorithms were provided to perform routine statistical tests, multivariate analysis, nonlinear regression analysis, and autocorrelation analysis. Special purpose programs were prepared for rank correlation, factor analysis, and the integration of the metabolic balance data.
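Of the statistical routines listed, rank correlation is simple enough to sketch from first principles. The implementation below (Spearman's rho, with ties assigned average ranks) is a generic illustration of the technique, not the analysis code described in the summary.

```python
def ranks(values):
    """Assign 1-based ranks; tied values receive the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# any monotonically increasing relationship gives rho = 1 exactly
assert spearman_rho([1, 2, 3, 4], [10, 100, 1000, 10000]) == 1.0
```

Because it depends only on ranks, the statistic is robust to the nonlinear, non-Gaussian relationships typical of physiological data, which is why rank correlation suits this kind of medical data base.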
Modeling extreme (Carrington-type) space weather events using three-dimensional MHD code simulations
NASA Astrophysics Data System (ADS)
Ngwira, C. M.; Pulkkinen, A. A.; Kuznetsova, M. M.; Glocer, A.
2013-12-01
There is growing concern over possible severe societal consequences related to adverse space weather impacts on man-made technological infrastructure and systems. In the last two decades, significant progress has been made towards the modeling of space weather events. Three-dimensional (3-D) global magnetohydrodynamics (MHD) models have been at the forefront of this transition, and have played a critical role in advancing our understanding of space weather. However, the modeling of extreme space weather events is still a major challenge even for existing global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events with a ground footprint comparable to (or larger than) that of the Carrington superstorm. Results are presented for an initial simulation run with "very extreme" constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated ground induced geoelectric field to such extreme driving conditions. We also discuss the results and what they might mean for the accuracy of the simulations. The model is further tested using input data for an observed space weather event to verify the MHD model's consistency and to draw guidance for future work. This extreme space weather MHD model is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in earth conductors such as power transmission grids.
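The final step mentioned above, from ground magnetic field variations to an induced geoelectric field, is commonly sketched with the textbook plane-wave method over a uniform conducting half-space. The sketch below uses that standard approximation, not the authors' modeling pipeline; the ground resistivity and the input signal are illustrative assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def geoelectric_field(bx, dt, rho=1000.0):
    """Surface geoelectric field Ey (V/m) from a horizontal magnetic
    field Bx (tesla) via the plane-wave method over a uniform half-space
    of resistivity rho (ohm-m):
        Ey(w) = Z(w) * Bx(w) / mu0,  with  Z(w) = sqrt(i * w * mu0 * rho).
    """
    n = len(bx)
    B = np.fft.rfft(bx)
    w = 2 * np.pi * np.fft.rfftfreq(n, d=dt)  # angular frequencies
    Z = np.sqrt(1j * w * MU0 * rho)           # surface impedance
    return np.fft.irfft(Z * B / MU0, n=n)

# a 1000 nT sinusoidal variation with a 10-minute period, sampled at 1 s
t = np.arange(0, 3600.0, 1.0)
bx = 1000e-9 * np.sin(2 * np.pi * t / 600.0)
ey = geoelectric_field(bx, dt=1.0)
```

For these illustrative numbers the peak field comes out at a few V/km, the order of magnitude at which geomagnetically induced currents in power transmission grids start to matter; the frequency-dependent impedance is why fast dB/dt, not just storm amplitude, drives the hazard.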
Effects of SO(10)-inspired scalar non-universality on the MSSM parameter space at large tanβ
NASA Astrophysics Data System (ADS)
Ramage, M. R.
2005-08-01
We analyze the parameter space of the (μ > 0, A_0 = 0) CMSSM at large tanβ with a small degree of non-universality originating from D-terms and Higgs-sfermion splitting inspired by SO(10) GUT models. The effects of such non-universalities on the sparticle spectrum and on observables such as (g-2)μ, B(b→Xγ), the SUSY threshold corrections to the bottom mass, and Ωh² are examined in detail, and the consequences for the allowed parameter space of the model are investigated. We find that even small deviations from universality can result in large qualitative differences compared to the universal case; for certain values of the parameters we find, even at low m_0 and m_1/2, that radiative electroweak symmetry breaking fails as a consequence of either μ² < 0 or m_A² < 0. We find particularly large departures from the mSugra case for the neutralino relic density, which is sensitive to significant changes in the position and shape of the A resonance and a substantial increase in the Higgsino component of the LSP. However, we find that the corrections to the bottom mass are not sufficient to allow for Yukawa unification.
Modeling velocity space-time correlations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2016-11-01
Turbulent fluctuations of wind velocities cause power-output fluctuations in wind farms. The statistics of velocity fluctuations can be described by velocity space-time correlations in the atmospheric boundary layer. In this context, it is important to derive simple physics-based models. The so-called Tennekes-Kraichnan random sweeping hypothesis states that small-scale velocity fluctuations are passively advected by large-scale velocity perturbations in a random fashion. In the present work, this hypothesis is used with an additional mean wind velocity to derive a model for the spatial and temporal decorrelation of velocities in wind farms. It turns out that in the framework of this model, space-time correlations are a convolution of the spatial correlation function with a temporal decorrelation kernel. In this presentation, first results on the comparison to large eddy simulations will be presented and the potential of the approach to characterize power output fluctuations of wind farms will be discussed. Acknowledgements: 'Fellowships for Young Energy Scientists' (YES!) of FOM, the US National Science Foundation Grant IIA 1243482, and support by the Max Planck Society.
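The stated result, that space-time correlations are a convolution of the spatial correlation function with a temporal decorrelation kernel, can be sketched numerically. The Gaussian spatial correlation, the mean wind U, and the sweeping velocity scale sigma below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def space_time_correlation(r, tau, L=100.0, U=8.0, sigma=1.0):
    """Model space-time correlation C(r, tau) as the spatial correlation
    R(r) = exp(-r^2 / (2 L^2)) convolved with a Gaussian sweeping kernel
    centred at the mean-wind displacement U*tau and of width sigma*tau
    (random sweeping by large-scale velocity fluctuations)."""
    if tau == 0.0:
        return float(np.exp(-r**2 / (2 * L**2)))
    s = np.linspace(-6 * sigma * tau, 6 * sigma * tau, 2001)
    kernel = np.exp(-s**2 / (2 * (sigma * tau) ** 2))
    kernel /= kernel.sum()  # discrete normalisation of the kernel
    R = np.exp(-(r - U * tau - s) ** 2 / (2 * L**2))  # shifted spatial corr.
    return float((R * kernel).sum())

# the correlation at fixed separation decays with time lag, while the
# peak of C(., tau) is advected downstream by the mean wind U
c0 = space_time_correlation(0.0, 0.0)
c_lag = space_time_correlation(0.0, 10.0)
c_advected = space_time_correlation(8.0 * 10.0, 10.0)
assert c0 > c_lag and c_advected > c_lag
```

With both functions Gaussian the convolution can also be done in closed form, giving a correlation whose peak travels at the mean wind speed and whose width grows with the sweeping velocity variance, which is exactly the decorrelation mechanism relevant to wind-farm power fluctuations.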
Cognitive engineering models in space systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1992-01-01
NASA space systems, including mission operations on the ground and in space, are complex, dynamic, predominantly automated systems in which the human operator is a supervisory controller. The human operator monitors and fine-tunes computer-based control systems and is responsible for ensuring safe and efficient system operation. In such systems the potential consequences of human mistakes and errors may be very large, so a very low probability of such events must be ensured. Thus, models of cognitive functions in complex systems are needed to describe human performance and form the theoretical basis of operator workstation design, including displays, controls, and decision support aids. The operator function model represents normative operator behavior: the expected operator activities given the current system state. The extension of the theoretical structure of the operator function model and its application to NASA Johnson mission operations and space station applications is discussed.
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola
1992-01-01
A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Omega sub 0 of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.
Performance/price estimates for cortex-scale hardware: a design space exploration.
Zaveri, Mazad S; Hammerstrom, Dan
2011-04-01
In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering.
A controller design approach for large flexible space structures
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1981-01-01
A controller design approach for large space structures is presented, which consists of a primary attitude controller and a secondary, damping-enhancement controller. The secondary controller, which uses several Annular Momentum Control Devices (AMCDs), is shown to make the closed-loop system asymptotically stable under relatively simple conditions. The primary controller, using torque actuators (or AMCDs) and colocated attitude and rate sensors, is shown to be stable. It is shown that the same AMCDs can be used for simultaneous actuation of the primary and secondary controllers. Numerical results are obtained for a large, thin, completely free plate model.
Laboratory development and testing of spacecraft diagnostics
NASA Astrophysics Data System (ADS)
Amatucci, William; Tejero, Erik; Blackwell, Dave; Walker, Dave; Gatling, George; Enloe, Lon; Gillman, Eric
2017-10-01
The Naval Research Laboratory's Space Chamber experiment is a large-scale laboratory device dedicated to the creation of large-volume plasmas with parameters scaled to realistic space plasmas. Such devices make valuable contributions to the investigation of space plasma phenomena under controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. However, in addition to investigations such as plasma wave and instability studies, such devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this talk, we will describe how the laboratory simulation of space plasmas made this development path possible. Work sponsored by the US Naval Research Laboratory Base Program.
Model error estimation for distributed systems described by elliptic equations
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1983-01-01
A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.
2011-12-11
CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model that was on display at the NASA Kennedy Space Center Visitor Complex in Florida travels northbound along Kennedy Parkway toward NASA Kennedy Space Center's Launch Complex 39 turn basin. It is standard procedure for large payloads and equipment to travel against the normal flow of traffic under the supervision of a move crew when being transported on or off center property. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis
Cooling Technology for Large Space Telescopes
NASA Technical Reports Server (NTRS)
DiPirro, Michael; Cleveland, Paul; Durand, Dale; Klavins, Andy; Muheim, Daniella; Paine, Christopher; Petach, Mike; Tenerelli, Domenick; Tolomeo, Jason; Walyus, Keith
2007-01-01
NASA's New Millennium Program funded an effort to develop a system cooling technology, which is applicable to all future infrared, sub-millimeter and millimeter cryogenic space telescopes. In particular, this technology is necessary for the proposed large space telescope Single Aperture Far-Infrared Telescope (SAFIR) mission. This technology will also enhance the performance and lower the risk and cost for other cryogenic missions. The new paradigm for cooling to low temperatures will involve passive cooling using lightweight deployable membranes that serve both as sunshields and V-groove radiators, in combination with active cooling using mechanical coolers operating down to 4 K. The Cooling Technology for Large Space Telescopes (LST) mission planned to develop and demonstrate a multi-layered sunshield, which is actively cooled by a multi-stage mechanical cryocooler, and further the models and analyses critical to scaling to future missions. The outer four layers of the sunshield cool passively by radiation, while the innermost layer is actively cooled to enable the sunshield to decrease the incident solar irradiance by a factor of more than one million. The cryocooler cools the inner layer of the sunshield to 20 K, and provides cooling to 6 K at a telescope mounting plate. The technology readiness level (TRL) of 7 will be achieved by the active cooling technology following the technology validation flight in Low Earth Orbit. In accordance with the New Millennium charter, tests and modeling are tightly integrated to advance the technology and the flight design for "ST-class" missions. Commercial off-the-shelf engineering analysis products are used to develop validated modeling capabilities to allow the techniques and results from LST to apply to a wide variety of future missions. The LST mission plans to "rewrite the book" on cryo-thermal testing and modeling techniques, and validate modeling techniques to scale to future space telescopes such as SAFIR.
Sensitivity study of Space Station Freedom operations cost and selected user resources
NASA Technical Reports Server (NTRS)
Accola, Anne; Fincannon, H. J.; Williams, Gregory J.; Meier, R. Timothy
1990-01-01
The results of sensitivity studies performed to estimate probable ranges for four key Space Station parameters using the Space Station Freedom's Model for Estimating Space Station Operations Cost (MESSOC) are discussed. The variables examined are grouped into five main categories: logistics, crew, design, space transportation system, and training. The modification of these variables implies programmatic decisions in areas such as orbital replacement unit (ORU) design, investment in repair capabilities, and crew operations policies. The model utilizes a wide range of algorithms and an extensive trial logistics data base to represent Space Station operations. The trial logistics data base consists largely of a collection of the ORUs that comprise the mature station, and their characteristics based on current engineering understanding of the Space Station. A nondimensional approach is used to examine the relative importance of variables on parameters.
Large scale structure in universes dominated by cold dark matter
NASA Technical Reports Server (NTRS)
Bond, J. Richard
1986-01-01
The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.
1994 Annual Tropical Cyclone Report
1995-01-01
force winds exist near the center. . . . The NOGAPS model does not analyze Tropical Depression 20W as a distinct feature, nor does it develop the...NOGAPS model for very small westward-moving trop- ical cyclones (Figure 3-20-8). According to Carr, NOGAPS effective grid spacing is too large to properly...analyze a very small to small tropical cyclone. The bogus vortex inserted into the analysis starts out too large and usually expands if the model
Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations
2013-01-01
In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit, in uniform convergence on compacts in probability, for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328
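The macroscopic limit object named above is the Wilson–Cowan equation. A minimal forward-Euler integration of a one-population version illustrates what the law of large numbers converges to; the sigmoidal gain and all parameter values below are illustrative, not taken from the study.

```python
import math

def sigmoid(x):
    """Sigmoidal population gain function F."""
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan(steps=5000, dt=0.01, w=8.0, theta=4.0, tau=1.0, I=1.0):
    """Forward-Euler integration of a one-population Wilson-Cowan
    rate equation:
        tau * d(nu)/dt = -nu + F(w * nu - theta + I)
    where nu is the population activity, w the recurrent coupling,
    theta the threshold, and I an external input."""
    nu = 0.0
    for _ in range(steps):
        nu += (dt / tau) * (-nu + sigmoid(w * nu - theta + I))
    return nu

rate = wilson_cowan()
# activity stays inside the sigmoid's range and settles at a fixed point
assert 0.0 <= rate <= 1.0
assert abs(rate - sigmoid(8.0 * rate - 3.0)) < 1e-3
```

In the paper's setting this deterministic equation is the infinite-population limit; the central limit theorem then supplies Gaussian fluctuations around such trajectories, yielding the neural field Langevin equation.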
NASA Technical Reports Server (NTRS)
Hsia, Wei-Shen
1986-01-01
In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
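The error measure described above, differences between predicted and measured eigenvalues, can be sketched as per-mode fractional frequency errors summarized by a mean (bias) and a standard deviation (scatter). The four mode frequencies below are hypothetical illustration data, not values from the paper.

```python
import math

def frequency_error_stats(predicted_hz, measured_hz):
    """Fractional modeling error per mode, e_i = (f_pred - f_meas) / f_meas,
    summarized by its sample mean (bias) and standard deviation (scatter)."""
    errors = [(fp - fm) / fm for fp, fm in zip(predicted_hz, measured_hz)]
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / (n - 1)  # unbiased
    return mean, math.sqrt(var)

# hypothetical first four global modes: pre-test model vs. modal survey
predicted = [12.1, 18.4, 25.0, 31.2]
measured = [11.5, 18.9, 24.2, 30.0]
bias, scatter = frequency_error_stats(predicted, measured)
```

Pooling such statistics over several structures of the same generic category is what lets the resulting model serve as an uncertainty estimate for predicted modes of a new, untested structure in that category.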
Cost Modeling for Space Optical Telescope Assemblies
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2011-01-01
Parametric cost models are used to plan missions, compare concepts and justify technology investments. This paper reviews an on-going effort to develop cost models for space telescopes. This paper summarizes the methodology used to develop cost models and documents how changes to the database have changed previously published preliminary cost models. While the cost models are evolving, the previously published findings remain valid: it costs less per square meter of collecting aperture to build a large telescope than a small telescope; technology development as a function of time reduces cost; and lower areal density telescopes cost more than more massive telescopes.
A Scalable Approach to Probabilistic Latent Space Inference of Large-Scale Networks
Yin, Junming; Ho, Qirong; Xing, Eric P.
2014-01-01
We propose a scalable approach for making inference about latent spaces of large networks. With a succinct representation of networks as a bag of triangular motifs, a parsimonious statistical model, and an efficient stochastic variational inference algorithm, we are able to analyze real networks with over a million vertices and hundreds of latent roles on a single machine in a matter of hours, a setting that is out of reach for many existing methods. When compared to the state-of-the-art probabilistic approaches, our method is several orders of magnitude faster, with competitive or improved accuracy for latent space recovery and link prediction. PMID:25400487
NASA Technical Reports Server (NTRS)
Tinker, Michael L.
1998-01-01
Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRF) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary tools for model correlation that distinguish or characterize the residual flexibility approach. A number of critical issues related to use of the structure interface FRF for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming problems with measurement noise, which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped mass perturbations to bring the analytical responses into agreement with test data. It is shown that good comparison of analytical-to-experimental FRF is the key to obtaining good agreement of the residual flexibility values.
Consequence modeling using the fire dynamics simulator.
Ryder, Noah L; Sutula, Jason A; Schemel, Christopher F; Hamer, Andrew J; Van Brunt, Vincent
2004-11-11
The use of Computational Fluid Dynamics (CFD) and in particular Large Eddy Simulation (LES) codes to model fires provides an efficient tool for the prediction of large-scale effects that include plume characteristics, combustion product dispersion, and heat effects to adjacent objects. This paper illustrates the strengths of the Fire Dynamics Simulator (FDS), an LES code developed by the National Institute of Standards and Technology (NIST), through several small and large-scale validation runs and process safety applications. The paper presents two fire experiments: a small room fire and a large (15 m diameter) pool fire. The model results are compared to experimental data and demonstrate good agreement between the models and data. The validation work is then extended to demonstrate applicability to process safety concerns by detailing a model of a tank farm fire and a model of the ignition of a gaseous fuel in a confined space. In the latter simulation, a room was filled with propane, given time to disperse, and was then ignited. The model yields accurate results of the dispersion of the gas throughout the space. This information can be used to determine flammability and explosive limits in a space and can be used in subsequent models to determine the pressure and temperature waves that would result from an explosion. The model dispersion results were compared to an experiment performed by Factory Mutual. Using the above examples, this paper will demonstrate that FDS is ideally suited to build realistic models of process geometries in which large scale explosion and fire failure risks can be evaluated with several distinct advantages over more traditional CFD codes.
Namely, transient solutions to fire and explosion growth can be produced with less sophisticated hardware (lower cost) than needed for traditional CFD codes (PC-type computer versus UNIX workstation) and can be solved for longer time histories (on the order of hundreds of seconds of computed time) with minimal computer resources and model run time. Additionally, results can be analyzed, viewed, and tabulated during and after a model run within a PC environment. There are some tradeoffs, however, as rapid computation on PCs may require a sacrifice in grid resolution or in the sub-grid modeling, depending on the size of the geometry modeled.
Control system design for the large space systems technology reference platform
NASA Technical Reports Server (NTRS)
Edmunds, R. S.
1982-01-01
Structural models and classical frequency domain control system designs were developed for the large space systems technology (LSST) reference platform, which consists of a central bus structure, solar panels, and platform arms on which a variety of experiments may be mounted. It is shown that operation of multiple independently articulated payloads on a single platform presents major problems when sub-arcsecond pointing stability is required. Experiment compatibility will be an important operational consideration for systems of this type.
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; ...
2015-05-27
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
Combining Static Analysis and Model Checking for Software Analysis
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)
2003-01-01
We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point at which time the partial order information is safe and the whole state space is explored.
Numerical Experimentation with Maximum Likelihood Identification in Static Distributed Systems
NASA Technical Reports Server (NTRS)
Scheid, R. E., Jr.; Rodriguez, G.
1985-01-01
Many important issues in the control of large space structures are intimately related to the fundamental problem of parameter identification. One might also ask how well this identification process can be carried out in the presence of noisy data, since no sensor system is perfect. With these considerations in mind, the algorithms herein are designed to treat both the case of uncertainties in the modeling and uncertainties in the data. The analytical aspects of maximum likelihood identification are considered in some detail in another paper. The questions relevant to the implementation of these schemes are dealt with here, particularly as they apply to models of large space structures. The emphasis is on the influence of the infinite-dimensional character of the problem on finite-dimensional implementations of the algorithms. Areas of current and future analysis that indicate the interplay between error analysis and possible truncations of the state and parameter spaces are highlighted.
Spacecraft Dynamics and Control Program at AFRPL
NASA Technical Reports Server (NTRS)
Das, A.; Slimak, L. K. S.; Schloegel, W. T.
1986-01-01
A number of future DOD and NASA spacecraft, such as the space based radar, will not only be an order of magnitude larger in dimension than current spacecraft but will also exhibit extreme structural flexibility with very low structural vibration frequencies. Another class of spacecraft (such as the space defense platforms) will combine large physical size with extremely precise pointing requirements. Such problems require a total departure from the traditional methods of modeling and control system design of spacecraft, where structural flexibility is treated as a secondary effect. With these problems in mind, the Air Force Rocket Propulsion Laboratory (AFRPL) initiated research to develop dynamics and control technology to enable future large space structures (LSS). AFRPL's effort in this area can be subdivided into the following three overlapping areas: (1) ground experiments, (2) spacecraft modeling and control, and (3) sensors and actuators. Both the in-house and contractual efforts of the AFRPL in LSS are summarized.
Herbei, Radu; Kubatko, Laura
2013-03-26
Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
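The quantity being estimated above can be computed exactly for toy chains, which gives a useful point of reference for any Monte Carlo estimator. A minimal sketch in Python, computing the exact total variation distance to stationarity for a small illustrative 3-state chain (the transition matrix is an assumption for illustration, not from the paper):

```python
import numpy as np

def tv_distance(p, q):
    """Exact total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

def chain_distribution(P, mu0, t):
    """Distribution of a Markov chain after t steps from initial distribution mu0."""
    mu = mu0.copy()
    for _ in range(t):
        mu = mu @ P
    return mu

# Small illustrative 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# Stationary distribution: normalized left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

mu0 = np.array([1.0, 0.0, 0.0])  # chain started deterministically in state 0
dists = {t: tv_distance(chain_distribution(P, mu0, t), pi) for t in (1, 5, 20)}
for t, d in dists.items():
    print(t, round(d, 6))  # distance to stationarity shrinks as t grows
```

For a chain on a very large state space (such as the space of phylogenetic trees), the vectors `mu` and `pi` cannot be enumerated, which is exactly the situation the paper's Monte Carlo estimator addresses.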
Modeling and Analysis of Large Amplitude Flight Maneuvers
NASA Technical Reports Server (NTRS)
Anderson, Mark R.
2004-01-01
Analytical methods for stability analysis of large amplitude aircraft motion have been slow to develop because many nonlinear system stability assessment methods are restricted to a state-space dimension of less than three. The proffered approach is to create regional cell-to-cell maps for strategically located two-dimensional subspaces within the higher-dimensional model state space. These regional solutions capture nonlinear behavior better than linearized point solutions. They also avoid the computational difficulties that emerge when attempting to create a cell map for the entire state space. Example stability results are presented for a general aviation aircraft and a micro-aerial vehicle configuration. The analytical results are consistent with characteristics that were discovered during previous flight-testing.
NASA Astrophysics Data System (ADS)
Buessen, Finn Lasse; Roscher, Dietrich; Diehl, Sebastian; Trebst, Simon
2018-02-01
The pseudofermion functional renormalization group (pf-FRG) is one of the few numerical approaches that has been demonstrated to quantitatively determine the ordering tendencies of frustrated quantum magnets in two and three spatial dimensions. The approach, however, relies on a number of presumptions and approximations, in particular the choice of pseudofermion decomposition and the truncation of an infinite number of flow equations to a finite set. Here we generalize the pf-FRG approach to SU(N) spin systems with arbitrary N and demonstrate that the scheme becomes exact in the large-N limit. Numerically solving the generalized real-space renormalization group equations for arbitrary N, we can make a stringent connection between the physically most significant case of SU(2) spins and more accessible SU(N) models. In a case study of the square-lattice SU(N) Heisenberg antiferromagnet, we explicitly demonstrate that the generalized pf-FRG approach is capable of identifying the instability indicating the transition into a staggered flux spin liquid ground state in these models for large, but finite, values of N. In a companion paper [Roscher et al., Phys. Rev. B 97, 064416 (2018), 10.1103/PhysRevB.97.064416] we formulate a momentum-space pf-FRG approach for SU(N) spin models that allows us to explicitly study the large-N limit and access the low-temperature spin liquid phase.
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse
A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of a set of observables. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show that the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a 'rectified' data-driven algorithm that is fast and, by sampling from the parameter posterior, avoids both under- and over-fitting along all directions of the parameter space. Through the learning of pairwise Ising models from recordings of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a grant from the Human Brain Project (HBP CLAP).
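The starting point of the learning dynamics described above is moment-matching gradient ascent on the log-likelihood. A minimal sketch for an independent-spin maximum entropy model, where the model averages are analytic (the paper's setting is pairwise Ising with Gibbs sampling; the biases h_true and all sizes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: three independent +/-1 "spins" with assumed (illustrative) biases.
h_true = np.array([0.5, -0.3, 0.8])
# For p(s_i) proportional to exp(h_i * s_i), P(s_i = +1) = (1 + tanh h_i) / 2.
data = np.where(rng.random((10000, 3)) < 0.5 * (1.0 + np.tanh(h_true)), 1, -1)
target = data.mean(axis=0)  # experimental averages the model must reproduce

# Steepest (gradient) ascent on the log-likelihood:
#   d log L / d h_i = <s_i>_data - <s_i>_model,  with <s_i>_model = tanh(h_i).
h = np.zeros(3)
lr = 0.5
for _ in range(2000):
    h += lr * (target - np.tanh(h))

print(np.round(h, 2))  # recovers values close to h_true for a large dataset
```

In the pairwise Ising case the model averages are not analytic and must be estimated by Gibbs sampling, which introduces the stochasticity the paper analyzes.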
Operational Space Weather Activities in the US
NASA Astrophysics Data System (ADS)
Berger, Thomas; Singer, Howard; Onsager, Terrance; Viereck, Rodney; Murtagh, William; Rutledge, Robert
2016-07-01
We review the current activities in the civil operational space weather forecasting enterprise of the United States. The NOAA/Space Weather Prediction Center is the nation's official source of space weather watches, warnings, and alerts, working with partners in the Air Force as well as international operational forecast services to provide predictions, data, and products on a large variety of space weather phenomena and impacts. In October 2015, the White House Office of Science and Technology Policy released the National Space Weather Strategy (NSWS) and associated Space Weather Action Plan (SWAP) that define how the nation will better forecast, mitigate, and respond to an extreme space weather event. The SWAP defines actions involving multiple federal agencies and mandates coordination and collaboration with academia, the private sector, and international bodies to, among other things, develop and sustain an operational space weather observing system; develop and deploy new models of space weather impacts to critical infrastructure systems; define new mechanisms for the transition of research models to operations and to ensure that the research community is supported for, and has access to, operational model upgrade paths; and to enhance fundamental understanding of space weather through support of research models and observations. The SWAP will guide significant aspects of space weather operational and research activities for the next decade, with opportunities to revisit the strategy in the coming years through the auspices of the National Science and Technology Council.
Not-so-well-tempered neutralino
NASA Astrophysics Data System (ADS)
Profumo, Stefano; Stefaniak, Tim; Stephenson-Haskins, Laurel
2017-09-01
Light electroweakinos, the neutral and charged fermionic supersymmetric partners of the standard model SU(2)×U(1) gauge bosons and of the two SU(2) Higgs doublets, are an important target for searches for new physics with the Large Hadron Collider (LHC). However, if the lightest neutralino is the dark matter, constraints from direct dark matter detection experiments rule out large swaths of the parameter space accessible to the LHC, including in large part the so-called "well-tempered" neutralinos. We focus on the minimal supersymmetric standard model (MSSM) and explore in detail which regions of parameter space are not excluded by null results from direct dark matter detection, assuming exclusive thermal production of neutralinos in the early universe, and illustrate the complementarity with current and future LHC searches for electroweak gauginos. We consider both bino-Higgsino and bino-wino "not-so-well-tempered" neutralinos, i.e. we include models where the lightest neutralino constitutes only part of the cosmological dark matter, with the consequent suppression of the constraints from direct and indirect dark matter searches.
Pleticha, Josef; Maus, Timothy P; Jeng-Singh, Christian; Marsh, Michael P; Al-Saiegh, Fadi; Christner, Jodie A; Lee, Kendall H; Beutler, Andreas S
2013-05-30
Intrathecal (IT) administration is an important route of drug delivery, and its modelling in a large animal species is of critical value. Although domestic swine is the preferred species for preclinical pharmacology, no minimally invasive method has been established to deliver agents into the IT space. While a "blind" lumbar puncture (LP) can sample cerebrospinal fluid (CSF), it is unreliable for drug delivery in pigs. Using computed tomography (CT), we determined the underlying anatomical reasons for this irregularity. The pig spinal cord was visualised terminating at the S2-S3 level. The lumbar region contained only small amounts of CSF found in the lateral recess. Additional anatomical constraints included ossification of the midline ligaments, overlapping lamina with small interlaminar spaces, and a large bulk of epidural adipose tissue. Accommodating the pig CT anatomy, we developed a lateral LP (LLP) injection technique that employs advanced planning of the needle path and monitoring of the IT injection progress. The key features of the LLP procedure involved choosing a vertebral level without overlapping lamina or spinal ligament ossification, a needle trajectory crossing the midline, and entering the IT space in its lateral recess. Effective IT delivery was validated by the injection of contrast media to obtain a CT myelogram. LLP represents a safe and reliable method to deliver agents to the lumbar pig IT space, which can be implemented in a straightforward way by any laboratory with access to CT equipment. Therefore, LLP is an attractive large animal model for preclinical studies of IT therapies. Copyright © 2013 Elsevier B.V. All rights reserved.
Yu, Lei; Kang, Jian
2009-09-01
This research aims to explore the feasibility of using computer-based models to predict the soundscape quality evaluation of potential users in urban open spaces at the design stage. With the data from large scale field surveys in 19 urban open spaces across Europe and China, the importance of various physical, behavioral, social, demographical, and psychological factors for the soundscape evaluation has been statistically analyzed. Artificial neural network (ANN) models have then been explored at three levels. It has been shown that for both subjective sound level and acoustic comfort evaluation, a general model for all the case study sites is less feasible due to the complex physical and social environments in urban open spaces; models based on individual case study sites perform well but the application range is limited; and specific models for certain types of location/function would be reliable and practical. The performance of acoustic comfort models is considerably better than that of sound level models. Based on the ANN models, soundscape quality maps can be produced and this has been demonstrated with an example.
Brace, Christopher L; Laeseke, Paul F; Sampson, Lisa A; Frey, Tina M; van der Weide, Daniel W; Lee, Fred T
2007-07-01
To prospectively investigate the ability of a single generator to power multiple small-diameter antennas and create large zones of ablation in an in vivo swine liver model. Thirteen female domestic swine (mean weight, 70 kg) were used for the study as approved by the animal care and use committee. A single generator was used to simultaneously power three triaxial antennas at 55 W per antenna for 10 minutes in three groups: a control group where antennas were spaced to eliminate ablation zone overlap (n=6; 18 individual zones of ablation) and experimental groups where antennas were spaced 2.5 cm (n=7) or 3.0 cm (n=5) apart. Animals were euthanized after ablation, and ablation zones were sectioned and measured. A mixed linear model was used to test for differences in size and circularity among groups. Mean (+/-standard deviation) cross-sectional areas of multiple-antenna zones of ablation at 2.5- and 3.0-cm spacing (26.6 cm(2) +/- 9.7 and 32.2 cm(2) +/- 8.1, respectively) were significantly larger than individual ablation zones created with single antennas (6.76 cm(2) +/- 2.8, P<.001) and were 31% (2.5-cm spacing group: multiple antenna mean area, 26.6 cm(2); 3 x single antenna mean area, 20.28 cm(2)) to 59% (3.0-cm spacing group: multiple antenna mean area, 32.2 cm(2); 3 x single antenna mean area, 20.28 cm(2)) larger than 3 times the mean area of the single-antenna zones. Zones of ablation were found to be very circular, and vessels as large as 1.1 cm were completely coagulated with multiple antennas. A single generator may effectively deliver microwave power to multiple antennas. Large volumes of tissue may be ablated and large vessels coagulated with multiple-antenna ablation in the same time as single-antenna ablation. (c) RSNA, 2007.
Heating of large format filters in sub-mm and FIR space optics
NASA Astrophysics Data System (ADS)
Baccichet, N.; Savini, G.
2017-11-01
Most FIR and sub-mm spaceborne observatories use polymer-based quasi-optical elements like filters and lenses, due to their high transparency and low absorption in these wavelength ranges. Nevertheless, data from those missions have proven that thermal imbalances in the instrument (not caused by filters) can complicate the data analysis. Consequently, for future, higher precision instrumentation, further investigation is required on any thermal imbalances embedded in such polymer-based filters. Particularly, in this paper the heating of polymers when operating at cryogenic temperature in space will be studied. Such phenomenon is an important aspect of their functioning since the transient emission of unwanted thermal radiation may affect the scientific measurements. To assess this effect, a computer model was developed for polypropylene-based filters and PTFE-based coatings. Specifically, a theoretical model of their thermal properties was created and used in a multi-physics simulation that accounts for conductive and radiative heating effects of large optical elements, the geometry of which was suggested by the large format array instruments designed for future space missions. It was found that in the simulated conditions the filter temperature exhibited time-dependent behaviour modulated by a small-scale fluctuation. Moreover, it was noticed that thermalization was reached only when a low power input was present.
A k-space method for acoustic propagation using coupled first-order equations in three dimensions.
Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C
2009-09-01
A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
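The first feature listed, spectral calculation of spatial derivatives, is straightforward to illustrate in one dimension; the same idea extends to three dimensions via a 3-D FFT. A minimal sketch (grid size and test function are illustrative assumptions):

```python
import numpy as np

# Spectral (k-space) evaluation of a spatial derivative on a periodic grid:
#   d/dx f = IFFT( i * k * FFT(f) ),  with k the angular wavenumbers.
n = 128
L = 2 * np.pi
x = np.arange(n) * (L / n)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

f = np.sin(3 * x)
df_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
df_exact = 3 * np.cos(3 * x)

err = np.max(np.abs(df_spectral - df_exact))
print(err)  # near machine precision for a band-limited field
```

For smooth, band-limited fields this spectral derivative is exact to rounding error, which is why the method avoids the numerical dispersion of finite-difference stencils; in three dimensions the FFT becomes the all-to-all communication bottleneck the abstract describes.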
Z boson mediated dark matter beyond the effective theory
Kearney, John; Orlofsky, Nicholas; Pierce, Aaron
2017-02-17
Here, direct detection bounds are beginning to constrain a very simple model of weakly interacting dark matter—a Majorana fermion with a coupling to the Z boson. In a particularly straightforward gauge-invariant realization, this coupling is introduced via a higher-dimensional operator. While attractive in its simplicity, this model generically induces a large ρ parameter. An ultraviolet completion that avoids an overly large contribution to ρ is the singlet-doublet model. We revisit this model, focusing on the Higgs blind spot region of parameter space where spin-independent interactions are absent. This model successfully reproduces dark matter with direct detection mediated by the Z boson but whose cosmology may depend on additional couplings and states. Future direct detection experiments should effectively probe a significant portion of this parameter space, aside from a small coannihilating region. As such, Z-mediated thermal dark matter as realized in the singlet-doublet model represents an interesting target for future searches.
Model for large magnetoresistance effect in p–n junctions
NASA Astrophysics Data System (ADS)
Cao, Yang; Yang, Dezheng; Si, Mingsu; Shi, Huigang; Xue, Desheng
2018-06-01
We present a simple model based on the classic Shockley model to explain magnetotransport in nonmagnetic p–n junctions. Under a magnetic field, the redistribution of carriers to compensate the Lorentz force establishes the necessary space-charge region distribution. The calculated current–voltage (I–V) characteristics under various magnetic fields demonstrate that a conventional nonmagnetic p–n junction can exhibit an extremely large magnetoresistance effect, which is even larger than that in magnetic materials. Because the large magnetoresistance effect that we discuss is based on the conventional p–n junction device, our model provides new insight into the development of semiconductor magnetoelectronics.
Space Shuttle critical function audit
NASA Technical Reports Server (NTRS)
Sacks, Ivan J.; Dipol, John; Su, Paul
1990-01-01
A large fault-tolerance model of the main propulsion system of the US space shuttle has been developed. This model is being used to identify single components and pairs of components that will cause loss of shuttle critical functions. In addition, this model is the basis for risk quantification of the shuttle. The process used to develop and analyze the model is digraph matrix analysis (DMA). The DMA modeling and analysis process is accessed via a graphics-based computer user interface. This interface provides coupled display of the integrated system schematics, the digraph models, the component database, and the results of the fault tolerance and risk analyses.
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
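The parameter-space reduction described above can be illustrated on a linear-Gaussian toy problem, where with an identity prior the likelihood-informed subspace is spanned by the leading right singular vectors of the forward operator. A minimal sketch (all dimensions, the operator G, and the noise level are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear-Gaussian toy inverse problem: y = G x + e, x ~ N(0, I), e ~ N(0, s^2 I).
d_param, d_data, true_rank = 50, 40, 5
G = rng.standard_normal((d_data, true_rank)) @ rng.standard_normal((true_rank, d_param))
x_true = rng.standard_normal(d_param)
s = 0.1
y = G @ x_true + s * rng.standard_normal(d_data)

# Likelihood-informed parameter subspace: leading right singular vectors of G.
U, S, Vt = np.linalg.svd(G, full_matrices=False)
r = int(np.sum(S > 1e-8))  # effective rank of the forward operator
V_r = Vt[:r].T             # (d_param, r) basis of the informed subspace

# Posterior mean restricted to the subspace; on the complement the posterior
# equals the prior, whose mean is zero.
Gr = G @ V_r
mean_r = np.linalg.solve(Gr.T @ Gr / s**2 + np.eye(r), Gr.T @ y / s**2)
x_map = V_r @ mean_r
print(r, x_map.shape)
```

Because the prior is the identity here, the exact posterior mean already lies in this r-dimensional subspace; for nontrivial priors the informative directions must also account for the prior covariance.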
NASA Astrophysics Data System (ADS)
Bouya, Zahra; Terkildsen, Michael
2016-07-01
The Australian Space Forecast Centre (ASFC) provides space weather forecasts to a diverse group of customers. Space Weather Services (SWS) within the Australian Bureau of Meteorology is focussed both on developing tailored products and services for the key customer groups and on supporting ASFC operations. Research in SWS is largely centred on the development of data-driven models using a range of solar-terrestrial data. This paper covers data requirements, approaches, and recent SWS activities for data-driven modelling, with a focus on regional ionospheric specification and forecasting.
2011-12-11
CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model that was on display at the NASA Kennedy Space Center Visitor Complex in Florida negotiates the on-ramp at the intersection of NASA Causeway and Kennedy Parkway to gain entrance to the northbound roadways on the center. It is standard procedure for large payloads and equipment to travel against the normal flow of traffic under the supervision of a move crew when being transported on or off center property. The model is being moved from the visitor complex to NASA Kennedy Space Center's Launch Complex 39 turn basin. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis
Overview of the SHIELDS Project at LANL
NASA Astrophysics Data System (ADS)
Jordanova, V.; Delzanno, G. L.; Henderson, M. G.; Godinez, H. C.; Jeffery, C. A.; Lawrence, E. C.; Meierbachtol, C.; Moulton, D.; Vernon, L.; Woodroffe, J. R.; Toth, G.; Welling, D. T.; Yu, Y.; Birn, J.; Thomsen, M. F.; Borovsky, J.; Denton, M.; Albert, J.; Horne, R. B.; Lemon, C. L.; Markidis, S.; Young, S. L.
2015-12-01
The near-Earth space environment is a highly dynamic system, coupled through a complex set of physical processes over a large range of scales, that responds nonlinearly to driving by the time-varying solar wind. Predicting variations in this environment that can affect technologies in space and on Earth, i.e. "space weather", remains a major challenge in space physics. We present a recently funded project through the Los Alamos National Laboratory (LANL) Laboratory Directed Research and Development (LDRD) program that is developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms: the SHIELDS framework. The project goals are to specify the dynamics of the hot (keV) particles (the seed population for the radiation belts) on both macro- and micro-scales, including the important physics of rapid particle injection and acceleration associated with magnetospheric storms/substorms and plasma waves. This challenging problem is addressed by a team of world-class experts in space science and computational plasma physics, using state-of-the-art models and computational facilities. New data assimilation techniques employing data from LANL instruments on the Van Allen Probes and geosynchronous satellites are being developed in addition to physics-based models. This research will provide a framework for understanding key radiation-belt drivers that may accelerate particles to relativistic energies and lead to spacecraft damage and failure. The ability to reliably distinguish between various modes of failure is critically important in anomaly resolution and forensics. SHIELDS will enhance our capability to accurately specify and predict the near-Earth space environment where operational satellites reside.
The Microgravity Vibration Isolation Mount: A Dynamic Model for Optimal Controller Design
NASA Technical Reports Server (NTRS)
Hampton, R. David; Tryggvason, Bjarni V.; DeCarufel, Jean; Townsend, Miles A.; Wagar, William O.
1997-01-01
Vibration acceleration levels on large space platforms exceed the requirements of many space experiments. The Microgravity Vibration Isolation Mount (MIM) was built by the Canadian Space Agency to attenuate these disturbances to acceptable levels, and has been operational on the Russian Space Station Mir since May 1996. It has demonstrated good isolation performance and has supported several materials science experiments. The MIM uses Lorentz (voice-coil) magnetic actuators to levitate and isolate payloads at the individual experiment/sub-experiment (versus rack) level. Payload acceleration, relative position, and relative orientation (Euler-parameter) measurements are fed to a state-space controller. The controller, in turn, determines the actuator currents needed for effective experiment isolation. This paper presents the development of an algebraic, state-space model of the MIM, in a form suitable for optimal controller design.
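A state-space model of this kind feeds naturally into optimal controller design. As a minimal illustrative sketch only, with a hypothetical single-axis levitated payload rather than the MIM's actual multi-degree-of-freedom model, the following computes an LQR feedback gain for a voice-coil force input:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical single-axis levitated payload: state x = [position, velocity],
# input u = voice-coil force. Illustrative numbers, not the MIM's model.
m = 10.0                       # payload mass in kg (assumed)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])     # double-integrator plant dynamics
B = np.array([[0.0],
              [1.0 / m]])

Q = np.diag([100.0, 1.0])      # state weights (assumed)
R = np.array([[0.01]])         # control-effort weight (assumed)

# Optimal gain from the continuous-time algebraic Riccati equation: u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed loop A - B K is stable (all eigenvalues in the left half-plane).
eigs = np.linalg.eigvals(A - B @ K)
```

Any controllable plant with positive-definite weights yields a stabilizing gain this way; a full isolation-mount model would simply supply larger A and B matrices.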
NSF's Perspective on Space Weather Research for Building Forecasting Capabilities
NASA Astrophysics Data System (ADS)
Bisi, M. M.; Pulkkinen, A. A.; Bisi, M. M.; Pulkkinen, A. A.; Webb, D. F.; Oughton, E. J.; Azeem, S. I.
2017-12-01
Space weather research at the National Science Foundation (NSF) is focused on scientific discovery and on deepening knowledge of the Sun-geospace system. Maturation of this knowledge base is a prerequisite for the development of improved space weather forecast models and for the accurate assessment of potential mitigation strategies. Progress in space weather forecasting requires advancing in-depth understanding of the underlying physical processes, developing better instrumentation and measurement techniques, and capturing the advancements in understanding in large-scale physics-based models that span the entire chain of events from the Sun to the Earth. This presentation provides an overview of current and planned space weather research programs at NSF and discusses the recommendations of the Geospace Section portfolio review panel within the context of space weather forecasting capabilities.
Glovebox Integrated Microgravity Isolation Technology (g-LIMIT): A Linearized State-Space Model
NASA Technical Reports Server (NTRS)
Hampton, R. David; Calhoun, Philip C.; Whorton, Mark S.
2001-01-01
Vibration acceleration levels on large space platforms exceed the requirements of many space experiments. The Glovebox Integrated Microgravity Isolation Technology (g-LIMIT) is being built by the NASA Marshall Space Flight Center to attenuate these disturbances to acceptable levels. G-LIMIT uses Lorentz (voice-coil) magnetic actuators to levitate and isolate payloads at the individual experiment/sub-experiment (versus rack) level. Payload acceleration, relative position, and relative orientation measurements are fed to a state-space controller. The controller, in turn, determines the actuator currents needed for effective experiment isolation. This paper presents the development of an algebraic, state-space model of g-LIMIT, in a form suitable for optimal controller design. The equations are first derived using Newton's second law directly, then simplified to a linear form for the purpose of controller design.
Large Terrain Modeling and Visualization for Planets
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher
2011-01-01
Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulation is handling large multi-resolution terrain data sets, both within models and for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time, GPU-based continuous level-of-detail method that delivers multiple-frames-per-second performance even for planetary-scale terrain models.
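The level-of-detail strategy can be caricatured by a distance-based tile selector: terrain farther from the viewer is drawn at coarser resolution. This is an illustrative sketch with hypothetical function names and parameters, not the paper's GPU-based continuous-LOD algorithm:

```python
import math

def lod_level(distance_m, base_resolution_m=1.0, max_level=10):
    """Pick a terrain-tile detail level: 0 is finest, higher is coarser.

    Each level roughly halves resolution, so the level grows with the
    log of viewer distance. All parameters are illustrative assumptions.
    """
    if distance_m <= base_resolution_m:
        return 0
    level = int(math.log2(distance_m / base_resolution_m))
    return min(level, max_level)
```

For example, a tile 8 m away gets level 3 (one-eighth resolution), while a tile on the planetary horizon is clamped to the coarsest level, which is what keeps frame rates bounded for planetary-scale data.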
Review of space radiation interaction with ZERODUR
NASA Astrophysics Data System (ADS)
Carré, Antoine; Westerhoff, Thomas; Hull, Tony; Doyle, D.
2017-09-01
ZERODUR has been, and is still being, successfully used as a mirror substrate material for a large number of space missions. Improvements in CNC machining at SCHOTT make it possible to achieve extremely lightweighted substrates incorporating very thin ribs and face sheets. This paper reviews published data on the interaction of space radiation with ZERODUR. Additionally, it reports on the considerations and experiments needed to confidently apply an updated model of ZERODUR behavior under space radiation to extremely lightweighted ZERODUR substrates.
Extracting Useful Semantic Information from Large Scale Corpora of Text
ERIC Educational Resources Information Center
Mendoza, Ray Padilla, Jr.
2012-01-01
Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
NASA Astrophysics Data System (ADS)
Chu, Zhongyi; Di, Jingnan; Cui, Jing
2017-10-01
Space debris occupies a valuable orbital resource and poses an inevitable and urgent problem, especially large space debris, because of its high risk and the possible crippling effects of a collision; it has attracted much attention in recent years. A tethered system used in an active debris removal scenario is a promising method to de-orbit large debris safely. In a tethered system, the flexibility of the tether can induce tangling, which is dangerous and should be avoided. In particular, attachment point bias due to capture error can significantly affect the motion of the debris relative to the tether and increase the tangling risk. Hence, in this paper, the effect of attachment point bias on the tethered system is studied using a dynamic model derived with a Newtonian approach. Next, a safety metric for avoiding a tangle when the tether is tensioned with attachment point bias is designed to analyse the tangling risk of the tethered system. Finally, several numerical cases are established and simulated to validate the effects of attachment point bias on a space tethered system.
An Evaluation of Cosmological Models from the Expansion and Growth of Structure Measurements
NASA Astrophysics Data System (ADS)
Zhai, Zhongxu; Blanton, Michael; Slosar, Anže; Tinker, Jeremy
2017-12-01
We compare a large suite of theoretical cosmological models to observational data from the cosmic microwave background, baryon acoustic oscillation measurements of expansion, Type Ia supernova measurements of expansion, redshift space distortion measurements of the growth of structure, and the local Hubble constant. Our theoretical models include parametrizations of dark energy as well as physical models of dark energy and modified gravity. We determine the constraints on the model parameters, incorporating the redshift space distortion data directly in the analysis. To determine whether models can be ruled out, we evaluate the p-value (the probability under the model of obtaining data as bad or worse than the observed data). In our comparison, we find the well-known tension of H0 with the other data; no model resolves this tension successfully. Among the models we consider, the large-scale growth of structure data does not affect the modified gravity models as a category particularly differently from dark energy models; it matters for some modified gravity models but not others, and the same is true for dark energy models. We compute predicted observables for each model under current observational constraints, and identify models for which future observational constraints will be particularly informative.
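The p-value criterion used for model rejection, the probability under the model of data as bad or worse than observed, can be estimated by simulation. The sketch below uses a chi-squared badness statistic with illustrative numbers; the paper's actual statistic and likelihood machinery differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_p_value(observed_chi2, dof, n_sims=100_000):
    """Monte Carlo p-value: fraction of simulated data sets, drawn under
    the model, whose badness (chi-squared here, as an assumed stand-in)
    is at least as bad as what was actually observed."""
    simulated = rng.chisquare(dof, size=n_sims)
    return np.mean(simulated >= observed_chi2)
```

A model with observed chi-squared far out in the tail of its own predicted distribution gets a tiny p-value and can be ruled out; a perfectly typical fit gets a p-value near one.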
Strongly contracted canonical transformation theory
NASA Astrophysics Data System (ADS)
Neuscamman, Eric; Yanai, Takeshi; Chan, Garnet Kin-Lic
2010-01-01
Canonical transformation (CT) theory describes dynamic correlation in multireference systems with large active spaces. Here we discuss CT theory's intruder state problem and why our previous approach of overlap matrix truncation becomes infeasible for sufficiently large active spaces. We propose the use of strongly and weakly contracted excitation operators as alternatives for dealing with intruder states in CT theory. The performance of these operators is evaluated for the H2O, N2, and NiO molecules, with comparisons made to complete active space second order perturbation theory and Davidson-corrected multireference configuration interaction theory. Finally, using a combination of strongly contracted CT theory and orbital-optimized density matrix renormalization group theory, we evaluate the singlet-triplet gap of free base porphin using an active space containing all 24 out-of-plane 2p orbitals. Modeling dynamic correlation with an active space of this size is currently only possible using CT theory.
Probabilistic failure assessment with application to solid rocket motors
NASA Technical Reports Server (NTRS)
Jan, Darrell L.; Davidson, Barry D.; Moore, Nicholas R.
1990-01-01
A quantitative methodology is being developed for assessment of risk of failure of solid rocket motors. This probabilistic methodology employs best available engineering models and available information in a stochastic framework. The framework accounts for incomplete knowledge of governing parameters, intrinsic variability, and failure model specification error. Earlier case studies have been conducted on several failure modes of the Space Shuttle Main Engine. Work in progress on application of this probabilistic approach to large solid rocket boosters such as the Advanced Solid Rocket Motor for the Space Shuttle is described. Failure due to debonding has been selected as the first case study for large solid rocket motors (SRMs) since it accounts for a significant number of historical SRM failures. Impact of incomplete knowledge of governing parameters and failure model specification errors is expected to be important.
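The stochastic framework described, propagating incomplete knowledge of governing parameters through an engineering failure model, can be sketched as a simple Monte Carlo stress-versus-strength calculation. The distributions and numbers below are purely illustrative assumptions, not an actual SRM debonding model:

```python
import numpy as np

rng = np.random.default_rng(1)

def failure_probability(n_samples=100_000):
    """Estimate failure probability by sampling uncertain parameters.

    Failure occurs when applied stress meets or exceeds bond strength.
    Both distributions are hypothetical placeholders for real
    engineering models and test data.
    """
    strength = rng.normal(100.0, 10.0, n_samples)  # bond strength (MPa, assumed)
    stress = rng.normal(70.0, 12.0, n_samples)     # applied stress (MPa, assumed)
    return np.mean(stress >= strength)

p_fail = failure_probability()
```

In a full assessment, each sampled parameter set would drive a physics-based failure model rather than a one-line comparison, but the structure of the estimate is the same.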
Advanced UVOIR Mirror Technology Development for Very Large Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2011-01-01
The objective of this work is to define and initiate a long-term program to mature six inter-linked critical technologies for future UVOIR space telescope mirrors to TRL6 by 2018 so that a viable flight mission can be proposed to the 2020 Decadal Review. (1) Large-Aperture, Low-Areal-Density, High-Stiffness Mirrors: 4 to 8 m monolithic and 8 to 16 m segmented primary mirrors require larger, thicker, stiffer substrates. (2) Support System: Large-aperture mirrors require large support systems to ensure that they survive launch and deploy on orbit in a stress-free and undistorted shape. (3) Mid/High Spatial Frequency Figure Error: A very smooth mirror is critical for producing a high-quality point spread function (PSF) for high-contrast imaging. (4) Segment Edges: Edges impact the PSF for high-contrast imaging applications, contribute to stray-light noise, and affect the total collecting aperture. (5) Segment-to-Segment Gap Phasing: Segment phasing is critical for producing a high-quality, temporally stable PSF. (6) Integrated Model Validation: On-orbit performance is determined by mechanical and thermal stability; future systems require validated performance models. We are pursuing multiple design paths to give the science community the option to enable either a future monolithic or segmented space telescope.
Distributed control of large space antennas
NASA Technical Reports Server (NTRS)
Cameron, J. M.; Hamidi, M.; Lin, Y. H.; Wang, S. J.
1983-01-01
A systematic way to choose control design parameters and to evaluate performance for large space antennas is presented. The structural dynamics and control properties of a Hoop and Column Antenna and a Wrap-Rib Antenna are characterized. Results on the effects of model parameter uncertainties on stability, surface accuracy, and pointing errors are presented. Critical dynamics and control problems for these antenna configurations are identified and potential solutions are discussed. It is concluded that structural uncertainties and model error can cause serious performance deterioration and can even destabilize the controllers. For the hoop and column antenna, the large hoop, the long mast, and the lack of stiffness between the two substructures result in low structural frequencies. Performance can be improved if this design is strengthened. The two-site control system is more robust than either single-site control system for the hoop and column antenna.
New optical and radio frequency angular tropospheric refraction models for deep space applications
NASA Technical Reports Server (NTRS)
Berman, A. L.; Rockwell, S. T.
1976-01-01
The development of angular tropospheric refraction models for optical and radio frequency usage is presented. The models are compact analytic functions, finite over the entire domain of elevation angle, and accurate over large ranges of pressure, temperature, and relative humidity. Additionally, FORTRAN subroutines for each of the models are included.
GW/Bethe-Salpeter calculations for charged and model systems from real-space DFT
NASA Astrophysics Data System (ADS)
Strubbe, David A.
GW and Bethe-Salpeter (GW/BSE) calculations use mean-field input from density-functional theory (DFT) calculations to compute excited states of a condensed-matter system. Many parts of a GW/BSE calculation are efficiently performed in a plane-wave basis, and extensive effort has gone into optimizing and parallelizing plane-wave GW/BSE codes for large-scale computations. Most straightforwardly, plane-wave DFT can be used as a starting point, but real-space DFT is also an attractive starting point: it is systematically convergeable like plane waves, can take advantage of efficient domain parallelization for large systems, and is well suited physically for finite and especially charged systems. The flexibility of a real-space grid also allows convenient calculations on non-atomic model systems. I will discuss the interfacing of a real-space (TD)DFT code (Octopus, www.tddft.org/programs/octopus) with a plane-wave GW/BSE code (BerkeleyGW, www.berkeleygw.org), consider performance issues and accuracy, and present some applications to simple and paradigmatic systems that illuminate fundamental properties of these approximations in many-body perturbation theory.
Large-Scale NASA Science Applications on the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Brooks, Walter
2005-01-01
Columbia, NASA's newest 61-teraflop supercomputer, which became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, the computer industry, and academia to create a national resource in large-scale modeling and simulation.
2011-12-11
CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model that was on display at the NASA Kennedy Space Center Visitor Complex in Florida negotiates the turn from Kennedy Parkway onto Schwartz Road on its way toward NASA Kennedy Space Center's Launch Complex 39 turn basin. It is standard procedure for large payloads and equipment to travel against the normal flow of traffic under the supervision of a move crew when being transported on or off center property. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis
NASA Technical Reports Server (NTRS)
Solomon, S. C.; Comer, R. P.; Head, J. W.
1982-01-01
A topographic profile of the young large lunar basin Orientale is presented in order to examine the effects of viscous relaxation on basin topography. Analytical models for viscous flow are considered, showing the dependence of the time constants for viscous decay on wavelength, on the decrease in viscosity with depth, and on the extent of isostatic compensation of the initial topography. The lunar rheological models developed include a half-space of uniform Newtonian viscosity, density, and gravitational acceleration; a viscous layer overlying a half space whose material is effectively inviscid over geological time scales; and an isostatically compensated case in which a uniformly viscous layer overlies an inviscid half space of higher density. Greater roughness is concluded, and has been observed, on the moon's far side, due to continued lower temperatures since the time of heavy bombardment.
Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs
NASA Astrophysics Data System (ADS)
Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur
2018-03-01
A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d ≥ 2. Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
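For intuition, direct random sequential adsorption is easy to simulate even though its accepted fraction resists closed-form analysis. The toy sketch below deposits disks in a 2-D periodic box; it illustrates the plain RSA process itself, not the paper's clustered-random-graph construction:

```python
import numpy as np

rng = np.random.default_rng(42)

def rsa_accepted_fraction(n_attempts=2000, radius=0.02, box=1.0):
    """Random sequential adsorption of disks in a 2-D periodic box.

    Each arriving disk lands at a uniform location and is accepted only
    if it does not overlap any previously accepted disk. Returns the
    fraction of attempts that were accepted. Parameters are illustrative.
    """
    accepted = []
    for _ in range(n_attempts):
        p = rng.uniform(0.0, box, size=2)
        ok = True
        for q in accepted:
            d = np.abs(p - q)
            d = np.minimum(d, box - d)  # minimum-image (periodic) distance
            if np.hypot(*d) < 2 * radius:
                ok = False
                break
        if ok:
            accepted.append(p)
    return len(accepted) / n_attempts
```

The spatial correlations the paper discusses show up directly here: each accepted disk blocks an exclusion zone for all later arrivals, so the accepted fraction decays as the surface fills toward jamming.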
Decentralized control of large flexible structures by joint decoupling
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Juang, Jer-Nan
1992-01-01
A decentralized control design method is presented for large complex flexible structures by using the idea of joint decoupling. The derivation is based on a coupled substructure state-space model, which is obtained by enforcing conditions of interface compatibility and equilibrium on the substructure state-space models. It is shown that by restricting the control law to localized state feedback and by setting the joint actuator input commands to decouple the joint degrees of freedom (dof) from the interior dof, the global structure control design problem can be decomposed into several substructure control design problems. The substructure control gains and substructure observers are designed based on modified substructure state-space models. The controllers produced by the proposed method can operate successfully at the individual substructure level as well as at the global structure level. Therefore, not only control design but also control implementation is decentralized. Stability and performance requirements of the closed-loop system can be achieved by using any existing state feedback control design method. A two-component mass-spring-damper system and a three-truss structure are used as examples to demonstrate the proposed method.
Status of Technology Development to enable Large Stable UVOIR Space Telescopes
NASA Astrophysics Data System (ADS)
Stahl, H. Philip; MSFC AMTD Team
2017-01-01
NASA MSFC has two funded Strategic Astrophysics Technology projects to develop technology for potential future large missions: AMTD and PTC. The Advanced Mirror Technology Development (AMTD) project is developing technology to make mechanically stable mirrors for a 4-meter or larger UVOIR space telescope. AMTD is demonstrating this technology by making a 1.5-meter-diameter x 200-mm-thick ULE(C) mirror that is a 1/3rd-scale version of a full-size 4-m mirror. AMTD is characterizing the mechanical and thermal performance of this mirror and of a 1.2-meter Zerodur(R) mirror to validate integrated modeling tools. Additionally, AMTD has developed integrated modeling tools which are being used to evaluate primary mirror systems for a potential Habitable Exoplanet Mission and has analyzed the interaction between optical telescope wavefront stability and coronagraph contrast leakage. The Predictive Thermal Control (PTC) project is developing technology to enable highly stable thermal wavefront performance by using integrated modeling tools to predict and actively control the thermal environment of a 4-m or larger UVOIR space telescope.
Methods of Sparse Modeling and Dimensionality Reduction to Deal with Big Data
2015-04-01
supervised learning (c). Our framework consists of two separate phases: (a) first find an initial space in an unsupervised manner; then (b) utilize label...model that can learn thousands of topics from a large set of documents and infer the topic mixture of each document, 2) a supervised dimension reduction...(i) a method of supervised
Illumination in diverse codimensions
NASA Technical Reports Server (NTRS)
Banks, David C.
1994-01-01
This paper derives a model of diffuse and specular illumination in arbitrarily large dimensions, based on a few characteristics of material and light in three-space. It then describes how to adjust for the anomaly of excess brightness in large codimensions. If a surface is grooved or furry, it can be illuminated with a hybrid model that incorporates both the one dimensional geometry (the grooves or fur) and the two dimensional geometry (the surface).
Adaptive control of large space structures using recursive lattice filters
NASA Technical Reports Server (NTRS)
Goglia, G. L.
1985-01-01
The use of recursive lattice filters for identification and adaptive control of large space structures was studied. Lattice filters are used widely in the areas of speech and signal processing. Herein, they are used to identify the structural dynamics model of the flexible structures. This identified model is then used for adaptive control. Before the identified model and control laws are integrated, the identified model is passed through a series of validation procedures, and only when the model passes these validation procedures is control engaged. This type of validation scheme prevents instability when the overall loop is closed. The results obtained from simulation were compared to those obtained from experiments. In this regard, the flexible beam and grid apparatus at the Aerospace Control Research Lab (ACRL) of NASA Langley Research Center were used as the principal candidates for carrying out the above tasks. Another important area of research, namely that of robust controller synthesis, was investigated using frequency domain multivariable controller synthesis methods.
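The identify-validate-engage loop the abstract describes can be sketched compactly; the example below uses a plain recursive least-squares (RLS) update in place of a lattice form (both recursively estimate the coefficients of a structural dynamics model from input/output data), and the system coefficients and validation threshold are illustrative assumptions, not values from the study.

```python
import numpy as np

def rls_identify(u, y, order=2, lam=0.99):
    """Recursively fit an ARX model
        y[k] = a1*y[k-1] + ... + a_n*y[k-n] + b1*u[k-1] + ... + b_n*u[k-n]
    with a standard recursive least-squares update (forgetting factor lam).
    A lattice filter, as in the study, is an order-recursive variant of this.
    """
    n = 2 * order
    theta = np.zeros(n)        # parameter estimate [a1..an, b1..bn]
    P = np.eye(n) * 1e3        # inverse correlation matrix
    for k in range(order, len(y)):
        phi = np.concatenate([y[k - order:k][::-1], u[k - order:k][::-1]])
        gain = P @ phi / (lam + phi @ P @ phi)
        theta = theta + gain * (y[k] - phi @ theta)
        P = (P - np.outer(gain, phi @ P)) / lam
    return theta

# Identify a known 2nd-order system, then run an (illustrative) validation
# check before control would be engaged.
rng = np.random.default_rng(1)
u = rng.standard_normal(2000)
y = np.zeros(2000)
for k in range(2, 2000):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1]
theta = rls_identify(u, y)
prediction_ok = abs(theta[0] - 1.5) < 0.05   # hypothetical validation gate
```

Gating control on a validation test of the identified model, as in the abstract, prevents a transiently bad estimate from destabilizing the closed loop.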
NASA Technical Reports Server (NTRS)
Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.; Parrish, Keith A.; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical (STOP) analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. Temperatures predicted using geometric and thermal math models are mapped to a structural finite element model in order to predict thermally induced deformations. Motions and deformations at optical surfaces are then input to optical models, and optical performance is predicted using either an optical ray trace or a linear optical analysis tool. In addition to baseline performance predictions, a process for performing sensitivity studies to assess modeling uncertainties is described.
Redshift space clustering of galaxies and cold dark matter model
NASA Technical Reports Server (NTRS)
Bahcall, Neta A.; Cen, Renyue; Gramann, Mirt
1993-01-01
The distorting effect of peculiar velocities on the power spectrum and correlation function of IRAS and optical galaxies is studied. The observed redshift-space power spectra and correlation functions of IRAS and optical galaxies over the entire range of scales are directly compared with the corresponding redshift-space distributions in large-scale computer simulations of cold dark matter (CDM) models. It is found that the observed power spectrum of IRAS and optical galaxies is consistent with the spectrum of an Omega = 1 CDM model. The problems that such a model currently faces may be related more to the high value of Omega in the model than to the shape of the spectrum. A low-density CDM model is also investigated and found to be consistent with the data.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1981-01-01
Progress is reported in reading MAGSAT tapes and in a modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere. The modeling technique utilizes a linear current element representation of the large-scale space-current system.
Cosmic Ray Studies with the Fermi Gamma-ray Space Telescope Large Area Telescope
NASA Technical Reports Server (NTRS)
Thompson, David J.; Baldini, L.; Uchiyama, Y.
2012-01-01
The Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope provides both direct and indirect measurements of galactic cosmic rays (CR). The LAT high-statistics observations of the 7 GeV - 1 TeV electron plus positron spectrum and limits on spatial anisotropy constrain models for this cosmic-ray component. On a galactic scale, the LAT observations indicate that cosmic-ray sources may be more plentiful in the outer Galaxy than expected or that the scale height of the cosmic-ray diffusive halo is larger than conventional models. Production of cosmic rays in supernova remnants (SNR) is supported by the LAT gamma-ray studies of several of these, both young SNR and those interacting with molecular clouds.
Cosmic Ray Studies with the Fermi Gamma-ray Space Telescope Large Area Telescope
NASA Technical Reports Server (NTRS)
Thompson, D. J.; Baldini, L.; Uchiyama, Y.
2011-01-01
The Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope provides both direct and indirect measurements of Galactic cosmic rays (CR). The LAT high-statistics observations of the 7 GeV - 1 TeV electron plus positron spectrum and limits on spatial anisotropy constrain models for this cosmic-ray component. On a Galactic scale, the LAT observations indicate that cosmic-ray sources may be more plentiful in the outer Galaxy than expected or that the scale height of the cosmic-ray diffusive halo is larger than conventional models. Production of cosmic rays in supernova remnants (SNR) is supported by the LAT gamma-ray studies of several of these, both young SNR and those interacting with molecular clouds.
NASA Astrophysics Data System (ADS)
Howell, S. M.; Ito, G.; Behn, M. D.; Olive, J. A. L.; Kaus, B.; Popov, A.; Mittelstaedt, E. L.; Morrow, T. A.
2016-12-01
Previous two-dimensional (2-D) modeling studies of abyssal-hill scale fault generation and evolution at mid-ocean ridges have predicted that M, the ratio of magmatic to total extension, strongly influences the total slip, spacing, and rotation of large faults, as well as the morphology of the ridge axis. Scaling relations derived from these 2-D models broadly explain the globally observed decrease in abyssal hill spacing with increasing ridge spreading rate, as well as the formation of large-offset faults close to the ends of slow-spreading ridge segments. However, these scaling relations do not explain some higher resolution observations of segment-scale variability in fault spacing along the Chile Ridge and the Mid-Atlantic Ridge, where fault spacing shows no obvious correlation with M. This discrepancy between observations and 2-D model predictions illuminates the need for three-dimensional (3-D) numerical models that incorporate the effects of along-axis variations in lithospheric structure and magmatic accretion. To this end, we use the geodynamic modeling software LaMEM to simulate 3-D tectono-magmatic interactions in a visco-elasto-plastic lithosphere under extension. We model a single ridge segment subjected to an along-axis gradient in the rate of magma injection, which is simulated by imposing a mass source in a plane of model finite volumes beneath the ridge axis. Outputs of interest include characteristic fault offset, spacing, and along-axis gradients in seafloor morphology. We also examine the effects of along-axis variations in lithospheric thickness and off-axis thickening rate. The main objectives of this study are to quantify the relative importance of the amount of magmatic extension and the local lithospheric structure at a given along-axis location, versus the importance of along-axis communication of lithospheric stresses on the 3-D fault evolution and morphology of intermediate-spreading-rate ridges.
Model Adaptation in Parametric Space for POD-Galerkin Models
NASA Astrophysics Data System (ADS)
Gao, Haotian; Wei, Mingjun
2017-11-01
The development of low-order POD-Galerkin models is largely motivated by the expectation of using a model developed with a set of parameters at their native values to predict the dynamic behaviors of the same system under different parametric values, in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g., a steady state, the mean value of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of flow at other parametric states. For the simple case of flow past a fixed cylinder, an orthogonal mean mode at a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For the more complicated case of flow past an oscillating cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries; then more information (e.g., more POD modes) is required to predict the flow under different oscillation frequencies. Supported by ARL.
Quinn, James; Lovasi, Gina; Bader, Michael; Yousefzadeh, Paulette; Weiss, Christopher; Neckerman, Kathryn
2013-01-01
Purpose: To determine whether body mass index (BMI) is associated with proximity to neighborhood parks, the size of the parks, their cleanliness and the availability of recreational facilities in the parks. Design: Cross-sectional. Setting: New York City. Subjects: 13,102 adults (median age 45 years, 36% male) recruited from 2000–2002. Measures: Anthropometric and socio-demographic data from study subjects were linked to Department of Parks & Recreation data on park space, cleanliness, and facilities. Neighborhood-level socio-demographic and park proximity metrics were created for half-mile radius circular buffers around each subject’s residence. Proximity to park space was measured as the proportion of the subject’s neighborhood buffer area that was total park space, large park space (a park > 6 acres) and small park space (a park <= 6 acres). Analysis: Hierarchical linear models were used to determine whether neighborhood park metrics were associated with BMI. Results: Higher proximity to large park space was significantly associated with lower BMI (beta = −1.69, 95% CI = −2.76, −0.63). Across the population distribution of proximity to large park space, compared to subjects living in neighborhoods at the 10th percentile of the distribution, the covariate-adjusted average BMI was estimated to be 0.35 kg/m2 lower for those living in neighborhoods at the 90th percentile. The proportion of neighborhood area that was small park space was not associated with BMI, nor was park cleanliness or the availability of recreational facilities. Conclusions: Neighborhood proximity to large park spaces is modestly associated with lower BMI in a diverse urban population. PMID:23448416
NASA Astrophysics Data System (ADS)
Tallarita, Gianni; Peterson, Adam
2018-04-01
We perform a numerical study of the phase diagram of the model proposed in [M. Shifman, Phys. Rev. D 87, 025025 (2013), 10.1103/PhysRevD.87.025025], a simple model containing non-Abelian vortices. As in the case of Abrikosov vortices, we map out a region of parameter space in which the system prefers the formation of vortices in ordered lattice structures. These are generalizations of Abrikosov vortex lattices with extra orientational moduli in the vortex cores. At sufficiently large lattice spacing the low-energy theory is described by a sum of CP(1) theories, each located on a vortex site. As the lattice spacing becomes smaller, when the self-interaction of the orientational field becomes relevant, only an overall rotation in internal space survives.
An approach to solving large reliability models
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.
1988-01-01
This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Xiexiaomen; Tutuncu, Azra; Eustes, Alfred
Enhanced Geothermal Systems (EGS) could potentially use technological advancements in coupled implementation of horizontal drilling and multistage hydraulic fracturing techniques in tight oil and shale gas reservoirs along with improvements in reservoir simulation techniques to design and create EGS reservoirs. In this study, a commercial hydraulic fracture simulation package, Mangrove by Schlumberger, was used in an EGS model with largely distributed pre-existing natural fractures to model fracture propagation during the creation of a complex fracture network. The main goal of this study is to investigate optimum treatment parameters in creating multiple large, planar fractures to hydraulically connect a horizontal injection well and a horizontal production well that are 10,000 ft. deep and spaced 500 ft. apart from each other. A matrix of simulations for this study was carried out to determine the influence of reservoir and treatment parameters on preventing (or aiding) the creation of large planar fractures. The reservoir parameters investigated during the matrix simulations include the in-situ stress state and properties of the natural fracture set such as the primary and secondary fracture orientation, average fracture length, and average fracture spacing. The treatment parameters investigated during the simulations were fluid viscosity, proppant concentration, pump rate, and pump volume. A final simulation with optimized design parameters was performed. The optimized design simulation indicated that high fluid viscosity, high proppant concentration, large pump volume and pump rate tend to minimize the complexity of the created fracture network. Additionally, a reservoir with 'friendly' formation characteristics such as large stress anisotropy, natural fractures set parallel to the maximum horizontal principal stress (SHmax), and large natural fracture spacing also promote the creation of large planar fractures while minimizing fracture complexity.
NASA Astrophysics Data System (ADS)
Linares, R.; Palmer, D.; Thompson, D.; Koller, J.
2013-09-01
Recent events in space, including the collision of Russia's Cosmos 2251 satellite with Iridium 33 and China's Feng Yun 1C anti-satellite demonstration, have stressed the capabilities of the Space Surveillance Network (SSN) and its ability to provide accurate and actionable impact probability estimates. The SSN network has the unique challenge of tracking more than 18,000 resident space objects (RSOs) and providing critical collision avoidance warnings to military, NASA, and commercial systems. However, due to the large number of RSOs and the limited number of sensors available to track them, it is impossible to maintain persistent surveillance. Observation gaps result in large propagation intervals between measurements and close approaches. Coupled with nonlinear RSO dynamics, this results in difficulty in modeling the probability distribution functions (pdfs) of the RSOs. In particular, low-Earth orbiting (LEO) satellites are heavily influenced by atmospheric drag, which is very difficult to model accurately. A number of atmospheric models exist which can be classified as either empirical or physics-based models. The current Air Force standard is the High Accuracy Satellite Drag Model (HASDM), which is an empirical model based on observation of calibration satellites. These satellite observations are used to determine model parameters based on their orbit determination solutions. Atmospheric orbits are perturbed by a number of factors including drag coefficient, attitude, and shape of the space object. The satellites used for the HASDM model calibration process are chosen because of their relatively simple shapes, to minimize errors introduced due to shape mis-modeling. Under this requirement the number of calibration satellites that can be used for calibrating the atmospheric models is limited.
Los Alamos National Laboratory (LANL) has established a research effort, called IMPACT (Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking), to improve impact assessment via improved physics-based modeling. As part of this effort calibration satellite observations are used to dynamically calibrate the physics-based model and to improve its forecasting capability. The observations are collected from a variety of sources, including from LANL's own Raven-class optical telescope. This system collects both astrometric and photometric data on space objects. The photometric data will be used to estimate the space objects' attitude and shape. Non-resolved photometric data have been studied by many as a mechanism for space object characterization. Photometry is the measurement of an object's flux or apparent brightness measured over a wavelength band. The temporal variation of photometric measurements is referred to as photometric signature. The photometric optical signature of an object contains information about shape, attitude, size and material composition. This work focuses on the processing of the data collected with LANL's telescope in an effort to use photometric data to expand the number of space objects that can be used as calibration satellites. An Unscented Kalman filter is used to estimate the attitude and angular velocity of the space object; both real data and simulated data scenarios are shown. A number of inactive space objects are used for the real data examples and good estimation results are shown.
IHY Modeling Support at the Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Chulaki, A.; Hesse, Michael; Kuznetsova, Masha; MacNeice, P.; Rastaetter, L.
2005-01-01
The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities during the International Heliospheric Year. In order to tailor CCMC activities to IHY needs, we will also invite community input into our IHY planning activities.
The Large Area Crop Inventory Experiment (LACIE)
NASA Technical Reports Server (NTRS)
Macdonald, R. B.
1976-01-01
A Large Area Crop Inventory Experiment (LACIE) was undertaken to prove out an economically important application of remote sensing from space. The experiment focused upon determination of wheat acreages in the U.S. Great Plains and upon the development and testing of yield models. The results and conclusions are presented.
NASA Astrophysics Data System (ADS)
Kozyra, J. U.; Brandt, P. C.; Cattell, C. A.; Clilverd, M.; de Zeeuw, D.; Evans, D. S.; Fang, X.; Frey, H. U.; Kavanagh, A. J.; Liemohn, M. W.; Lu, G.; Mende, S. B.; Paxton, L. J.; Ridley, A. J.; Rodger, C. J.; Soraas, F.
2010-12-01
Energetic ions and electrons that precipitate into the upper atmosphere from sources throughout geospace carry the influences of space weather disturbances deeper into the atmosphere, possibly contributing to climate variability. The three-dimensional atmospheric effects of these precipitating particles are a function of the energy and species of the particles, lifetimes of reactive species generated during collisions in the atmosphere, the nature of the driving space weather disturbance, and the large-scale transport properties (meteorology) of the atmosphere in the region of impact. Unraveling the features of system-level coupling between solar magnetic variability, space weather and stratospheric dynamics requires a global view of the precipitation, along with its temporal and spatial variation. However, observations of particle precipitation at the system level are sparse and incomplete requiring they be combined with other observations and with large-scale models to provide the global context that is needed to accelerate progress. We compare satellite and ground-based observations of geospace conditions and energetic precipitation (at ring current, radiation belt and auroral energies) to a simulation of the geospace environment during 21-22 January 2005 by the BATS-R-US MHD model coupled with a self-consistent ring current solution. The aim is to explore the extent to which regions of particle precipitation track global magnetic field distortions and ways in which global models enhance our understanding of linkages between solar wind drivers and evolution of energetic particle precipitation.
Proceedings of the Workshop on Identification and Control of Flexible Space Structures, Volume 2
NASA Technical Reports Server (NTRS)
Rodriguez, G. (Editor)
1985-01-01
The results of a workshop on identification and control of flexible space structures held in San Diego, CA, July 4 to 6, 1984 are discussed. The main objectives of the workshop were to provide a forum to exchange ideas in exploring the most advanced modeling, estimation, identification and control methodologies for flexible space structures. The workshop responded to the rapidly growing interest within NASA in large space systems (space station, platforms, antennas, flight experiments) currently under design. Dynamic structural analysis, control theory, structural vibration and stability, and distributed parameter systems are discussed.
Automation and Robotics for Space-Based Systems, 1991
NASA Technical Reports Server (NTRS)
Williams, Robert L., II (Editor)
1992-01-01
The purpose of this in-house workshop was to assess the state-of-the-art of automation and robotics for space operations from an LaRC perspective and to identify areas of opportunity for future research. Over half of the presentations came from the Automation Technology Branch, covering telerobotic control, extravehicular activity (EVA) and intra-vehicular activity (IVA) robotics, hand controllers for teleoperation, sensors, neural networks, and automated structural assembly, all applied to space missions. Other talks covered the Remote Manipulator System (RMS) active damping augmentation, space crane work, modeling, simulation, and control of large, flexible space manipulators, and virtual passive controller designs for space robots.
2011-12-11
CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model that was on display at the NASA Kennedy Space Center Visitor Complex in Florida travels along Schwartz Road on its way toward NASA Kennedy Space Center's Launch Complex 39 turn basin. It is standard procedure for large payloads and equipment to travel against the normal flow of traffic under the supervision of a move crew when being transported on or off center property. The Assembly and Refurbishment Facility, formerly used to process components of space shuttle solid rocket boosters, is in the background at right. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis
Cost effective management of space venture risks
NASA Technical Reports Server (NTRS)
Giuntini, Ronald E.; Storm, Richard E.
1986-01-01
The development of a model for the cost-effective management of space venture risks is discussed. The risk assessment and control program of insurance companies is examined. A simplified system development cycle which consists of a conceptual design phase, a preliminary design phase, a final design phase, a construction phase, and a system operations and maintenance phase is described. The model incorporates insurance safety risk methods and reliability engineering, and testing practices used in the development of large aerospace and defense systems.
Large transient fault current test of an electrical roll ring
NASA Technical Reports Server (NTRS)
Yenni, Edward J.; Birchenough, Arthur G.
1992-01-01
The space station uses precision rotary gimbals to provide for sun tracking of its photovoltaic arrays. Electrical power, command signals and data are transferred across the gimbals by roll rings. Roll rings have been shown to be capable of highly efficient electrical transmission and long life, through tests conducted at the NASA Lewis Research Center and Honeywell's Satellite and Space Systems Division in Phoenix, AZ. Large potential fault currents inherent to the power system's DC distribution architecture have brought about the need to evaluate the effects of large transient fault currents on roll rings. A test recently conducted at Lewis subjected a roll ring to a simulated worst-case space station electrical fault. The system model used to obtain the fault profile is described, along with details of the reduced order circuit that was used to simulate the fault. Test results comparing roll ring performance before and after the fault are also presented.
Dynamic and thermal response finite element models of multi-body space structural configurations
NASA Technical Reports Server (NTRS)
Edighoffer, Harold H.
1987-01-01
Presented is structural dynamics modeling of two multibody space structural configurations. The first configuration is a generic space station model of a cylindrical habitation module, two solar array panels, radiator panel, and central connecting tube. The second is a 15-m hoop-column antenna. Discussed is the special joint elimination sequence used for these large finite element models, so that eigenvalues could be extracted. The generic space station model aided test configuration design and analysis/test data correlation. The model consisted of six finite element models, one of each substructure and one of all substructures as a system. Static analysis and tests at the substructure level fine-tuned the finite element models. The 15-m hoop-column antenna is a truss column and structural ring interconnected with tension stabilizing cables. To the cables, pretensioned mesh membrane elements were attached to form four parabolic shaped antennae, one per quadrant. Imposing thermal preloads in the cables and mesh elements produced pretension in the finite element model. Thermal preload variation in the 96 control cables was adjusted to maintain antenna shape within the required tolerance and to give pointing accuracy.
NASA Technical Reports Server (NTRS)
Walker, Steven A.; Clowdsley, Martha S.; Abston, H. Lee; Simon, Matthew A.; Gallegos, Adam M.
2013-01-01
NASA has plans for long duration missions beyond low Earth orbit (LEO). Outside of LEO, large solar particle events (SPEs), which occur sporadically, can deliver a very large dose in a short amount of time. The relatively low proton energies make SPE shielding practical, and the possibility of the occurrence of a large event drives the need for SPE shielding for all deep space missions. The Advanced Exploration Systems (AES) RadWorks Storm Shelter Team was charged with developing minimal mass SPE storm shelter concepts for missions beyond LEO. The concepts developed included "wearable" shields, shelters that could be deployed at the onset of an event, and augmentations to the crew quarters. The radiation transport codes, human body models, and vehicle geometry tools contained in the On-Line Tool for the Assessment of Radiation In Space (OLTARIS) were used to evaluate the protection provided by each concept within a realistic space habitat and provide the concept designers with shield thickness requirements. Several different SPE models were utilized to examine the dependence of the shield requirements on the event spectrum. This paper describes the radiation analysis methods and the results of these analyses for several of the shielding concepts.
NASA Astrophysics Data System (ADS)
Fisher, Karl B.
1995-08-01
The relation between the galaxy correlation functions in real-space and redshift-space is derived in the linear regime by an appropriate averaging of the joint probability distribution of density and velocity. The derivation recovers the familiar linear theory result on large scales but has the advantage of clearly revealing the dependence of the redshift distortions on the underlying peculiar velocity field; streaming motions give rise to distortions of order O(Ω^0.6/b), while variations in the anisotropic velocity dispersion yield terms of order O(Ω^1.2/b^2). This probabilistic derivation of the redshift-space correlation function is similar in spirit to the derivation of the commonly used "streaming" model, in which the distortions are given by a convolution of the real-space correlation function with a velocity distribution function. The streaming model is often used to model the redshift-space correlation function on small, highly nonlinear, scales. There have been claims in the literature, however, that the streaming model is not valid in the linear regime. Our analysis confirms this claim, but we show that the streaming model can be made consistent with linear theory provided that the model for the streaming has the functional form predicted by linear theory and that the velocity distribution is chosen to be a Gaussian with the correct linear theory dispersion.
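The "familiar linear theory result" the abstract recovers is the Kaiser multiplier; as a sketch in standard notation (assumed here, not taken from the abstract), with P_s and P_r the redshift- and real-space power spectra:

```latex
% Kaiser (1987) linear-theory multiplier relating the redshift-space
% power spectrum P_s to the real-space spectrum P_r (plane-parallel limit;
% \mu is the cosine of the angle between the wavevector and line of sight).
P_s(k,\mu) \;=\; \left(1 + \beta\mu^{2}\right)^{2} P_r(k),
\qquad
\beta \;\equiv\; \frac{\Omega^{0.6}}{b}.
```

Expanding the multiplier gives 1 + 2βμ² + β²μ⁴, so the O(Ω^0.6/b) and O(Ω^1.2/b^2) distortion terms quoted in the abstract correspond to the first- and second-order terms in β.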
Community Coordinated Modeling Center: Addressing Needs of Operational Space Weather Forecasting
NASA Technical Reports Server (NTRS)
Kuznetsova, M.; Maddox, M.; Pulkkinen, A.; Hesse, M.; Rastaetter, L.; Macneice, P.; Taktakishvili, A.; Berrios, D.; Chulaki, A.; Zheng, Y.;
2012-01-01
Models are key elements of space weather forecasting. The Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) hosts a broad range of state-of-the-art space weather models and enables access to complex models through an unmatched automated web-based runs-on-request system. Model output comparisons with observational data carried out by a large number of CCMC users provide an unprecedented mechanism for extensive model testing and broad community feedback on model performance. The CCMC also evaluates models' prediction abilities as an unbiased broker and supports operational model selections. The CCMC is organizing and leading a series of community-wide projects aiming to evaluate the current state of space weather modeling, to address challenges of model-data comparisons, and to define metrics for various users' needs and requirements. Many of the CCMC's models run continuously in real time. Over the years the CCMC has acquired unique experience in developing and maintaining real-time systems. CCMC staff expertise and trusted relations with model owners enable it to keep up to date with rapid advances in model development. The information gleaned from the real-time calculations is tailored to specific mission needs. Model forecasts combined with data streams from NASA and other missions are integrated into an innovative configurable data analysis and dissemination system (http://iswa.gsfc.nasa.gov) that is accessible worldwide. The talk will review the latest progress and discuss opportunities for addressing operational space weather needs in innovative and collaborative ways.
A unified framework for mesh refinement in random and physical space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jing; Stinis, Panos
In recent work we have shown how an accurate reduced model can be utilized to perform mesh refinement in random space. That work relied on the explicit knowledge of an accurate reduced model, which is used to monitor the transfer of activity from the large to the small scales of the solution. Since this is not always available, we present in the current work a framework which shares the merits and basic idea of the previous approach but does not require explicit knowledge of a reduced model. Moreover, the current framework can be applied for refinement in both random and physical space. In this manuscript we focus on the application to random space mesh refinement. We study examples of increasing difficulty (from ordinary to partial differential equations) which demonstrate the efficiency and versatility of our approach. We also provide some results from the application of the new framework to physical space mesh refinement.
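The refinement idea can be sketched generically in one dimension: split cells wherever a local activity indicator exceeds a tolerance, and repeat until the indicator is satisfied everywhere. The indicator below (the jump of a sampled function across a cell) is a purely illustrative stand-in for the paper's reduced-model monitor:

```python
# Generic 1-D adaptive mesh refinement sketch: halve any cell whose local
# activity indicator (here, the jump of f across the cell) exceeds `tol`.
# Illustrative only -- not the reduced-model monitor of the paper.
import math

def refine(f, a, b, tol=0.05, n0=8, max_pass=12):
    xs = [a + (b - a) * i / n0 for i in range(n0 + 1)]   # uniform start grid
    for _ in range(max_pass):
        new = [xs[0]]
        split = False
        for x0, x1 in zip(xs, xs[1:]):
            if abs(f(x1) - f(x0)) > tol:     # large local variation: refine
                new.append(0.5 * (x0 + x1))
                split = True
            new.append(x1)
        xs = new
        if not split:                        # indicator satisfied everywhere
            break
    return xs

# A steep transition near x = 0 attracts most of the points:
grid = refine(lambda x: math.tanh(20.0 * x), -1.0, 1.0)
```

With these settings the grid ends up clustered around the steep transition while the smooth flanks keep the coarse spacing.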
Large-scale shell-model calculation with core excitations for neutron-rich nuclei beyond 132Sn
NASA Astrophysics Data System (ADS)
Jin, Hua; Hasegawa, Munetake; Tazaki, Shigeru; Kaneko, Kazunari; Sun, Yang
2011-10-01
The structure of neutron-rich nuclei with a few nucleons beyond 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including neutron core excitations, a new effective interaction is determined by employing the extended pairing-plus-quadrupole model with monopole corrections. The model provides a systematic description of the energy levels of A=133-135 nuclei up to high spins and reproduces the available data on electromagnetic transitions. The structure of these nuclei is analyzed in detail, with emphasis on effects associated with core excitations. The results show evidence of hexadecupole correlation in addition to octupole correlation in this mass region. The suggested feature of magnetic rotation in 135Te occurs in the present shell-model calculation.
Analysis of a Radiation Model of the Shuttle Space Suit
NASA Technical Reports Server (NTRS)
Anderson, Brooke M.; Nealy, John E.; Kim, Myung-Hee; Qualls, Garry D.; Wilson, John W.
2003-01-01
The extravehicular activity (EVA) required to assemble the International Space Station (ISS) will take approximately 1500 hours, with 400 hours of EVA per year in operations and maintenance. With the Space Station at an inclination of 51.6 deg, the radiation environment is highly variable, with solar activity being of great concern. Thus, it is important to study the dose gradients about the body during an EVA to help determine the cancer risk associated with the different environments the ISS will encounter. In this paper we are concerned only with the trapped radiation (electrons and protons). Two different scenarios are considered: the first is quiet geomagnetic periods in low Earth orbit (LEO), and the second is a large solar particle event in the deep space environment. This study includes a description of how the space suit's computer aided design (CAD) model was developed, along with a description of the human model. Also included is a brief description of the transport codes used to determine the total integrated dose at several locations within the body. Finally, the results of applying the transport codes to the space suit and human model are presented and briefly discussed.
Cosmology and accelerator tests of strongly interacting dark matter
Berlin, Asher; Blinov, Nikita; Gori, Stefania; ...
2018-03-23
A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. We also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.
On the analytical modeling of the nonlinear vibrations of pretensioned space structures
NASA Technical Reports Server (NTRS)
Housner, J. M.; Belvin, W. K.
1983-01-01
Pretensioned structures are receiving considerable attention as candidate large space structures. A typical example is a hoop-column antenna. The large number of preloaded members requires efficient analytical methods for concept validation and design. Validation through analysis is especially important since ground testing may be limited by gravity effects and structural size. The objective of the present investigation is to examine the analytical modeling of pretensioned members undergoing nonlinear vibrations. Two approximate nonlinear analyses are developed to model general structural arrangements which include beam-columns and pretensioned cables attached to a common nucleus, such as may occur at a joint of a pretensioned structure. Attention is given to structures undergoing nonlinear steady-state oscillations due to sinusoidal excitation forces. Three analyses (linear, quasi-linear, and nonlinear) are conducted and applied to study the response of a relatively simple cable-stiffened structure.
Control-structure interaction study for the Space Station solar dynamic power module
NASA Technical Reports Server (NTRS)
Cheng, J.; Ianculescu, G.; Ly, J.; Kim, M.
1991-01-01
The authors investigate the feasibility of using a conventional PID (proportional plus integral plus derivative) controller design to perform the pointing and tracking functions for the Space Station Freedom solar dynamic power module. Using this simple controller design, the control/structure interaction effects were also studied without assuming frequency bandwidth separation. The results suggest the feasibility of a simple solar dynamic control solution with a reduced-order model that satisfies the basic system pointing and stability requirements. However, the conventional control design approach is shown to be strongly influenced by the order of reduction of the plant model, i.e., the number of elastic modes retained from the full-order model. This suggests that, for complex large space structures, such as the Space Station Freedom solar dynamic module, conventional control system design methods may not be adequate.
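As a toy illustration of the kind of conventional PID loop the abstract discusses, here is a minimal discrete PID regulating the pointing angle of a rigid unit-inertia plant. The plant, gains, and setpoint are illustrative assumptions, not the Space Station Freedom design:

```python
# Minimal discrete PID pointing-control sketch on a rigid double-integrator
# plant (unit inertia). Gains and plant are illustrative assumptions only.

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=5000):
    """Track `setpoint` with a PID loop; returns the final pointing angle."""
    theta, omega = 0.0, 0.0               # pointing angle and rate
    integral = 0.0
    prev_err = setpoint - theta
    for _ in range(steps):
        err = setpoint - theta
        integral += err * dt
        deriv = (err - prev_err) / dt
        torque = kp * err + ki * integral + kd * deriv
        prev_err = err
        omega += torque * dt              # unit inertia: angular accel = torque
        theta += omega * dt
    return theta

final = simulate_pid(kp=4.0, ki=0.5, kd=3.0)
```

With these gains the closed loop is well damped and the integral term removes steady-state error; a flexible-mode plant, as in the abstract, would add lightly damped poles that such a design must avoid exciting.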
Parametric Modeling for Fluid Systems
NASA Technical Reports Server (NTRS)
Pizarro, Yaritzmar Rosario; Martinez, Jonathan
2013-01-01
Fluid Systems involves different projects that require parametric modeling, i.e., a model that maintains consistent relationships between elements as it is manipulated. One of these projects is the Neo Liquid Propellant Testbed, which is part of Rocket U. As part of Rocket U (Rocket University), engineers at NASA's Kennedy Space Center in Florida have the opportunity to develop critical flight skills as they design, build, and launch high-powered rockets. To build the Neo testbed, hardware from the Space Shuttle Program was repurposed. Modeling for Neo included fittings, valves, frames, and tubing, among others. These models help in the review process, to make sure regulations are being followed. Another fluid systems project that required modeling is Plant Habitat's TCUI test project. Plant Habitat is a plan to develop a large growth chamber to study the effects of long-duration microgravity exposure on plants in space. Work for this project included the design and modeling of a duct vent for flow testing. Parametric modeling for these projects was done using Creo Parametric 2.0.
Models of Solar Wind Structures and Their Interaction with the Earth's Space Environment
NASA Astrophysics Data System (ADS)
Watermann, J.; Wintoft, P.; Sanahuja, B.; Saiz, E.; Poedts, S.; Palmroth, M.; Milillo, A.; Metallinou, F.-A.; Jacobs, C.; Ganushkina, N. Y.; Daglis, I. A.; Cid, C.; Cerrato, Y.; Balasis, G.; Aylward, A. D.; Aran, A.
2009-11-01
The discipline of “Space Weather” is built on the scientific foundation of solar-terrestrial physics but with a strong orientation toward applied research. Models describing the solar-terrestrial environment are therefore at the heart of this discipline, for both physical understanding of the processes involved and establishing predictive capabilities of the consequences of these processes. Depending on the requirements, purely physical models, semi-empirical or empirical models are considered to be the most appropriate. This review focuses on the interaction of solar wind disturbances with geospace. We cover interplanetary space, the Earth’s magnetosphere (with the exception of radiation belt physics), the ionosphere (with the exception of radio science), the neutral atmosphere and the ground (via electromagnetic induction fields). Space weather relevant state-of-the-art physical and semi-empirical models of the various regions are reviewed. They include models for interplanetary space, its quiet state and the evolution of recurrent and transient solar perturbations (corotating interaction regions, coronal mass ejections, their interplanetary remnants, and solar energetic particle fluxes). Models of coupled large-scale solar wind-magnetosphere-ionosphere processes (global magnetohydrodynamic descriptions) and of inner magnetosphere processes (ring current dynamics) are discussed. Achievements in modeling the coupling between magnetospheric processes and the neutral and ionized upper and middle atmospheres are described. Finally we mention efforts to compile comprehensive and flexible models from selections of existing modules applicable to particular regions and conditions in interplanetary space and geospace.
Thatcher, T L; Wilson, D J; Wood, E E; Craig, M J; Sextro, R G
2004-08-01
Scale modeling is a useful tool for analyzing complex indoor spaces. Scale model experiments can reduce experimental costs, improve control of flow and temperature conditions, and provide a practical method for pretesting full-scale system modifications. However, changes in physical scale and working fluid (air or water) can complicate interpretation of the equivalent effects in the full-scale structure. This paper presents a detailed scaling analysis of a water tank experiment designed to model a large indoor space, and experimental results obtained with this model to assess the influence of furniture and people in the pollutant concentration field at breathing height. Theoretical calculations are derived for predicting the effects from losses of molecular diffusion, small scale eddies, turbulent kinetic energy, and turbulent mass diffusivity in a scale model, even without Reynolds number matching. Pollutant dispersion experiments were performed in a water-filled 30:1 scale model of a large room, using uranine dye injected continuously from a small point source. Pollutant concentrations were measured in a plane, using laser-induced fluorescence techniques, for three interior configurations: unobstructed, table-like obstructions, and table-like and figure-like obstructions. Concentrations within the measurement plane varied by more than an order of magnitude, even after the concentration field was fully developed. Objects in the model interior had a significant effect on both the concentration field and fluctuation intensity in the measurement plane. PRACTICAL IMPLICATION: This scale model study demonstrates both the utility of scale models for investigating dispersion in indoor environments and the significant impact of turbulence created by furnishings and people on pollutant transport from floor level sources. In a room with no furniture or occupants, the average concentration can vary by about a factor of 3 across the room. Adding furniture and occupants can increase this spatial variation by another factor of 3.
Optimal variable-grid finite-difference modeling for porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Li, Haishan
2014-12-01
Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derive optimal staggered-grid finite-difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane-wave theory, and the FD coefficients were then obtained using Taylor expansion. Dispersion analysis and modeling results demonstrate that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
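The Taylor-expansion step mentioned in the abstract can be illustrated for the classical fixed-spacing case: the coefficients of a 2M-th order staggered-grid first-derivative stencil solve a small linear system obtained by matching Taylor terms. This is the generic textbook construction, not the paper's variable-grid scheme itself:

```python
# Staggered-grid FD coefficients from Taylor expansion: solve
#   sum_m c_m (2m-1)^(2j-1) = delta_{j,1},  j = 1..M
# for the 2M-th order first-derivative stencil
#   du/dx |_i ~ (1/h) sum_m c_m (u_{i+(2m-1)/2} - u_{i-(2m-1)/2}).
# Textbook construction; the paper's variable-grid scheme builds on this idea.
from fractions import Fraction

def staggered_coeffs(M):
    """Exact coefficients c_1..c_M of the 2M-th order staggered stencil."""
    A = [[Fraction((2 * m - 1) ** (2 * j - 1)) for m in range(1, M + 1)]
         for j in range(1, M + 1)]
    b = [Fraction(1)] + [Fraction(0)] * (M - 1)
    # Gaussian elimination with partial pivoting, exact in rationals.
    for col in range(M):
        piv = max(range(col, M), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, M):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, M):
                A[r][c] -= f * A[col][c]
    x = [Fraction(0)] * M
    for r in range(M - 1, -1, -1):        # back substitution
        s = sum(A[r][c] * x[c] for c in range(r + 1, M))
        x[r] = (b[r] - s) / A[r][r]
    return x

coeffs = staggered_coeffs(2)   # 4th-order stencil: [9/8, -1/24]
```

For M = 2 this recovers the familiar 4th-order staggered coefficients 9/8 and -1/24.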
Xu, Enhua; Ten-No, Seiichiro L
2018-06-05
Partially linearized external models to active-space coupled-cluster through hextuple excitations, for example, CC{SDtqph}L, CCSD{tqph}L, and CCSD{tqph}hyb, are implemented and compared with the full active-space CCSDtqph. The computational scaling of CCSDtqph coincides with that of the standard coupled-cluster singles and doubles (CCSD), yet with a much larger prefactor. The approximate schemes that linearize the external excitations higher than doubles are significantly cheaper than the full CCSDtqph model. These models are applied to investigate the bond dissociation energies of diatomic molecules (HF, F2, CuH, and CuF) and the potential energy surfaces of the bond dissociation processes of HF, CuH, H2O, and C2H4. Among the approximate models, CCSD{tqph}hyb provides very accurate descriptions compared with CCSDtqph for all of the tested systems. © 2018 Wiley Periodicals, Inc.
Underexpanded Screeching Jets From Circular, Rectangular, and Elliptic Nozzles
NASA Technical Reports Server (NTRS)
Panda, J.; Raman, G.; Zaman, K. B. M. Q.
2004-01-01
The screech frequency and amplitude, the shock spacing, the hydrodynamic-acoustic standing wave spacing, and the convective velocity of large organized structures are measured in the nominal Mach number range of 1.1 ≤ Mj ≤ 1.9 for supersonic, underexpanded jets exhausting from a circular, a rectangular, and an elliptic nozzle. This provides a carefully measured data set useful in comparing the importance of various physical parameters in the screech generation process. The hydrodynamic-acoustic standing wave is formed between the potential pressure field of large turbulent structures and the acoustic pressure field of the screech sound. It has been demonstrated earlier that, in the currently available screech frequency prediction models, replacement of the shock spacing by the standing wave spacing provides an exact expression. In view of this newly found evidence, a comparison is made between the average standing wavelength and the average shock spacing. It is found that there exists a small, yet important, difference, which is dependent on the azimuthal screech mode. For example, in the flapping modes of circular, rectangular, and elliptic jets, the standing wavelength is slightly longer than the shock spacing, while for the helical screech mode in a circular jet the opposite is true. This difference accounts for the departure of the existing models from predicting the exact screech frequency. Another important parameter, necessary in screech prediction, is the convective velocity of the large organized structures. It is demonstrated that the presence of the hydrodynamic-acoustic standing wave, even inside the jet shear layer, becomes a significant source of error in convective velocity data obtained using conventional methods. However, a new relationship, using the standing wavelength and screech frequency, is shown to provide more accurate results.
MAPPING GROWTH AND GRAVITY WITH ROBUST REDSHIFT SPACE DISTORTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwan, Juliana; Lewis, Geraint F.; Linder, Eric V.
2012-04-01
Redshift space distortions (RSDs) caused by galaxy peculiar velocities provide a window onto the growth rate of large-scale structure and a method for testing general relativity. Through a comparison of N-body simulations with various extensions of perturbation theory beyond the linear regime, we investigate the robustness of cosmological parameter extraction, including the gravitational growth index γ. We find that the Kaiser formula and some perturbation theory approaches bias the growth rate by 1σ or more relative to the fiducial at scales as large as k > 0.07 h Mpc^-1. This bias propagates to estimates of the gravitational growth index as well as Ω_m and the equation-of-state parameter, and presents a significant challenge to modeling RSDs. We also determine an accurate fitting function for a combination of line-of-sight damping and higher-order angular dependence that allows robust modeling of the redshift space power spectrum to substantially higher k.
Computational Methodologies for Real-Space Structural Refinement of Large Macromolecular Complexes
Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus
2017-01-01
The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875
Simulating Space Capsule Water Landing with Explicit Finite Element Method
NASA Technical Reports Server (NTRS)
Wang, John T.; Lyle, Karen H.
2007-01-01
A study of using an explicit nonlinear dynamic finite element code for simulating the water landing of a space capsule was performed. The finite element model contains Lagrangian shell elements for the space capsule and Eulerian solid elements for the water and air. An Arbitrary Lagrangian Eulerian (ALE) solver and a penalty coupling method were used for predicting the fluid and structure interaction forces. The space capsule was first assumed to be rigid, so the numerical results could be correlated with closed form solutions. The water and air meshes were continuously refined until the solution was converged. The converged maximum deceleration predicted is bounded by the classical von Karman and Wagner solutions and is considered to be an adequate solution. The refined water and air meshes were then used in the models for simulating the water landing of a capsule model that has a flexible bottom. For small pitch angle cases, the maximum deceleration from the flexible capsule model was found to be significantly greater than the maximum deceleration obtained from the corresponding rigid model. For large pitch angle cases, the difference between the maximum deceleration of the flexible model and that of its corresponding rigid model is smaller. Test data of Apollo space capsules with a flexible heat shield qualitatively support the findings presented in this paper.
Space station dynamic modeling, disturbance accommodation, and adaptive control
NASA Technical Reports Server (NTRS)
Wang, S. J.; Ih, C. H.; Lin, Y. H.; Metter, E.
1985-01-01
Dynamic models for two space station configurations were derived. Space shuttle docking disturbances and their effects on the station and solar panels are quantified. It is shown that hard shuttle docking can cause solar panel buckling. Soft docking and berthing can substantially reduce structural loads at the expense of large shuttle and station attitude excursions. It is found that predocking shuttle momentum reduction is necessary to achieve safe and routine operations. A direct model-reference adaptive control is synthesized and evaluated for station model parameter errors and plant dynamics truncations. Both the rigid body and the flexible modes are treated. It is shown that convergence of the adaptive algorithm can be achieved in 100 seconds with reasonable performance, even during shuttle hard docking operations in which station mass and inertia are instantaneously changed by more than 100%.
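The model-reference idea in the abstract can be sketched on a scalar toy plant: an adaptive feedback/feedforward law drives the plant to track a chosen reference model despite unknown plant parameters. This is a generic direct MRAC with a Lyapunov-type gradient update and illustrative parameters, not the station model:

```python
# Toy direct model-reference adaptive control (Lyapunov gradient rule) on a
# scalar plant x' = a_p*x + b_p*u tracking the reference model
# x_m' = a_m*x_m + b_m*r. Illustrative parameters only, not the station model.
def mrac_tracking_error(a_p=-1.0, b_p=1.0, a_m=-2.0, b_m=2.0,
                        gamma=2.0, r=1.0, dt=1e-3, T=50.0):
    x = x_m = 0.0            # plant and reference-model states
    th_x = th_r = 0.0        # adaptive feedback/feedforward gains
    for _ in range(int(T / dt)):
        e = x - x_m
        u = th_x * x + th_r * r
        # Gradient adaptation (assumes b_p > 0); e -> 0 by Lyapunov argument.
        th_x -= gamma * e * x * dt
        th_r -= gamma * e * r * dt
        x += (a_p * x + b_p * u) * dt        # plant (Euler step)
        x_m += (a_m * x_m + b_m * r) * dt    # stable reference model
    return abs(x - x_m)

err = mrac_tracking_error()
```

The gains adapt until the closed-loop plant matches the reference model, so the tracking error decays; the abstract's controller does the analogous thing on a much larger multibody state.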
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1982-01-01
The status of the initial testing of the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is reported. The modeling technique utilizes a linear current element representation of the large scale space-current system.
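A minimal sketch of the linear-current-element idea: sum Biot-Savart contributions from a chain of straight current elements and check the result against the known infinite-wire field. This is the generic construction, not the actual ionospheric/magnetospheric current model of the report:

```python
# Linear-current-element Biot-Savart sum: B at a field point from straight
# current segments, each treated as one element at its midpoint.
# Generic construction; not the report's ionosphere/magnetosphere model.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def biot_savart(segments, current, point):
    """Sum mu0*I/(4*pi) * dl x r / |r|^3 over straight segments."""
    bx = by = bz = 0.0
    for (x0, y0, z0), (x1, y1, z1) in segments:
        dlx, dly, dlz = x1 - x0, y1 - y0, z1 - z0        # element vector dl
        mx, my, mz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
        rx, ry, rz = point[0] - mx, point[1] - my, point[2] - mz
        r = math.sqrt(rx * rx + ry * ry + rz * rz)
        k = MU0 * current / (4 * math.pi * r ** 3)
        bx += k * (dly * rz - dlz * ry)                  # dl x r
        by += k * (dlz * rx - dlx * rz)
        bz += k * (dlx * ry - dly * rx)
    return bx, by, bz

# Long straight wire along z, field sampled 1 m off-axis:
N, L = 20000, 1000.0
zs = [-L + 2 * L * i / N for i in range(N + 1)]
wire = [((0, 0, zs[i]), (0, 0, zs[i + 1])) for i in range(N)]
B = biot_savart(wire, current=1.0, point=(1.0, 0.0, 0.0))
```

For a 1 A wire at 1 m the sum reproduces the analytic infinite-wire field mu0*I/(2*pi*r) = 2e-7 T to well under 0.1%.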
A stochastic differential equation model for the foraging behavior of fish schools.
Tạ, Tôn Việt; Nguyen, Linh Thi Hoai
2018-03-15
Constructing models of living organisms locating food sources has important implications for understanding animal behavior and for the development of distribution technologies. This paper presents a novel simple model of stochastic differential equations for the foraging behavior of fish schools in a space including obstacles. The model is studied numerically. Three configurations of space with various food locations are considered. In the first configuration, fish swim in free but limited space. All individuals can find food with large probability while keeping their school structure. In the second and third configurations, they move in limited space with one and two obstacles, respectively. Our results reveal that the probability of foraging success is highest in the first configuration, and smallest in the third one. Furthermore, when school size increases up to an optimal value, the probability of foraging success tends to increase. When it exceeds an optimal value, the probability tends to decrease. The results agree with experimental observations.
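A drastically simplified Euler-Maruyama sketch of such an SDE foraging model: each fish drifts toward a food source plus Brownian noise. The paper's model additionally includes schooling interactions and obstacles; all parameters here are illustrative:

```python
# Euler-Maruyama sketch of a (much simplified) stochastic foraging model:
# dX_i = -k (X_i - food) dt + sigma dW_i for each fish i, in 2-D.
# Illustrative only; the paper's model also has schooling terms and obstacles.
import math
import random

def simulate(n_fish=20, food=(5.0, 5.0), k=1.0, sigma=0.1,
             dt=0.01, steps=500, seed=42):
    rng = random.Random(seed)
    pos = [[0.0, 0.0] for _ in range(n_fish)]   # school starts at the origin
    sq = math.sqrt(dt)
    for _ in range(steps):
        for p in pos:
            for d in range(2):
                drift = -k * (p[d] - food[d])   # attraction toward the food
                p[d] += drift * dt + sigma * sq * rng.gauss(0.0, 1.0)
    return sum(math.hypot(p[0] - food[0], p[1] - food[1])
               for p in pos) / n_fish           # mean distance to the food

mean_dist = simulate()
```

After five time units of drift the school's mean distance to the food has collapsed from about 7 to the noise floor set by sigma and k.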
NASA Technical Reports Server (NTRS)
Ko, William L.; Olona, Timothy; Muramoto, Kyle M.
1990-01-01
Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for finite element thermal modeling of space shuttle orbiter-type large, hypersonic aircraft structures. These criteria are based on rigorous studies of solution accuracy using different finite element models, with different element densities, set up for one cell of the orbiter wing. Also, a method for optimizing the transient thermal analysis computer central processing unit (CPU) time is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled to examine thermal analysis solution accuracy and the extent of the computation CPU time requirements. The results showed that the distributions of the structural temperatures and the thermal stresses obtained from this wing segment model were satisfactory and the computation CPU time was at an acceptable level. The studies offer hope that modeling large, hypersonic aircraft structures using high-density elements for transient thermal analysis is possible if a CPU optimization technique is used.
Verification of Space Station Secondary Power System Stability Using Design of Experiment
NASA Technical Reports Server (NTRS)
Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce
1998-01-01
This paper describes analytical methods used in verification of large DC power systems, with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative-resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large complex distributed power systems is not practical due to the size and complexity of the system. Computer modeling has been used extensively to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs that are necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system, characterizes various operating scenarios, and identifies the ones with potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples using applications of DoE to analysis and verification of the ISS power system are provided.
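The DoE idea of extracting parameter effects from a structured set of runs can be sketched with a two-level full factorial: evaluate a response at every +/-1 factor combination and estimate each factor's main effect. The response function below is a hypothetical stand-in, not an ISS converter model:

```python
# Two-level full-factorial sketch of the DoE idea: run all +/-1 factor
# combinations and estimate main effects. The toy response is a stand-in
# for a real system model, not an ISS converter simulation.
from itertools import product

def main_effects(response, n_factors):
    """Average change in response when each factor switches from -1 to +1."""
    runs = list(product((-1, 1), repeat=n_factors))
    ys = {run: response(run) for run in runs}
    effects = []
    for f in range(n_factors):
        hi = [ys[r] for r in runs if r[f] == 1]
        lo = [ys[r] for r in runs if r[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

def toy(x):
    # Hypothetical response: factor 0 dominates, factor 2 is inert,
    # plus an interaction between factors 0 and 1.
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2] + 1.0 * x[0] * x[1]

effects = main_effects(toy, 3)
```

The full factorial recovers main effects of 6.0, 1.0, and 0.0 here; the interaction term averages out of the main effects, which is why fractional designs can cut the run count further, as the paper exploits for hundreds of converters.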
Large scale cryogenic fluid systems testing
NASA Technical Reports Server (NTRS)
1992-01-01
NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling long-term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.
The Microgravity Isolation Mount: A Linearized State-Space Model a la Newton and Kane
NASA Technical Reports Server (NTRS)
Hampton, R. David; Tryggvason, Bjarni V.; DeCarufel, Jean; Townsend, Miles A.; Wagar, William O.
1999-01-01
Vibration acceleration levels on large space platforms exceed the requirements of many space experiments. The Microgravity Vibration Isolation Mount (MIM) was built by the Canadian Space Agency to attenuate these disturbances to acceptable levels, and has been operational on the Russian Space Station Mir since May 1996. It has demonstrated good isolation performance and has supported several materials science experiments. The MIM uses Lorentz (voice-coil) magnetic actuators to levitate and isolate payloads at the individual experiment/sub-experiment (versus rack) level. Payload acceleration, relative position, and relative orientation (Euler-parameter) measurements are fed to a state-space controller. The controller, in turn, determines the actuator currents needed for effective experiment isolation. This paper presents the development of an algebraic, state-space model of the MIM, in a form suitable for optimal controller design. The equations are first derived using Newton's Second Law directly; then a second derivation (i.e., validation) of the same equations is provided, using Kane's approach.
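The state-space form the paper derives can be illustrated with a deliberately simplified stand-in: a single-axis levitated payload with a Lorentz-coil force input, written as x_dot = A x + B u and stepped forward in time. The mass value and one-degree-of-freedom model are assumptions for illustration, not the MIM's actual multi-axis dynamics.

```python
import numpy as np

m = 10.0                       # payload mass in kg (assumed)
A = np.array([[0.0, 1.0],      # state x = [position, velocity]
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0 / m]])      # coil force enters as acceleration u/m

def step(x, u, dt=1e-3):
    """Advance the state one time step with forward-Euler integration."""
    return x + dt * (A @ x + B @ np.array([u]))

# Apply a constant 1 N force for one second starting from rest.
x = np.zeros(2)
for _ in range(1000):
    x = step(x, 1.0)
print(x)   # velocity approaches u*t/m = 0.1 m/s
```

A controller designed against this model would close the loop by setting u from the measured state, which is the role of the MIM's state-space controller described above.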
A Very Large Area Network (VLAN) knowledge-base applied to space communication problems
NASA Technical Reports Server (NTRS)
Zander, Carol S.
1988-01-01
This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit from the model are discussed, and then an enhanced version of this model, incorporating the knowledge needed for the missile detection-destruction problem, is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically, existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.
An Evaluation of Cosmological Models from the Expansion and Growth of Structure Measurements
Zhai, Zhongxu; Blanton, Michael; Slosar, Anze; ...
2017-12-01
Here, we compare a large suite of theoretical cosmological models to observational data from the cosmic microwave background, baryon acoustic oscillation measurements of expansion, Type Ia supernova measurements of expansion, redshift space distortion measurements of the growth of structure, and the local Hubble constant. Our theoretical models include parametrizations of dark energy as well as physical models of dark energy and modified gravity. We determine the constraints on the model parameters, incorporating the redshift space distortion data directly in the analysis. To determine whether models can be ruled out, we evaluate the p-value (the probability under the model of obtaining data as bad or worse than the observed data). In our comparison, we find the well-known tension of H0 with the other data; no model resolves this tension successfully. Among the models we consider, the large-scale growth of structure data does not affect the modified gravity models as a category particularly differently from dark energy models; it matters for some modified gravity models but not others, and the same is true for dark energy models. We compute predicted observables for each model under current observational constraints, and identify models for which future observational constraints will be particularly informative.
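The p-value criterion described above can be illustrated with a toy Monte Carlo sketch: under a model with independent Gaussian errors, chi-squared summarizes how badly the model fits, and the p-value is the probability of drawing data "as bad or worse" than observed. The measurement count and observed chi-squared below are invented numbers; the paper's actual likelihood computation is more involved.

```python
import random

random.seed(42)

n_points = 20          # number of independent measurements (assumed)
chi2_obs = 31.4        # chi-squared of the model against the data (assumed)

def chi2_draw(n):
    """One chi-squared draw: sum of n squared standard-normal residuals."""
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n))

# Monte Carlo estimate of P(chi^2 >= observed) under the model.
trials = 20000
worse = sum(chi2_draw(n_points) >= chi2_obs for _ in range(trials))
p_value = worse / trials
print(p_value)   # a small p-value means the model is disfavored
```

In practice the tail probability would be computed from the chi-squared distribution directly; the Monte Carlo form just makes the "as bad or worse" definition concrete.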
Gething, Peter W; Patil, Anand P; Hay, Simon I
2010-04-01
Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty--the fidelity of predictions at each mapped pixel--but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
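The role of joint simulation in aggregation can be sketched as follows: given joint draws of prevalence across pixels (faked below with a shared "regional" shift plus per-pixel noise), any aggregate, such as population-weighted mean prevalence or population above a risk threshold, is computed once per realization, and the spread of those per-realization values is the uncertainty of the aggregate itself. All pixel values and populations here are invented for illustration; the paper's approximating algorithm for generating the joint draws is the hard part this sketch skips.

```python
import random

random.seed(0)

n_pixels = 100
population = [random.randint(1_000, 50_000) for _ in range(n_pixels)]

def joint_realization():
    """One joint draw: shared regional shift plus independent pixel noise,
    clamped to the valid prevalence range [0, 1]."""
    regional = random.gauss(0.0, 0.05)
    return [min(max(0.3 + regional + random.gauss(0.0, 0.1), 0.0), 1.0)
            for _ in range(n_pixels)]

threshold = 0.4
mean_prev, pop_at_risk = [], []
total = sum(population)
for _ in range(500):
    prev = joint_realization()
    # Per-realization aggregates: weighted mean and population above threshold.
    mean_prev.append(sum(p * q for p, q in zip(prev, population)) / total)
    pop_at_risk.append(sum(q for p, q in zip(prev, population) if p > threshold))

mean_prev.sort()
lo, hi = mean_prev[12], mean_prev[487]   # ~95% interval for the aggregate
print(lo, hi)
```

Simulating pixels independently (dropping the shared shift) would understate the interval, which is why the joint simulation the paper develops is essential for large-area aggregates.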
Paparo, M.; Benko, J. M.; Hareter, M.; ...
2016-05-11
In this study, a sequence search method was developed to search for regular frequency spacing in δ Scuti stars through visual inspection and an algorithmic search. We searched for sequences of quasi-equally spaced frequencies, containing at least four members per sequence, in 90 δ Scuti stars observed by CoRoT. We found an unexpectedly large number of independent series of regular frequency spacing in 77 δ Scuti stars (from one to eight sequences) in the non-asymptotic regime. We introduce the sequence search method, presenting the sequences and echelle diagram of CoRoT 102675756 and the structure of the algorithmic search. Four sequences (echelle ridges) were found in the 5–21 d^-1 region, where the pairs of the sequences are shifted (by between 0.5 and 0.59 d^-1) by twice the value of the estimated rotational splitting frequency (0.269 d^-1). The general conclusions for the whole sample are also presented in this paper. The statistics of the spacings derived by the sequence search method and by FT (Fourier transform of the frequencies), and the statistics of the shifts, are also compared. In many stars more than one almost equally valid spacing appeared. The model frequencies of FG Vir and their rotationally split components were used to formulate the possible explanation that one spacing is the large separation while the other is the sum of the large separation and the rotational frequency. In CoRoT 102675756, the two spacings (2.249 and 1.977 d^-1) are in better agreement with the sum of a possible 1.710 d^-1 large separation and two or one times, respectively, the value of the rotational frequency.
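The core of such a sequence search can be sketched simply: scan a frequency list for chains of at least four members whose consecutive differences match a trial spacing within a tolerance. The frequency list, spacing, and tolerance below are invented; the published algorithm is considerably more elaborate (trial spacings are themselves scanned, and sequences are cross-checked on echelle diagrams).

```python
def find_sequences(freqs, spacing, tol=0.05, min_len=4):
    """Greedily grow chains of frequencies separated by ~spacing."""
    freqs = sorted(freqs)
    sequences = []
    used = set()
    for start in freqs:
        if start in used:
            continue
        chain = [start]
        while True:
            target = chain[-1] + spacing
            candidates = [f for f in freqs if abs(f - target) <= tol]
            if not candidates:
                break
            # Take the frequency closest to the predicted next member.
            chain.append(min(candidates, key=lambda f: abs(f - target)))
        if len(chain) >= min_len:
            sequences.append(chain)
            used.update(chain)
    return sequences

# Toy frequency list (d^-1) with an embedded spacing of ~2.25 d^-1.
freqs = [5.10, 6.00, 7.36, 8.90, 9.61, 11.85, 13.20, 14.11, 16.35]
seqs = find_sequences(freqs, spacing=2.25)
print(seqs)
```

Plotting each recovered chain against frequency modulo the spacing gives the echelle ridges the abstract refers to.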
2011-12-11
CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model that was on display at the NASA Kennedy Space Center Visitor Complex in Florida negotiates the turn from Kennedy Parkway onto Schwartz Road on its way toward NASA Kennedy Space Center's Launch Complex 39 turn basin. It is standard procedure for large payloads and equipment to travel against the normal flow of traffic under the supervision of a move crew when being transported on or off center property. The 525-foot-tall Vehicle Assembly Building peeps over the treetops, at right. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis
2011-12-11
CAPE CANAVERAL, Fla. – The transporter carrying the high-fidelity space shuttle model that was on display at the NASA Kennedy Space Center Visitor Complex in Florida makes a wide turn into the right-hand lane of the NASA Causeway as it leaves the visitor complex on its way to NASA Kennedy Space Center's Launch Complex 39 turn basin. It is standard procedure for large payloads and equipment to travel against the normal flow of traffic under the supervision of a move crew when being transported on or off center property. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis
NASA Astrophysics Data System (ADS)
Khoei, A. R.; Samimi, M.; Azami, A. R.
2007-02-01
In this paper, an application of the reproducing kernel particle method (RKPM) to the plasticity behavior of pressure-sensitive materials is presented. The RKPM technique is implemented in large deformation analysis of the powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions. The essential boundary conditions are enforced by use of the penalty approach. The support of the RKPM shape function covers the same set of particles during powder compaction, hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed for numerical simulation of pressure-sensitive materials. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and the hydrostatic axis. The moving cap expands in the stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations are performed to demonstrate the applicability of the algorithm in modeling powder forming processes, and the results are compared to those obtained from finite element simulation to demonstrate the accuracy of the proposed model.
Clustering fossils in solid inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhshik, Mohammad, E-mail: m.akhshik@ipm.ir
In solid inflation the single-field non-Gaussianity consistency condition is violated. As a result, a long tensor perturbation induces observable clustering fossils in the form of a quadrupole anisotropy in the large-scale structure power spectrum. In this work we revisit the bispectrum analysis for the scalar-scalar-scalar and tensor-scalar-scalar bispectra for the general parameter space of solid inflation. We consider the parameter space of the model in which the level of non-Gaussianity generated is consistent with the Planck constraints. Specializing to this allowed range of model parameters, we calculate the quadrupole anisotropy induced by the long tensor perturbations on the power spectrum of the scalar perturbations. We argue that the imprints of clustering fossils from primordial gravitational waves on large-scale structures can be detected by future galaxy surveys.
A multilevel control system for the large space telescope. [numerical analysis/optimal control
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.
1975-01-01
A multilevel scheme was proposed for control of Large Space Telescope (LST) modeled by a three-axis-six-order nonlinear equation. Local controllers were used on the subsystem level to stabilize motions corresponding to the three axes. Global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed on the subsystem level, and global control was again used to reduce (nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general tools for design and then used in the design of the LST Control System. The methods are entirely computerized, so that they can accommodate higher order LST models with both conceptual and numerical advantages over standard straightforward design techniques.
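The local quadratic-optimization step on the subsystem level can be sketched with a discrete-time LQR gain for a single axis, obtained by iterating the Riccati difference equation to convergence. The double-integrator model below is a stand-in for one axis subsystem, not the paper's three-axis sixth-order LST model, and the weights are arbitrary.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])       # discretized double integrator (one axis)
B = np.array([[0.5 * dt**2],
              [dt]])
Q = np.eye(2)                    # state weighting (assumed)
R = np.array([[1.0]])            # control weighting (assumed)

# Iterate the Riccati difference equation until P converges,
# then K is the steady-state LQR feedback gain.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The local closed loop A - B K should be stable:
# all eigenvalues strictly inside the unit circle.
eigs = np.linalg.eigvals(A - B @ K)
print(np.abs(eigs))
```

In the multilevel scheme, three such local designs would be combined with a global control term that reduces, or nullifies, the interaction torques between the axes.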
A more accurate modeling of the effects of actuators in large space structures
NASA Technical Reports Server (NTRS)
Hablani, H. B.
1981-01-01
The paper deals with finite actuators. A nonspinning three-axis stabilized space vehicle having a two-dimensional large structure and a rigid body at the center is chosen for analysis. The torquers acting on the vehicle are modeled as antisymmetric forces distributed in a small but finite area. In the limit they represent point torquers which also are treated as a special case of surface distribution of dipoles. Ordinary and partial differential equations governing the forced vibrations of the vehicle are derived by using Hamilton's principle. Associated modal inputs are obtained for both the distributed moments and the distributed forces. It is shown that the finite torquers excite the higher modes less than the point torquers. Modal cost analysis proves to be a suitable methodology to this end.
Design of shape memory alloy actuated intelligent parabolic antenna for space applications
NASA Astrophysics Data System (ADS)
Kalra, Sahil; Bhattacharya, Bishakh; Munjal, B. S.
2017-09-01
The deployment of large flexible antennas is becoming critical for space applications today. Such antenna systems can be reconfigured in space for variable antenna footprint, and hence can be utilized for signal transmission to different geographic locations. Due to quasi-static shape change requirements, coupled with the demand of large deflection, shape memory alloy (SMA) based actuators are uniquely suitable for this system. In this paper, we discuss the design and development of a reconfigurable parabolic antenna structure. The reflector skin of the antenna is vacuum formed using a metalized polycarbonate shell. Two different strategies are chosen for the antenna actuation. Initially, an SMA wire based offset network is formed on the back side of the reflector. A computational model is developed using equivalent coefficient of thermal expansion (ECTE) for the SMA wire. Subsequently, the interaction between the antenna and SMA wire is modeled as a constrained recovery system, using a 1D modified Brinson model. Joule effect based SMA phase transformation is considered for the relationship between input voltage and temperature at the SMA wire. The antenna is modeled using ABAQUS based finite element methodology. The deflection found through the computational model is compared with that measured in experiment. Subsequently, a point-wise actuation system is developed for higher deflection. For power-minimization, an auto-locking device is developed. The performance of the new configuration is compared with the offset-network configuration. It is envisaged that the study will provide a comprehensive procedure for the design of intelligent flexible structures especially suitable for space applications.
Particle Tracing Modeling with SHIELDS
NASA Astrophysics Data System (ADS)
Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.
2017-12-01
The near-Earth inner magnetosphere, where most of the nation's civilian and military space assets operate, is an extremely hazardous region of the space environment which poses major risks to our space infrastructure. Failure of satellite subsystems, or even total failure of a spacecraft, can arise for a variety of reasons, some of which are related to the space environment: space weather events like single-event upsets and deep dielectric charging caused by high-energy particles, or surface charging caused by low- to medium-energy particles; other space hazards are collisions with natural or man-made space debris, or intentional hostile acts. A recently funded project through the Los Alamos National Laboratory (LANL) Directed Research and Development (LDRD) program aims at developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms, the SHIELDS framework. The project's goals are to understand the dynamics of the surface charging environment (SCE), i.e., the hot (keV) electrons, on both macro- and microscales. These challenging problems are addressed using a team of world-class experts and state-of-the-art physics-based models and computational facilities. We present first results of a coupled BATS-R-US/RAM-SCB/Particle Tracing Model used to evaluate particle fluxes in the inner magnetosphere. We demonstrate that this setup is capable of capturing the earthward particle acceleration process resulting from dipolarization events in the tail region of the magnetosphere.
Engineering charge ordering into multiferroicity
NASA Astrophysics Data System (ADS)
He, Xu; Jin, Kui-juan
2016-04-01
Multiferroic materials have attracted great interest but are rare in nature. In many transition-metal oxides, charge ordering and magnetic ordering coexist, so a method of engineering charge-ordered materials into ferroelectric materials would lead to a large class of multiferroic materials. We propose a strategy for designing new ferroelectric or even multiferroic materials by inserting a spacing layer between every two layers of a charge-ordered material, artificially making a superlattice. One example of the model demonstrated here is the perovskite (LaFeO3)2/LaTiO3 (111) superlattice, in which the LaTiO3 layer acts as the donor and the spacing layer, and the LaFeO3 layer is half doped and exhibits charge ordering. The collaboration of the charge ordering and the spacing layer breaks the space inversion symmetry, resulting in a large ferroelectric polarization. As the charge ordering also leads to a ferrimagnetic structure, (LaFeO3)2/LaTiO3 is multiferroic. It is expected that this work can encourage the design and experimental implementation of a large class of multiferroic structures with novel properties.
Battery-powered thin film deposition process for coating telescope mirrors in space
NASA Astrophysics Data System (ADS)
Sheikh, David A.
2016-07-01
Aluminum films manufactured in the vacuum of space may increase the broadband reflectance response of a space telescope operating in the EUV (50-nm to 115-nm) by eliminating absorbing metal-fluorides and metal-oxides, which significantly reduce aluminum's reflectance below 115-nm. Recent developments in battery technology allow small lithium batteries to rapidly discharge large amounts of energy. It is therefore conceivable to power an array of resistive evaporation filaments in a space environment, using a reasonable mass of batteries and other hardware. This paper presents modeling results for coating thickness as a function of position, for aluminum films made with a hexagonal array of battery powered evaporation sources. The model is based on measured data from a single battery-powered evaporation source.
The Unknown Hydrogen Exosphere: Space Weather Implications
NASA Astrophysics Data System (ADS)
Krall, J.; Glocer, A.; Fok, M.-C.; Nossal, S. M.; Huba, J. D.
2018-03-01
Recent studies suggest that the hydrogen (H) density in the exosphere and geocorona might differ from previously assumed values by factors as large as 2. We use the SAMI3 (Sami3 is Also a Model of the Ionosphere) and Comprehensive Inner Magnetosphere-Ionosphere models to evaluate scenarios where the hydrogen density is reduced or enhanced, by a factor of 2, relative to values given by commonly used empirical models. We show that the rate of plasmasphere refilling following a geomagnetic storm varies nearly linearly with the hydrogen density. We also show that the ring current associated with a geomagnetic storm decays more rapidly when H is increased. With respect to these two space weather effects, increased exosphere hydrogen density is associated with reduced threats to space assets during and following a geomagnetic storm.
A strategy for the observation of volcanism on Earth from space.
Wadge, G
2003-01-15
Heat, strain, topography and atmospheric emissions associated with volcanism are well observed by satellites orbiting the Earth. Gravity and electromagnetic transients from volcanoes may also prove to be measurable from space. The nature of eruptions means that the best strategy for measuring their dynamic properties remotely from space is to employ two modes with different spatial and temporal samplings: eruption mode and background mode. Such observational programmes are best carried out at local or regional volcano observatories by coupling them with numerical models of volcanic processes. Eventually, such models could become multi-process, operational forecast models that assimilate the remote and other observables to constrain their uncertainties. The threat posed by very large magnitude explosive eruptions is global and best addressed by a spaceborne observational programme with a global remit.
Control by model error estimation
NASA Technical Reports Server (NTRS)
Likins, P. W.; Skelton, R. E.
1976-01-01
Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).
Highlights of Space Weather Services/Capabilities at NASA/GSFC Space Weather Center
NASA Technical Reports Server (NTRS)
Fok, Mei-Ching; Zheng, Yihua; Hesse, Michael; Kuznetsova, Maria; Pulkkinen, Antti; Taktakishvili, Aleksandre; Mays, Leila; Chulaki, Anna; Lee, Hyesook
2012-01-01
The importance of space weather has been recognized world-wide. Our society depends increasingly on technological infrastructure, including the power grid as well as satellites used for communication and navigation. Such technologies, however, are vulnerable to space weather effects caused by the Sun's variability. NASA GSFC's Space Weather Center (SWC) (http://science.gsfc.nasa.gov//674/swx services/swx services.html) has developed space weather products/capabilities/services that not only respond to NASA's needs but also address broader interests by leveraging the latest scientific research results and state-of-the-art models hosted at the Community Coordinated Modeling Center (CCMC: http://ccmc.gsfc.nasa.gov). By combining forefront space weather science and models, employing an innovative and configurable dissemination system (iSWA.gsfc.nasa.gov), taking advantage of scientific expertise both in-house and from the broader community, and fostering and actively participating in multilateral collaborations both nationally and internationally, the NASA/GSFC Space Weather Center, as a sibling organization to CCMC, is poised to address NASA's space weather needs (and the needs of various partners) and to help enhance space weather forecasting capabilities collaboratively. With a large number of state-of-the-art physics-based models running in real time covering the whole space weather domain, it offers predictive capabilities and a comprehensive view of space weather events throughout the solar system. In this paper, we provide some highlights of our service products/capabilities. In particular, we take the 23 January and the 27 January space weather events as examples to illustrate how the iSWA system can be used to track them in interplanetary space and forecast their impacts.
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Sunkel, John W.
1990-01-01
An attitude-control and momentum-management (ACMM) system for the Space Station in a large-angle torque-equilibrium-attitude (TEA) configuration is developed analytically and demonstrated by means of numerical simulations. The equations of motion for a rigid-body Space Station model are outlined; linearized equations for an arbitrary TEA (resulting from misalignment of control and body axes) are derived; the general requirements for an ACMM system are summarized; and a pole-placement linear-quadratic regulator solution based on scheduled gains is proposed. Results are presented in graphs for (1) simulations based on configuration MB3 (showing the importance of accounting for the cross-inertia terms in the TEA estimate) and (2) simulations of a stepwise change from configuration MB3 to the 'assembly complete' stage over 130 orbits (indicating that the present ACMM scheme maintains sufficient control over slowly varying Space Station dynamics).
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
To address the poor performance of the five-parameter semi-empirical model in fitting densely measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials is clearly shown, demonstrating the refined model's ability to characterize materials.
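The parameter-inversion step can be illustrated with a toy evolutionary (mutation-only) search fitting a two-parameter specular lobe to sampled "measurements". The model, data, and search settings below are all invented stand-ins, far simpler than the six-parameter BRDF and genetic algorithm discussed above; only the invert-by-minimizing-fit-error idea carries over.

```python
import math
import random

random.seed(1)

def model(theta, kd, ks, m):
    """Toy reflectance: diffuse floor kd plus a Gaussian specular lobe."""
    return kd + ks * math.exp(-theta ** 2 / (2.0 * m ** 2))

# Synthetic "measurements" generated from known true parameters.
thetas = [0.05 * k for k in range(20)]          # angles in radians
true = (0.1, 0.8, 0.2)                          # kd, ks, m (assumed)
data = [model(t, *true) for t in thetas]

def loss(params):
    return sum((model(t, *params) - d) ** 2 for t, d in zip(thetas, data))

# Bare-bones mutation-only evolutionary search over (kd, ks, m).
best = (0.5, 0.5, 0.5)
best_loss = loss(best)
for _ in range(3000):
    cand = tuple(max(1e-3, p + random.gauss(0.0, 0.02)) for p in best)
    cand_loss = loss(cand)
    if cand_loss < best_loss:
        best, best_loss = cand, cand_loss

print(best, best_loss)
```

A full genetic algorithm adds a population, crossover, and selection on top of this mutation step, which helps when the fit-error surface has multiple minima.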
Characterization of Inactive Rocket Bodies Via Non-Resolved Photometric Data
NASA Astrophysics Data System (ADS)
Linares, R.; Palmer, D.; Thompson, D.; Klimenko, A.
2014-09-01
Recent events in space, including the collision of Russia's Cosmos 2251 satellite with Iridium 33 and China's Feng Yun 1C anti-satellite demonstration, have stressed the capabilities of the Space Surveillance Network (SSN) and its ability to provide accurate and actionable impact probability estimates. The SSN has the unique challenge of tracking more than 18,000 resident space objects (RSOs) and providing critical collision avoidance warnings to military, NASA, and commercial systems. However, due to the large number of RSOs and the limited number of sensors available to track them, it is impossible to maintain persistent surveillance. Observation gaps result in large propagation intervals between measurements and close approaches. Coupled with nonlinear RSO dynamics, this makes it difficult to model the probability distribution functions (pdfs) of the RSOs. In particular, low-Earth orbiting (LEO) satellites are heavily influenced by atmospheric drag, which is very difficult to model accurately. A number of atmospheric models exist, which can be classified as either empirical or physics-based. The current Air Force standard is the High Accuracy Satellite Drag Model (HASDM), an empirical model based on observations of calibration satellites. These satellite observations are used to determine model parameters based on their orbit determination solutions. Atmospheric orbits are perturbed by a number of factors including the drag coefficient, attitude, and shape of the space object. The satellites used for the HASDM calibration process are chosen for their relatively simple shapes, to minimize errors introduced by shape mis-modeling. Under this requirement, the number of satellites that can be used for calibrating the atmospheric models is limited.
Los Alamos National Laboratory (LANL) has established a research effort, called IMPACT (Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking), to improve impact assessment via improved physics-based modeling. As part of this effort, calibration satellite observations are used to dynamically calibrate the physics-based model and to improve its forecasting capability. The observations are collected from a variety of sources, including LANL's own Raven-class optical telescope. This system collects both astrometric and photometric data on space objects. The photometric data will be used to estimate the space objects' attitude and shape. Non-resolved photometric data have been studied by many as a mechanism for space object characterization. Photometry is the measurement of an object's flux, or apparent brightness, over a wavelength band. The temporal variation of photometric measurements is referred to as the photometric signature. The photometric optical signature of an object contains information about its shape, attitude, size, and material composition. This work focuses on processing the data collected with LANL's telescope in an effort to use photometric data to expand the number of space objects that can serve as calibration satellites. A nonlinear least-squares estimator is used to determine the attitude and angular velocity of the space object; a number of real-data examples using inactive space objects are shown, with good estimation results.
NASA Technical Reports Server (NTRS)
Adams, Louis R.
1987-01-01
The design requirements for a truss beam model are reviewed. The concept behind the beam is described. Pertinent analysis and studies concerning beam definition, deployment loading, joint compliance, etc. are given. Design, fabrication and assembly procedures are discussed.
Improving orbit prediction accuracy through supervised machine learning
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli
2018-05-01
Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve the accuracy required for collision avoidance and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than current methods. Inspired by machine learning (ML) theory, through which models are learned from large amounts of observed data and prediction is conducted without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on one RSO can be applied to other RSOs that share some common features.
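The idea of wrapping a learned correction around a physics-based propagator can be shown in miniature. In the hedged sketch below (not the paper's actual test bed or learning algorithm: the drift term, noise level, and one-feature least-squares fit are all invented for illustration), a correction learned from past residuals improves a deliberately incomplete dynamics model at future epochs, echoing generalization type (2) from the abstract.

```python
import random

# "Truth" includes a drag-like quadratic drift that the crude physics model
# omits; the learning step fits the residual and corrects future predictions.
def truth(t):
    return 7000.0 + 1.5 * t + 0.02 * t * t   # position with unmodeled drift

def physics_model(t):
    return 7000.0 + 1.5 * t                  # linear-only physics prediction

rng = random.Random(0)
train_t = [float(t) for t in range(50)]
# Past residuals between observed truth and physics prediction, with noise
resid = [truth(t) - physics_model(t) + rng.gauss(0, 0.05) for t in train_t]

# Fit residual ~ a * t^2 by closed-form least squares (single feature)
num = sum((t * t) * r for t, r in zip(train_t, resid))
den = sum((t * t) ** 2 for t in train_t)
a = num / den

def corrected(t):
    # Physics-based prediction plus the learned residual correction
    return physics_model(t) + a * t * t

# Evaluate at future epochs outside the training interval
test_t = [60.0, 80.0, 100.0]
err_phys = max(abs(truth(t) - physics_model(t)) for t in test_t)
err_ml = max(abs(truth(t) - corrected(t)) for t in test_t)
```

The physics-only error grows quadratically with the prediction horizon, while the corrected model stays near the noise floor, which is the qualitative behavior the hybrid physics-plus-ML approach targets.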
Design and Experimental Verification of Deployable/Inflatable Ultra-Lightweight Structures
NASA Technical Reports Server (NTRS)
Pai, P. Frank
2004-01-01
Because the launch cost of a space structural system is often proportional to the launch volume and mass, and there is no significant gravity in space, NASA's space exploration programs and various science missions have stimulated extensive use of ultra-lightweight deployable/inflatable structures. These structures are named here Highly Flexible Structures (HFSs) because they are designed to undergo large displacements, rotations, and/or buckling without plastic deformation under normal operating conditions. Beyond recent applications to space structural systems, HFSs have been used in many mechanical systems, civil structures, aerospace vehicles, home appliances, and medical devices to satisfy space limitations, provide special mechanisms, and/or reduce structural weight. The extensive use of HFSs in today's structural engineering reveals the need for design and analysis software and a database system with design guidelines that let practicing engineers perform computer-aided design and rapid prototyping of HFSs. Preparing engineering students for future structural engineering also requires a new, easy-to-understand method of presenting the complex mathematics of the modeling and analysis of HFSs. However, because of their high flexibility, many unique and challenging problems in the modeling, design, and analysis of HFSs need to be studied.
The current state of research on HFSs needs advances in the following areas: (1) modeling of large rotations using appropriate strain measures, (2) modeling of cross-section warpings of structures, (3) how to account for both large rotations and cross-section warpings in 2D (two-dimensional) and 1D structural theories, (4) modeling of thickness thinning of membranes due to inflation pressure, pretension, and temperature change, (5) prediction of inflated shapes and wrinkles of inflatable structures, (6) development of efficient numerical methods for nonlinear static and dynamic analyses, and (7) filling the gap between geometrically exact elastic analysis and elastoplastic analysis. The objectives of this research project were: (1) to study the modeling, design, and analysis of deployable/inflatable ultra-lightweight structures, (2) to perform numerical and experimental studies on the static and dynamic characteristics and deployability of HFSs, (3) to derive guidelines for designing HFSs, (4) to develop a MATLAB toolbox for the design, analysis, and dynamic animation of HFSs, and (5) to perform experiments and establish an adequate database of post-buckling characteristics of HFSs.
A methodology for selecting optimum organizations for space communities
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1978-01-01
This paper suggests that a methodology exists for selecting optimum organizations for future space communities of various sizes and purposes. Results of an exploratory study to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists are presented. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The principal finding of this research was that a four-level project type 'total matrix' model will optimize the effectiveness of Space Base technologists. An overall conclusion which can be reached from the research is that application of this methodology, or portions of it, may provide planning insights for the formal organizations which will be needed during the Space Industrialization Age.
Ultra-low current beams in UMER to model space-charge effects in high-energy proton and ion machines
NASA Astrophysics Data System (ADS)
Bernal, S.; Beaudoin, B.; Baumgartner, H.; Ehrenstein, S.; Haber, I.; Koeth, T.; Montgomery, E.; Ruisard, K.; Sutter, D.; Yun, D.; Kishek, R. A.
2017-03-01
The University of Maryland Electron Ring (UMER) has traditionally operated in the regime of strongly space-charge dominated beam transport, but small-current beams are desirable to significantly reduce the direct (incoherent) space-charge tune shift as well as the tune depression. This regime is of interest for modeling space-charge effects in large proton and ion rings similar to those used in nuclear physics and spallation neutron sources, and also for nonlinear dynamics studies of lattices inspired by the Integrable Optics Test Accelerator (IOTA). We review the definitions of beam vs. space-charge intensity and discuss three methods for producing very small beam currents in UMER. We aim to generate 60 µA to 1.0 mA, 100 ns, 10 keV beams with normalized rms emittances of the order of 0.1-1.0 µm.
Concurrent processing simulation of the space station
NASA Technical Reports Server (NTRS)
Gluck, R.; Hale, A. L.; Sunkel, John W.
1989-01-01
The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, each requiring significant advancement of the state of the art: (1) the development of an explicit mathematical model, via symbol manipulation, of a flexible multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load increases. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.
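The load-balancing task in step (2) can be illustrated with a standard greedy heuristic. The sketch below uses longest-processing-time-first assignment onto eight processors (the CAPPS processor count from the abstract); the per-task costs and the choice of heuristic are assumptions for illustration, not the study's actual balancing method.

```python
import heapq

def balance_load(costs, n_proc=8):
    """Greedy longest-processing-time (LPT) assignment of per-task
    computation costs to processors: sort tasks largest first, and give
    each task to the currently least-loaded processor."""
    heap = [(0.0, p, []) for p in range(n_proc)]   # (load, proc id, tasks)
    heapq.heapify(heap)
    for cost in sorted(costs, reverse=True):       # largest tasks first
        load, p, tasks = heapq.heappop(heap)       # least-loaded processor
        tasks.append(cost)
        heapq.heappush(heap, (load + cost, p, tasks))
    return heap

# Example: uneven per-body evaluation costs for a flexible multibody model
# (hypothetical numbers, in arbitrary time units)
costs = [9.0, 7.5, 6.0, 6.0, 5.5, 4.0, 3.5, 3.0, 2.5, 2.0, 1.5, 1.0]
assignment = balance_load(costs, n_proc=8)
loads = sorted(load for load, _, _ in assignment)
```

The makespan (largest per-processor load) bounds the concurrent step time; LPT is a simple way to keep it close to the ideal of total cost divided by processor count.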
ICF Implosions, Space-Charge Electric Fields, and Their Impact on Mix and Compression
NASA Astrophysics Data System (ADS)
Knoll, Dana; Chacon, Luis; Simakov, Andrei
2013-10-01
The single-fluid, quasi-neutral radiation hydrodynamics codes used to design the NIF targets predict thermonuclear ignition for the conditions that have been achieved experimentally. A logical conclusion is that the physics model used in these codes is missing one or more key phenomena. Two key model-experiment inconsistencies on NIF are: 1) a lower implosion velocity than predicted by the design codes, and 2) transport of pusher material deep into the hot spot. We hypothesize that both of these inconsistencies may result from a large space-charge electric field residing on the distinct interfaces in a NIF target. Large space-charge fields have been experimentally observed in Omega experiments. Given our hypothesis, this presentation will: 1) develop a more complete physics picture of the initiation, sustainment, and dissipation of a current-driven plasma sheath / double layer at the fuel-pusher interface of an ablating plastic shell implosion on Omega; 2) characterize the mix that can result from a double-layer field at the fuel-pusher interface, prior to the onset of fluid instabilities; and 3) quantify the impact of the double-layer-induced surface tension at the fuel-pusher interface on the peak observed implosion velocity in Omega.
Large Space Antenna Systems Technology, 1984
NASA Technical Reports Server (NTRS)
Boyer, W. J. (Compiler)
1985-01-01
Mission applications for large space antenna systems; large space antenna structural systems; materials and structures technology; structural dynamics and control technology; electromagnetics technology; large space antenna systems and the Space Station; and flight test and evaluation were examined.
Model Selection for Monitoring CO2 Plume during Sequestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-12-31
The model selection method developed as part of this project includes four main steps: (1) assessing the connectivity/dynamic characteristics of a large prior ensemble of models, (2) clustering the models using multidimensional scaling coupled with k-means clustering, (3) selecting models using Bayes' rule in the reduced model space, and (4) expanding the model set by iteratively resampling the posterior models. The fourth step expresses one of the advantages of the method: it provides a built-in means of quantifying the uncertainty in predictions made with the selected models. In our application to plume monitoring, by expanding the posterior space of models, the final ensemble of representations of the geological model can be used to assess the uncertainty in predicting the future displacement of the CO2 plume. The software implementation of this approach is attached here.
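Step (2) above, clustering models in a reduced coordinate space, can be sketched with a plain k-means on points standing in for MDS coordinates. Everything below (the 2-D synthetic "model coordinates", the farthest-point initialization, and the nearest-to-centroid representative selection) is an illustrative assumption, not the project's implementation.

```python
import random

def dist2(a, b):
    # Squared Euclidean distance between two coordinate tuples
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(group):
    n = len(group)
    return tuple(sum(c) / n for c in zip(*group))

def init_centers(points, k):
    # Deterministic farthest-point initialization
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    return centers

def kmeans(points, k, iters=25):
    centers = init_centers(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                 # assign each model to nearest center
            groups[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        centers = [centroid(g) if g else centers[j] for j, g in enumerate(groups)]
    return centers, groups

# Two well-separated clouds of prior models in a 2-D reduced space
rng = random.Random(1)
cloud = lambda cx, cy: [(cx + rng.gauss(0, 0.1), cy + rng.gauss(0, 0.1))
                        for _ in range(30)]
points = cloud(0.0, 0.0) + cloud(3.0, 3.0)
centers, groups = kmeans(points, k=2)
# One representative model per cluster: the member nearest its centroid
reps = [min(g, key=lambda p: dist2(p, c)) for c, g in zip(centers, groups)]
```

Selecting the cluster member nearest each centroid yields a small set of representative models; in the project's workflow, Bayes' rule would then weight these clusters rather than every prior model individually.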
The Role of Integrated Modeling in the Design and Verification of the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Mosier, Gary E.; Howard, Joseph M.; Johnston, John D.; Parrish, Keith A.; Hyde, T. Tupper; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. System-level verification of critical optical performance requirements will rely on integrated modeling to a considerable degree. In turn, requirements for the accuracy of the models are significant. The size of the lightweight observatory structure, coupled with the need to test at cryogenic temperatures, effectively precludes validation of the models and verification of optical performance with a single test in 1 g. Rather, a complex series of steps is planned by which the components of the end-to-end models are validated at various levels of subassembly, and the ultimate verification of optical performance is by analysis using the assembled models. This paper describes the critical optical performance requirements driving the integrated modeling activity, shows how the error budget is used to allocate and track contributions to total performance, and presents examples of integrated modeling methods and results that support the preliminary observatory design. Finally, the concepts for model validation and the role of integrated modeling in the ultimate verification of the observatory are described.
Synergia: an accelerator modeling tool with 3-D space charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amundson, James F.; Spentzouris, P.; /Fermilab
2004-07-01
High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.
Determination of Acreage Thermal Protection Foam Loss From Ice and Foam Impacts
NASA Technical Reports Server (NTRS)
Carney, Kelly S.; Lawrence, Charles
2015-01-01
A parametric study was conducted to establish Thermal Protection System (TPS) loss from foam and ice impact conditions similar to what might occur on the Space Launch System. This study was based upon the large amount of testing and analysis conducted with both ice and foam debris impacts on TPS acreage foam for the Space Shuttle Project External Tank. Test-verified material models and modeling techniques that resulted from the Space Shuttle related testing were utilized for this parametric study. Parameters varied include projectile mass, impact velocity, and impact angle (5-degree and 10-degree impacts). The amount of TPS acreage foam loss resulting from the various impact conditions is presented.
Space-time extreme wind waves: Observation and analysis of shapes and heights
NASA Astrophysics Data System (ADS)
Benetazzo, Alvise; Barbariol, Francesco; Bergamasco, Filippo; Carniel, Sandro; Sclavo, Mauro
2016-04-01
We analyze here the temporal shape and the maximal height of extreme wind waves, obtained from an observational space-time sample of sea surface elevations during a mature, short-crested sea state (Benetazzo et al., 2015). Space-time wave data are processed to detect the largest waves of specific 3-D wave groups close to the apex of their development. First, the maximal elevations of the groups are discussed within the framework of space-time (ST) extreme statistical models of random wave fields (Adler and Taylor, 2007; Benetazzo et al., 2015; Fedele, 2012). Results of the ST models are also compared with observations and with predictions of maxima based on time series of sea surface elevations. Second, the time profile of the extreme waves around the maximal crest height is analyzed and compared with the expectations of the linear (Boccotti, 1983) and second-order nonlinear extension (Arena, 2005) of the Quasi-Determinism (QD) theory. The main purpose is to verify to what extent, using the QD model results, one can estimate the shape and the crest-to-trough height of large waves in a random ST wave field. From the results presented, it emerges that, apart from the displacements around the crest apex, the sea surface elevations of very high waves are greatly dispersed around a mean profile. Yet the QD model furnishes, on average, a fair prediction of the height of the maximal waves, especially when nonlinearities are taken into account. Moreover, the combination of ST and QD model predictions allows establishing, for a given sea condition, a framework for the representation of waves with very large crest heights. The results also have the potential to be implemented in a phase-averaged numerical wave model (see abstract EGU2016-14008 and Barbariol et al., 2015).
References:
- Adler, R.J., Taylor, J.E., 2007. Random Fields and Geometry. Springer, New York (USA), 448 pp.
- Arena, F., 2005. On non-linear very large sea wave groups. Ocean Eng. 32, 1311-1331.
- Barbariol, F., Alves, J.H.G., Benetazzo, A., Bergamasco, F., Bertotti, L., Carniel, S., Cavaleri, L., Chao, Y.Y., Chawla, A., Ricchi, A., Sclavo, M., Tolman, H., 2015. Space-Time Wave Extremes in WAVEWATCH III: Implementation and Validation for the Adriatic Sea Case Study. In: 14th International Workshop on Wave Hindcasting and Forecasting, November 8-13, Key West, Florida (USA).
- Benetazzo, A., Barbariol, F., Bergamasco, F., Torsello, A., Carniel, S., Sclavo, M., 2015. Observation of extreme sea waves in a space-time ensemble. J. Phys. Oceanogr. 45, 2261-2275.
- Boccotti, P., 1983. Some new results on statistical properties of wind waves. Appl. Ocean Res. 5, 134-140.
- Fedele, F., 2012. Space-Time Extremes in Short-Crested Storm Seas. J. Phys. Oceanogr. 42, 1601-1615.
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
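The matrix-exponentiation baseline that this passage contrasts against can be made concrete for a small chain. The sketch below computes the transition probability matrix P(t) = exp(Qt) by scaling and squaring with a truncated Taylor series for a hypothetical 3-state birth-death generator; for state spaces of realistic size, this dense cubic-cost-per-multiply approach is exactly what the abstract says becomes infeasible.

```python
def mat_mul(A, B):
    # Dense matrix product (O(n^3) -- the bottleneck for large state spaces)
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(Q, t, scaling=10, terms=20):
    """P(t) = exp(Qt) via scaling and squaring with a truncated Taylor
    series: exponentiate Qt / 2^scaling, then square back up."""
    n = len(Q)
    s = 2 ** scaling
    A = [[Q[i][j] * t / s for j in range(n)] for i in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]  # A^k / k!
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(scaling):                                    # square back up
        P = mat_mul(P, P)
    return P

# Toy 3-state birth-death generator (each row sums to zero)
Q = [[-1.0, 1.0, 0.0],
     [0.5, -1.5, 1.0],
     [0.0, 2.0, -2.0]]
P = expm(Q, t=0.7)
```

Because Q is a valid generator, exp(Qt) is a stochastic matrix: every row of P sums to one and all entries lie in [0, 1], which is a quick correctness check for any transition-probability computation.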
Techniques for Down-Sampling a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to down-sample a measured surface map for model validation without introducing re-sampling errors, while also suppressing the existing measurement noise and measurement errors. The two new techniques implemented in the software tool can be used in any optical model validation process involving large space optical surfaces.
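The abstract does not disclose the two techniques, but the noise-suppression property it mentions can be illustrated with the simplest scheme of this kind: integer-factor block averaging, where each output pixel is the mean of an f-by-f block, so uncorrelated measurement noise is attenuated by the averaging. This is a hypothetical stand-in, not the tool's actual algorithm.

```python
def downsample(surface, f):
    """Down-sample a 2-D height map by an integer factor f via block
    averaging. Each output pixel averages f*f input samples, which
    reduces the standard deviation of uncorrelated noise by a factor f."""
    rows, cols = len(surface), len(surface[0])
    out = []
    for i in range(0, rows - rows % f, f):
        out.append([
            sum(surface[i + di][j + dj] for di in range(f) for dj in range(f))
            / (f * f)
            for j in range(0, cols - cols % f, f)
        ])
    return out

# Example: a 4x4 height map down-sampled by a factor of 2
m = [[1, 1, 2, 2],
     [1, 1, 2, 2],
     [3, 3, 4, 4],
     [3, 3, 4, 4]]
small = downsample(m, 2)   # [[1.0, 2.0], [3.0, 4.0]]
```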
NASA Astrophysics Data System (ADS)
Evans, William Todd; Neely, Kelsay E.; Strauss, Alvin M.; Cook, George E.
2017-11-01
Friction Stir Welding has been proposed as an efficient and appropriate method for in-space welding. It has the potential to serve as a viable option for assembling large-scale space structures. These large structures will require the use of materials naturally available in space, such as those from iron meteorites. Impurities present in most iron meteorites limit their ability to be welded by other space welding techniques such as electron beam or laser welding. This study investigates the ability to weld pieces of Campo del Cielo meteorites by Friction Stir Spot Welding (FSSW). Due to the rarity of the material, low-carbon steel was used as a model material to determine welding parameters. Welded samples of low-carbon steel, invar, and Campo del Cielo meteorite were compared and found to behave in similar ways. This study shows that meteorites can be friction stir spot welded and that they exhibit properties analogous to those of FSSW low-carbon steel welds. Thus, iron meteorites can be regarded as another viable option for in-space or Martian construction.
Improvement of Automated POST Case Success Rate Using Support Vector Machines
NASA Technical Reports Server (NTRS)
Zwack, Mathew R.; Dees, Patrick D.
2017-01-01
During early conceptual design of complex systems, concept down-selection can have a large impact upon program life-cycle cost. Therefore, any concepts selected during early design will inherently commit program costs and affect the overall probability of program success. For this reason it is important to consider as large a design space as possible in order to better inform the down-selection process. For conceptual design of launch vehicles, trajectory analysis and optimization often presents the largest obstacle to evaluating large trade spaces. This is due to the sensitivity of the trajectory discipline to changes in all other aspects of the vehicle design. Small deltas in the performance of other subsystems can result in relatively large fluctuations in the ascent trajectory because the solution space is non-linear and multi-modal. In order to help capture large design spaces for new launch vehicles, the authors have performed previous work seeking to automate the execution of the industry-standard tool, Program to Optimize Simulated Trajectories (POST). This work initially focused on implementation of analyst heuristics to enable closure of cases in an automated fashion, with the goal of applying the concepts of design of experiments (DOE) and surrogate modeling to enable near-instantaneous throughput of vehicle cases [3]. As noted in [4], work was then completed to improve the DOE process by utilizing a graph-theory-based approach to connect similar design points.
Evolution of crop production under a pseudo-space environment using model plants, Lotus japonicus
NASA Astrophysics Data System (ADS)
Tomita-Yokotani, Kaori; Motohashi, Kyohei; Omi, Naomi; Sato, Seigo; Aoki, Toshio; Hashimoto, Hirofumi; Yamashita, Masamichi
Habitation in outer space is one of our challenges. We have been studying space agriculture and/or spacecraft agriculture to provide food and oxygen for the habitation area in the space environment. However, careful investigation should be made concerning the effects of an exotic environment on the endogenous production of biologically active substances in individual cultivated plants in a space environment. We have already reported that the production of functional substances in plants cultivated as crops is affected by gravity. The amounts of the main physiological substances in these plants grown under terrestrial control were different from those grown in pseudo-microgravity. These results suggest that the nutrition provided by plants/crops grown in the space environment will be changed when human beings eat them in space. This estimation requires us to investigate each of the useful components produced by each plant grown in the space environment, involving several fields of study, including nutrition and plant physiology. On the other hand, the analysis of model plant genomes has recently advanced remarkably. Lotus japonicus, a leguminous plant, is one of these model plants. The Leguminosae are a large family in the plant kingdom, and almost the entire genome sequence of Lotus japonicus has been determined. Nitrogen fixation would be possible even in a space environment. We are trying to determine the best conditions and evolution for crop production using these model plants.
Feldman, Daniel; Liu, Zuowei; Nath, Pran
2007-12-21
The minimal supersymmetric standard model with soft breaking has a large landscape of supersymmetric particle mass hierarchies. This number is reduced significantly in well-motivated scenarios such as minimal supergravity and alternatives. We carry out an analysis of the landscape for the first four lightest particles and identify at least 16 mass patterns, and provide benchmarks for each. We study the signature space for the patterns at the CERN Large Hadron Collider by analyzing the leptons + (>= 2 jets) + missing P_T signals with 0, 1, 2, and 3 leptons. Correlations in missing P_T are also analyzed. It is found that even with 10 fb^-1 of data a significant discrimination among patterns emerges.
COI Structural Analysis Presentation
NASA Technical Reports Server (NTRS)
Cline, Todd; Stahl, H. Philip (Technical Monitor)
2001-01-01
This report discusses the structural analysis of the Next Generation Space Telescope Mirror System Demonstrator (NMSD) developed by Composite Optics Incorporated (COI) in support of the Next Generation Space Telescope (NGST) project. The mirror was submitted to Marshall Space Flight Center (MSFC) for cryogenic testing and evaluation. Once at MSFC, the mirror was cooled to approximately 40 K and the optical surface distortions were measured. Alongside this experiment, an analytical model was developed and used for comparison with the test results. A NASTRAN finite element model was provided by COI and a thermal model was developed from it. Using the thermal model, steady-state nodal temperatures were calculated based on the predicted environment of the large cryogenic test chamber at MSFC. This temperature distribution was applied in the structural analysis to solve for the deflections of the optical surface. Finally, these deflections were submitted for optical analysis and comparison to the interferometer test data.
Assessing global vegetation activity using spatio-temporal Bayesian modelling
NASA Astrophysics Data System (ADS)
Mulder, Vera L.; van Eck, Christel M.; Friedlingstein, Pierre; Regnier, Pierre A. G.
2016-04-01
This work demonstrates the potential of modelling vegetation activity using a hierarchical Bayesian spatio-temporal model. This approach allows modelling changes in vegetation and climate simultaneously in space and time. Changes of vegetation activity, such as phenology, are modelled as a dynamic process depending on climate variability in both space and time. Additionally, differences in observed vegetation status can be attributed to other abiotic ecosystem properties, e.g. soil and terrain properties. Although these properties do not change in time, they do change in space and may provide valuable information in addition to the climate dynamics. The spatio-temporal Bayesian models were calibrated at a regional scale because local trends in space and time can be better captured by the model. The regional subsets were defined according to the SREX segmentation, as defined by the IPCC. Each region is considered to be relatively homogeneous in terms of large-scale climate and biomes, while still capturing small-scale (grid-cell level) variability. Modelling within these regions is hence expected to be less uncertain due to the absence of these large-scale patterns, compared to a global approach. This overall modelling approach allows the comparison of model behaviour for the different regions and may provide insights into the main dynamic processes driving the interaction between vegetation and climate within different regions. The data employed in this study encompass global datasets for soil properties (SoilGrids), terrain properties (Global Relief Model based on SRTM DEM and ETOPO), monthly time series of satellite-derived vegetation indices (GIMMS NDVI3g), and climate variables (Princeton Meteorological Forcing Dataset). The findings demonstrated the potential of a spatio-temporal Bayesian modelling approach for assessing vegetation dynamics at a regional scale.
The observed interrelationships of the employed data and the different spatial and temporal trends support our hypothesis. That is, the change of vegetation in space and time may be better understood when modelling vegetation change as both a dynamic and multivariate process. Therefore, future research will focus on a multivariate dynamical spatio-temporal modelling approach. This ongoing research is performed within the context of the project "Global impacts of hydrological and climatic extremes on vegetation" (project acronym: SAT-EX) which is part of the Belgian research programme for Earth Observation Stereo III.
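As a toy illustration of the dynamic-process idea (not the authors' model), the sketch below fits a single-cell AR(1) vegetation-anomaly model with one climate covariate by conjugate Bayesian linear regression; the variable names, priors, and data are all assumptions for illustration.

```python
import numpy as np

def bayesian_ar1_fit(ndvi, climate, prior_var=10.0, noise_var=0.1):
    """Conjugate Bayesian fit of ndvi[t] = a*ndvi[t-1] + b*climate[t] + noise.

    Returns the posterior mean and covariance of (a, b) under a zero-mean
    Gaussian prior -- a toy stand-in for one level of the hierarchy.
    """
    X = np.column_stack([ndvi[:-1], climate[1:]])   # regressors at each step
    y = ndvi[1:]                                    # response
    prior_prec = np.eye(2) / prior_var
    post_cov = np.linalg.inv(prior_prec + X.T @ X / noise_var)
    post_mean = post_cov @ (X.T @ y / noise_var)
    return post_mean, post_cov

# Synthetic check: recover known dynamics a = 0.8, b = 0.5
rng = np.random.default_rng(0)
T = 2000
climate = rng.standard_normal(T)
ndvi = np.zeros(T)
for t in range(1, T):
    ndvi[t] = 0.8 * ndvi[t-1] + 0.5 * climate[t] + 0.1 * rng.standard_normal()
mean, cov = bayesian_ar1_fit(ndvi, climate, noise_var=0.01)
```

In a full hierarchical model the (a, b) coefficients would themselves be drawn from region-level distributions, which is what lets the SREX regions share strength while keeping local dynamics.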
Large Space Antenna Systems Technology, 1984
NASA Technical Reports Server (NTRS)
Boyer, W. J. (Compiler)
1985-01-01
Papers are presented which provide a comprehensive review of space missions requiring large antenna systems and of the status of key technologies required to enable these missions. Topic areas include mission applications for large space antenna systems, large space antenna structural systems, materials and structures technology, structural dynamics and control technology, electromagnetics technology, large space antenna systems and the space station, and flight test and evaluation.
A Semi-Structured MODFLOW-USG Model to Evaluate Local Water Sources to Wells for Decision Support.
Feinstein, Daniel T; Fienen, Michael N; Reeves, Howard W; Langevin, Christian D
2016-07-01
In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A "semi-structured" approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers, with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface-water features are located, overlying coarser-resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, while the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area (see Fienen et al. 2015a). Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
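The semi-structured bookkeeping can be illustrated with a toy parent-cell map (not MODFLOW-USG code): each cell of a 10:1-refined top layer records the index of the coarse cell beneath it, which is the kind of vertical connectivity an unstructured-grid builder needs when only the top layer is refined. Names and the 3×4 example grid are hypothetical.

```python
import numpy as np

def fine_to_parent(nrow_c, ncol_c, refine=10):
    """Map each cell of a refined top layer to its coarse parent cell.

    Returns an integer array of shape (nrow_c*refine, ncol_c*refine)
    holding the flattened index of the parent coarse cell below it,
    mimicking a 10:1 refinement (e.g., 5000-ft parent, 500-ft child).
    """
    coarse_ids = np.arange(nrow_c * ncol_c).reshape(nrow_c, ncol_c)
    # Kronecker product replicates each coarse id into a refine x refine block
    return np.kron(coarse_ids, np.ones((refine, refine), dtype=int))

parents = fine_to_parent(3, 4, refine=10)   # 3x4 coarse -> 30x40 fine
```

Each coarse cell then owns exactly `refine**2` fine children, so vertical flow terms between the refined layer and the coarse layer below can be assembled by summing over children.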
Optics at Langley Research Center.
Crumbly, K H
1970-02-01
The specialized tools of optics have played an important part in Langley's history of aeronautical and space research. Schlieren systems for photographing aeronautics and space models in wind-tunnel investigations have contributed to the available knowledge of aerodynamics. Optics continues to be an important part of Langley's research program, including new techniques for measuring the sensitivity of photomultiplier tubes, spectrographic techniques for radiation measurements of wind-tunnel models, research into large orbiting telescopes, horizon definition by IR radiation measurements, spectra of natural and artificial meteors, measurement of clear air turbulence utilizing lasers, and many others.
NASA Astrophysics Data System (ADS)
Arenberg, Jonathan; Conti, Alberto; Atkinson, Charles
2017-01-01
Pursuing groundbreaking science in a highly cost- and funding-constrained environment presents new challenges to the development of future space astrophysics missions. Within the conventional cost models for large observatories, executing a flagship “mission after next” appears to be unsustainable. Achieving our nation’s space astrophysics ambitions requires new paradigms in system design, development, and manufacture. Implementation of this new paradigm requires that the space astrophysics community adopt new answers to a new set of questions. This paper discusses the origins of these new questions and the steps toward their answers.
Fast Spatio-Temporal Data Mining from Large Geophysical Datasets
NASA Technical Reports Server (NTRS)
Stolorz, P.; Mesrobian, E.; Muntz, R.; Santos, J. R.; Shek, E.; Yi, J.; Mechoso, C.; Farrara, J.
1995-01-01
Use of the UCLA CONQUEST (CONtent-based Querying in Space and Time) system is reviewed for automatic cyclone extraction and for detection of spatio-temporal blocking conditions on massively parallel processors (MPPs). CONQUEST is a data analysis environment for knowledge and data mining that aids high-resolution climate modeling.
Spatial perspectives in state-and-transition models: A missing link to land management?
USDA-ARS?s Scientific Manuscript database
Conceptual models of alternative states and thresholds are based largely on observations of ecosystem processes at a few points in space. Because the distribution of alternative states in spatially-structured ecosystems is the result of variations in pattern-process interactions at different scales,...
A comprehensive space management model for facilitating programmatic research.
Libecap, Ann; Wormsley, Steven; Cress, Anne; Matthews, Mary; Souza, Angie; Joiner, Keith A
2008-03-01
In FY04, the authors developed and implemented models to manage existing and incremental research space, and to facilitate programmatic research, at the University of Arizona College of Medicine. Benchmarks were set for recovery of total sponsored research dollars and for facilities and administrative (F&A) dollars/net square foot (nsf) of space, based on college-wide metrics. Benchmarks were applied to units (departments, centers), rather than to individual faculty. Performance relative to the benchmark was assessed using three-year moving averages, and applied to existing blocks of space. Space was recaptured or allocated, in all cases to programmatic themes, using uniform policies. F&A revenues were returned on the basis of performance relative to a benchmark. During the first two years after implementation of the model (FY05 and FY06), and for the 24 units occupying research space, median total sponsored research revenue/nsf increased from $393.96 to $474.46 (20.4%), and median F&A revenue/nsf increased from $57.42 to $91.86 (60.0%). These large increases in median values were driven primarily by redistribution and recapture of space. Recruiting policies for unit heads were developed to facilitate joint hires among units. In combination, these policies created a comprehensive space management model for facilitating programmatic research. Although challenges remain in implementing the programmatic recruitment strategy, and selected modifications to the original policy were introduced later (e.g., research space for newly recruited junior faculty is now exempted from calculations for three years), overall, the models have created a climate of transparency that is now accepted and that allows efficient and equitable management of research space.
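The benchmark test described above amounts to comparing a trailing three-year moving average of revenue per net square foot against a threshold. A minimal sketch with hypothetical numbers (the benchmark value and revenues are illustrative, not the college's figures):

```python
import numpy as np

def below_benchmark(revenue_per_nsf, benchmark, window=3):
    """Flag each window of fiscal years whose trailing moving average of
    $/nsf falls below the benchmark (a toy version of the space-recapture
    test; real policy would apply this per unit, not per faculty member)."""
    rev = np.asarray(revenue_per_nsf, dtype=float)
    kernel = np.ones(window) / window           # trailing moving average
    ma = np.convolve(rev, kernel, mode="valid")
    return ma < benchmark

# Five hypothetical fiscal years of revenue/nsf against a $390 benchmark
flags = below_benchmark([400, 380, 350, 420, 500], benchmark=390)
```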
Image-based optimization of coronal magnetic field models for improved space weather forecasting
NASA Astrophysics Data System (ADS)
Uritsky, V. M.; Davila, J. M.; Jones, S. I.; MacNeice, P. J.
2017-12-01
The existing space weather forecasting frameworks show a significant dependence on the accuracy of the photospheric magnetograms and the extrapolation models used to reconstruct the magnetic field in the solar corona. Minor uncertainties in the magnetic field magnitude and direction near the Sun, when propagated through the heliosphere, can lead to unacceptable prediction errors at 1 AU. We argue that ground-based and satellite coronagraph images can provide valid geometric constraints that could be used to improve coronal magnetic field extrapolation results, enabling more reliable forecasts of extreme space weather events such as major CMEs. In contrast to previously developed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions up to 1-2 solar radii above the photosphere. By applying the developed image processing techniques to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code developed by S. Jones et al. (ApJ, 2016, 2017). Our tracing results are in good qualitative agreement with the large-scale configuration of the optical corona and lead to a more consistent reconstruction of the large-scale coronal magnetic field geometry, and potentially to more accurate global heliospheric simulation results. Several upcoming data products for the space weather forecasting community will also be discussed.
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, Timothy K.; Chrostowski, Jon D.
1991-01-01
Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and for both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, in both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure fell for the most part within ± one-sigma intervals of predicted accuracy, demonstrating the validity of the methodology and computer code.
An economic analysis of disaggregation of space assets: Application to GPS
NASA Astrophysics Data System (ADS)
Hastings, Daniel E.; La Tour, Paul A.
2017-05-01
New ideas, technologies and architectural concepts are emerging with the potential to reshape the space enterprise. One of those new architectural concepts is the idea that rather than aggregating payloads onto large very high performance buses, space architectures should be disaggregated with smaller numbers of payloads (as small as one) per bus and the space capabilities spread across a correspondingly larger number of systems. The primary rationale is increased survivability and resilience. The concept of disaggregation is examined from an acquisition cost perspective. A mixed system dynamics and trade space exploration model is developed to look at long-term trends in the space acquisition business. The model is used to examine the question of how different disaggregated GPS architectures compare in cost to the well-known current GPS architecture. A generation-over-generation examination of policy choices is made possible through the application of soft systems modeling of experience and learning effects. The assumptions that are allowed to vary are: design lives, production quantities, non-recurring engineering and time between generations. The model shows that there is always a premium in the first generation to be paid to disaggregate the GPS payloads. However, it is possible to construct survivable architectures where the premium after two generations is relatively low.
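One standard way to represent the generation-over-generation learning effects mentioned above (not necessarily the authors' formulation) is Wright's learning curve, under which each doubling of cumulative production multiplies unit cost by a fixed fraction. A minimal sketch, with the 90% learning rate chosen purely for illustration:

```python
import math

def unit_cost(first_unit_cost, n, learning=0.9):
    """Wright learning curve: cost of the n-th unit, where each doubling
    of cumulative production multiplies unit cost by `learning`."""
    b = math.log(learning, 2)          # learning exponent (negative)
    return first_unit_cost * n ** b

def lot_cost(first_unit_cost, quantity, learning=0.9):
    """Total cost of producing `quantity` units under the learning curve."""
    return sum(unit_cost(first_unit_cost, i, learning)
               for i in range(1, quantity + 1))
```

Under such a model, disaggregation trades a first-generation premium (more buses, more non-recurring engineering) against larger production quantities that ride further down the curve, which is consistent with the paper's finding that the premium falls after two generations.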
Large-basis ab initio no-core shell model and its application to ¹²C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navratil, P.; Vary, J. P.; Barrett, B. R.
2000-11-01
We present the framework for the ab initio no-core nuclear shell model and apply it to obtain properties of ¹²C. We derive two-body effective interactions microscopically for specific model spaces from the realistic CD-Bonn and Argonne V8' nucleon-nucleon (NN) potentials. We then evaluate binding energies, excitation spectra, radii, and electromagnetic transitions in the 0ℏΩ, 2ℏΩ, and 4ℏΩ model spaces for the positive-parity states and the 1ℏΩ, 3ℏΩ, and 5ℏΩ model spaces for the negative-parity states. The dependence on the model-space size, on the harmonic-oscillator frequency, and on the type of NN potential used for the effective-interaction derivation is studied. In addition, electromagnetic and weak neutral elastic charge form factors are calculated in the impulse approximation. The sensitivity of the form-factor ratios to the strangeness one-body form-factor parameters and to the influence of isospin-symmetry violation is evaluated and discussed. Agreement between theory and experiment is favorable for many observables, while others require yet larger model spaces and/or three-body forces. The limitations of the present results are easily understood by virtue of the trends established and previous phenomenological results.
A first-order k-space model for elastic wave propagation in heterogeneous media.
Firouzi, K; Cox, B T; Treeby, B E; Saffari, N
2012-09-01
A pseudospectral model of linear elastic wave propagation is described, based on the first-order stress-velocity equations of elastodynamics. k-space adjustments to the spectral gradient calculations are derived from the dyadic Green's function solution to the second-order elastic wave equation and used to (a) ensure the solution is exact for homogeneous wave propagation for time steps of arbitrarily large size, and (b) allow larger time steps without loss of accuracy in heterogeneous media. The formulation in k-space allows the wavefield to be split easily into compressional and shear parts. A perfectly matched layer (PML) absorbing boundary condition was developed to effectively impose a radiation condition on the wavefield. The staggered grid, which is essential for accurate simulations, is described, along with other practical details of the implementation. The model is verified through comparison with exact solutions for canonical examples, and further examples are given to show the efficiency of the method for practical problems. The efficiency of the model is by virtue of the reduced points-per-wavelength requirement, the use of the fast Fourier transform (FFT) to calculate the gradients in k-space, and the larger time steps made possible by the k-space adjustments.
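The key identity behind the k-space adjustment can be sketched in one dimension for the scalar second-order wave equation (a deliberate simplification of the paper's first-order stress-velocity system): replacing the spectral operator −k² by −(2/(cΔt))² sin²(c|k|Δt/2) turns the three-level time step into p̂ⁿ⁺¹ = 2cos(c|k|Δt) p̂ⁿ − p̂ⁿ⁻¹, which is exact for a homogeneous medium at any Δt.

```python
import numpy as np

# 1-D homogeneous medium: the k-space time step is exact for any dt.
n, L, c = 256, 1.0, 1500.0
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
dt = 5 * dx / c                      # CFL number 5: far beyond FDTD limits

# k-space-adjusted second-order update, applied in the spectral domain:
#   p^{n+1} = 2 cos(c|k| dt) p^n - p^{n-1}
prop = 2 * np.cos(c * np.abs(k) * dt)

k0 = 2 * np.pi * 8 / L               # a single rightward-travelling harmonic
p_prev = np.sin(k0 * (x + c * dt))   # p(x, -dt)
p_curr = np.sin(k0 * x)              # p(x, 0)

nsteps = 40
for _ in range(nsteps):
    p_next = np.real(np.fft.ifft(prop * np.fft.fft(p_curr))) - p_prev
    p_prev, p_curr = p_curr, p_next

exact = np.sin(k0 * (x - c * nsteps * dt))
err = np.max(np.abs(p_curr - exact))
```

For heterogeneous media a single reference sound speed is used in the correction, which is why accuracy (rather than exactness) is retained there.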
Curvature perturbation and waterfall dynamics in hybrid inflation
NASA Astrophysics Data System (ADS)
Akbar Abolhasani, Ali; Firouzjahi, Hassan; Sasaki, Misao
2011-10-01
We investigate the parameter space of the hybrid inflation model, with special attention paid to the dynamics of the waterfall field and the curvature perturbations induced by its quantum fluctuations. Depending on the inflaton field value at the time of the phase transition and on the sharpness of the transition, inflation can have multiple extended stages. We find that for models with a mild phase transition, the induced curvature perturbation from the waterfall field is too large to satisfy the COBE normalization. We investigate the region of model parameter space where the curvature perturbations from the waterfall quantum fluctuations vary between the results of standard hybrid inflation and the results obtained here.
Approximations of thermoelastic and viscoelastic control systems
NASA Technical Reports Server (NTRS)
Burns, J. A.; Liu, Z. Y.; Miller, R. E.
1990-01-01
Well-posed models and computational algorithms are developed and analyzed for control of a class of partial differential equations that describe the motions of thermo-viscoelastic structures. An abstract (state space) framework and a general well-posedness result are presented that can be applied to a large class of thermo-elastic and thermo-viscoelastic models. This state space framework is used in the development of a computational scheme to be used in the solution of a linear quadratic regulator (LQR) control problem. A detailed convergence proof is provided for the viscoelastic model and several numerical results are presented to illustrate the theory and to analyze problems for which the theory is incomplete.
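As a minimal finite-dimensional stand-in for the LQR computation (not the paper's infinite-dimensional scheme), the sketch below solves the continuous-time algebraic Riccati equation for a single lightly damped structural mode via the stable invariant subspace of the Hamiltonian matrix, a textbook construction; the plant matrices are illustrative.

```python
import numpy as np

def lqr(A, B, Q, R):
    """Continuous-time LQR gain via the stable invariant subspace of the
    Hamiltonian matrix. Returns K minimising the integral of x'Qx + u'Ru
    for the closed loop u = -Kx."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]          # eigenvectors of stable eigenvalues
    P = np.real(stable[n:] @ np.linalg.inv(stable[:n]))  # Riccati solution
    return Rinv @ B.T @ P

# Lightly damped oscillator as a stand-in for one structural mode
A = np.array([[0.0, 1.0], [-4.0, -0.02]])
B = np.array([[0.0], [1.0]])
K = lqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
closed = np.linalg.eigvals(A - B @ K)  # closed-loop poles, pushed left
```

The convergence question the abstract addresses is precisely whether gains computed this way from finite-dimensional approximations converge to the gain for the underlying PDE model.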
NASA Technical Reports Server (NTRS)
Mueller, A. C.
1977-01-01
An atmospheric model developed by Jacchia, quite accurate but requiring a large amount of computer storage and execution time, was found to be ill-suited for the space shuttle onboard program. The development of a simple atmospheric density model to simulate the Jacchia model was therefore studied. Required characteristics, including variation with solar activity, diurnal variation, variation with geomagnetic activity, semiannual variation, and variation with height, were met by the new atmospheric density model.
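Neither the Jacchia coefficients nor the simplified onboard model are given here; a drastically reduced stand-in showing the kind of behavior such a density model must capture (barometric falloff in height plus a crude solar-activity dependence) might look like the following, with every number an illustrative placeholder:

```python
import math

def density(h_km, f107=150.0):
    """Toy thermospheric density: exponential falloff with a scale height
    that grows with solar activity (F10.7 index). The constants are
    illustrative placeholders, not Jacchia coefficients."""
    h0, rho0 = 200.0, 2.5e-10            # reference height [km], density [kg/m^3]
    H = 37.0 + 0.1 * (f107 - 150.0)      # scale height [km], crude solar scaling
    return rho0 * math.exp(-(h_km - h0) / H)
```

The real simplified model also carried diurnal, geomagnetic, and semiannual terms, each of which would enter as further multiplicative corrections.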
Modeling and testing of a tube-in-tube separation mechanism of bodies in space
NASA Astrophysics Data System (ADS)
Michaels, Dan; Gany, Alon
2016-12-01
A tube-in-tube concept for separation of bodies in space was investigated theoretically and experimentally. The separation system is based on generation of high-pressure gas by combustion of solid propellant and restricting the expansion of the gas solely to ejecting the two bodies in opposite directions, in a fashion that maximizes the generated impulse. An interior ballistics model was developed in order to investigate the potential benefits of the separation system for a large range of space body masses and for different design parameters such as geometry and propellant. The model takes into account solid propellant combustion, heat losses, and gas-phase chemical reactions. The model shows that for large bodies (above 100 kg) and typical separation velocities of 5 m/s, the proposed separation mechanism may be characterized by a specific impulse of 25,000 s, two orders of magnitude larger than that of conventional solid rockets. This means that the proposed separation system requires only 1% of the propellant mass that would be needed by a conventional rocket for the same mission. Since many existing launch vehicles obtain such separation velocities by using conventional solid rocket motors (retro-rockets), the implementation of the new separation system design can dramatically reduce the mass of the separation system and increase safety. A dedicated experimental setup was built in order to demonstrate the concept and validate the model. The experimental results revealed specific impulse values of up to 27,000 s and showed good correspondence with the model.
Real- and redshift-space halo clustering in f(R) cosmologies
NASA Astrophysics Data System (ADS)
Arnalte-Mur, Pablo; Hellwing, Wojciech A.; Norberg, Peder
2017-05-01
We present two-point correlation function statistics of the mass and the haloes in the chameleon f(R) modified gravity scenario using a series of large-volume N-body simulations. Three distinct variations of f(R) are considered (F4, F5 and F6) and compared to a fiducial Λ cold dark matter (ΛCDM) model in the redshift range z ∈ [0, 1]. We find that the matter clustering is indistinguishable for all models except for F4, which shows a significantly steeper slope. The ratio of the redshift- to real-space correlation function at scales >20 h-1 Mpc agrees with the linear General Relativity (GR) Kaiser formula for the viable f(R) models considered. We consider three halo populations characterized by spatial abundances comparable to that of luminous red galaxies and galaxy clusters. The redshift-space halo correlation functions of F4 and F5 deviate significantly from ΛCDM at intermediate and high redshift, as the f(R) halo bias is smaller than or equal to that of the ΛCDM case. Finally, we introduce a new model-independent clustering statistic to distinguish f(R) from GR: the relative halo clustering ratio, R. The sampling required to adequately reduce the scatter in R will be available with the advent of the next-generation galaxy redshift surveys. This will open a promising avenue to obtain largely model-independent cosmological constraints on this class of modified gravity models.
Hesford, Andrew J; Tillett, Jason C; Astheimer, Jeffrey P; Waag, Robert C
2014-08-01
Accurate and efficient modeling of ultrasound propagation through realistic tissue models is important to many aspects of clinical ultrasound imaging. Simplified problems with known solutions are often used to study and validate numerical methods. Greater confidence in a time-domain k-space method and a frequency-domain fast multipole method is established in this paper by analyzing results for realistic models of the human breast. Models of breast tissue were produced by segmenting magnetic resonance images of ex vivo specimens into seven distinct tissue types. After confirming with histologic analysis by pathologists that the model structures mimicked in vivo breast, the tissue types were mapped to variations in sound speed and acoustic absorption. Calculations of acoustic scattering by the resulting model were performed on massively parallel supercomputer clusters using parallel implementations of the k-space method and the fast multipole method. The efficient use of these resources was confirmed by parallel efficiency and scalability studies using large-scale, realistic tissue models. Comparisons between the temporal and spectral results were performed in representative planes by Fourier transforming the temporal results. An RMS field error less than 3% throughout the model volume confirms the accuracy of the methods for modeling ultrasound propagation through human breast.
NASA Astrophysics Data System (ADS)
Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson
2017-03-01
Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray-zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising avenues for improving climate simulations of water cycle processes.
Simulation of Deep Convective Clouds with the Dynamic Reconstruction Turbulence Closure
NASA Astrophysics Data System (ADS)
Shi, X.; Chow, F. K.; Street, R. L.; Bryan, G. H.
2017-12-01
The terra incognita (TI), or gray zone, in simulations is a range of grid spacing comparable to the diameter of the most energetic eddies. Grid spacing in mesoscale simulations is much larger than the eddies, and turbulence is parameterized with one-dimensional vertical-mixing schemes. Large eddy simulations (LES) have grid spacing much smaller than the energetic eddies and use three-dimensional models of turbulence. Studies of convective weather use convection-permitting resolutions, which fall in the TI. Neither mesoscale turbulence models nor LES models are designed for the TI, so turbulence parameterization in the TI needs to be examined. Here, the effects of sub-filter scale (SFS) closure schemes on the simulation of deep tropical convection are evaluated by comparing three closures: the Smagorinsky model, a Deardorff-type TKE model, and the dynamic reconstruction model (DRM), which partitions SFS turbulence into resolvable sub-filter scales (RSFS) and unresolved sub-grid scales (SGS). The RSFS are reconstructed, and the SGS are modeled with a dynamic eddy viscosity/diffusivity model. The RSFS stresses/fluxes allow backscatter of energy/variance via counter-gradient stresses/fluxes. In high-resolution (100 m) simulations of tropical convection, use of these turbulence models did not lead to significant differences in cloud water/ice distribution, precipitation flux, or vertical fluxes of momentum and heat. When model resolutions are coarsened, the Smagorinsky and TKE models overestimate cloud ice and produce large-amplitude downward heat flux in the middle troposphere (not found in the high-resolution simulations). This error is a result of unrealistically large eddy diffusivities: the eddy diffusivity of the DRM is on the order of 1 for the coarse-resolution simulations, while that of the Smagorinsky and TKE models is on the order of 100.
Splitting the eddy viscosity/diffusivity scalars into vertical and horizontal components by using different length scales and strain rate components helps to reduce the errors, but does not completely remedy the problem. In contrast, the coarse resolution simulations using the DRM produce results that are more consistent with the high-resolution results, suggesting that the DRM is a more appropriate turbulence model for simulating convection in the TI.
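For reference, the Smagorinsky closure that the study compares against sets the eddy viscosity to ν_t = (C_s Δ)² |S|, with |S| the strain-rate magnitude; because ν_t grows with the filter width Δ, it is prone to over-diffusion at coarse resolution. A minimal 2-D sketch (the constant, grid, and test field are illustrative):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*dx)**2 * |S| for a 2-D
    velocity field on a uniform grid, with |S| = sqrt(2 S_ij S_ij)."""
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dx, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dx, axis=0)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    strain_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx)**2 * strain_mag

# Taylor-Green-like test field on a periodic grid
x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x, x)
u, v = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)
dx = x[1] - x[0]
nu_t = smagorinsky_nu_t(u, v, dx)
```

The DRM replaces part of this purely dissipative term with reconstructed resolvable sub-filter stresses, which is what permits the backscatter the purely diffusive closure cannot represent.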
Metabolic Cages for a Space Flight Model in the Rat
NASA Technical Reports Server (NTRS)
Harper, Jennifer S.; Mulenburg, Gerald M.; Evans, Juli; Navidi, Meena; Wolinsky, Ira; Arnaud, Sara B.
1994-01-01
A variety of space flight models are available to mimic the physiologic changes seen in the rat during weightlessness. The model reported by Wronski and Morey-Holton has been widely used by many investigators, in musculoskeletal physiologic studies especially, resulting in accumulation of an extensive database that enables scientists to mimic space flight effects in the 1-g environment of Earth. However, information on nutrition or gastrointestinal and renal function in this space flight model is limited by the difficulty in acquiring uncontaminated metabolic specimens for analysis. In the Holton system, a traction tape harness is applied to the tail, and the rat's hindquarters are elevated by attaching the harness to a pulley system. Weight-bearing hind limbs are unloaded, and there is a headward fluid shift. The tail-suspended rats are able to move freely about their cages on their forelimbs and tolerate this procedure with minimal signs of stress. The cage used in Holton's model is basically a clear acrylic box set on a plastic grid floor with the pulley and tail harness system attached to the open top of the cage. Food is available from a square food cup recessed into a corner of the floor. In this system, urine, feces, and spilled food fall through the grid floor onto absorbent paper beneath the cage and cannot be separated and recovered quantitatively for analysis in metabolic balance studies. Commercially available metabolic cages are generally cylindrical and have been used with a centrally located suspension apparatus in other space flight models. The large living area, three times as large as most metabolic cages, and the free range of motion unique to Holton's model, essential for musculoskeletal investigations, were sacrificed. Holton's cages can accommodate animals ranging in weight from 70 to 600 g. Although an alternative construction of Holton's cage has been reported, it does not permit collection of separate urine and fecal samples. 
We describe the modifications to Holton's food delivery system, cage base, and the addition of a separator system for the collection of urine and fecal samples for metabolic and nutrition studies in the tail suspension model.
LADM and IndoorGML for Support of Indoor Space Identification
NASA Astrophysics Data System (ADS)
Zlatanova, S.; Van Oosterom, P. J. M.; Lee, J.; Li, K.-J.; Lemmen, C. H. J.
2016-10-01
Guidance and security in large public buildings such as airports, museums, and shopping malls require much more information than traditional 2D methods offer. Therefore 3D semantically-rich models have been actively investigated with the aim of gathering knowledge about the availability and accessibility of spaces. Spaces can be unavailable to specific users for many reasons: the 3D geometry of spaces (too low, too narrow), the properties of the objects to be guided to a specific part of the building (walking, driving, flying), the status of the indoor environment (e.g. crowded, limited light, under reconstruction), property regulations (private areas), security considerations, and so on. However, such information is not explicitly available in the existing 3D semantically-rich models. IFC and CityGML are restricted to architectural building components and provide little to no means to describe such properties. IndoorGML has been designed to establish a generic approach for space identification, allowing a space subdivision and the automatic creation of a network for route computation; but currently it, too, represents only spaces as defined by the architectural layout of the building. The Land Administration Domain Model (LADM) is currently the only available model to specify spaces on the basis of ownership and rights of use. In this paper we compare the principles of IndoorGML and LADM, investigate their approaches to defining spaces, and suggest options for linking the two types of spaces. We argue that the LADM space subdivision on the basis of properties and rights of use can be used to define, semantically and geometrically, the available and accessible spaces, and can therefore enrich the IndoorGML concept.
DEFINING THE CHEMICAL SPACE OF PUBLIC GENOMIC DATA.
The pharmaceutical industry has demonstrated success in integrating chemogenomic knowledge into predictive toxicological models, due in part to industry's access to large amounts of proprietary and commercial reference genomic data sets.
Inner space/outer space - The interface between cosmology and particle physics
NASA Astrophysics Data System (ADS)
Kolb, Edward W.; Turner, Michael S.; Lindley, David; Olive, Keith; Seckel, David
A collection of papers covering the synthesis between particle physics and cosmology is presented. The general topics addressed include: standard models of particle physics and cosmology; microwave background radiation; origin and evolution of large-scale structure; inflation; massive magnetic monopoles; supersymmetry, supergravity, and quantum gravity; cosmological constraints on particle physics; Kaluza-Klein cosmology; and future directions and connections in particle physics and cosmology.
geneLAB: Expanding the Impact of NASA's Biological Research in Space
NASA Technical Reports Server (NTRS)
Rayl, Nicole; Smith, Jeffrey D.
2014-01-01
The geneLAB project is designed to leverage the value of large 'omics' datasets from molecular biology projects conducted on the ISS by making these datasets available, citable, discoverable, interpretable, reusable, and reproducible. geneLAB will create a collaboration space with an integrated set of tools for depositing, accessing, analyzing, and modeling these diverse datasets from spaceflight and related terrestrial studies.
A Model for Predicting Thermomechanical Response of Large Space Structures.
1985-06-01
Field in a Thermomechanically Heated Viscoplastic Space Truss Structure; 6.5 Analysis of a Thermoviscoplastic Uniaxial Bar Under Prescribed Stress, Part I - Theoretical Development; 6.6 Analysis of a Thermoviscoplastic Uniaxial Bar Under Prescribed Stress, Part II - Boundary Layer and Asymptotic Analysis; 6.7 Analysis of a Thermoviscoplastic Uniaxial Bar Under Prescribed Stress, Part III - Numerical Results for a Bar with Radiative
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns of the Deep Space Network antennas, as well as their cumulative gain probability and probability density functions, are developed. These are needed to study and evaluate interference with the Ka-band receiving antenna systems at the Goldstone station of the Deep Space Network from unwanted sources, such as the emerging terrestrial High Density Fixed Service system.
Large Advanced Space Systems (LASS) computer-aided design program additions
NASA Technical Reports Server (NTRS)
Farrell, C. E.
1982-01-01
The LSS preliminary and conceptual design requires extensive interactive analysis because of the effects of structural, thermal, and control intercoupling. A computer-aided design program that permits the integration and interfacing of required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid-body controls module was modified to include solar pressure effects. The new model generator modules and the appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, the antenna primary beam, and attitude control requirements.
NASA Technical Reports Server (NTRS)
Joshi, S. M.; Armstrong, E. S.; Sundararajan, N.
1986-01-01
The problem of synthesizing a robust controller is considered for a large, flexible space-based antenna by using the linear-quadratic-Gaussian (LQG)/loop transfer recovery (LTR) method. The study is based on a finite-element model of the 122-m hoop/column antenna, which consists of three rigid-body rotational modes and the first 10 elastic modes. A robust compensator design for achieving the required performance bandwidth in the presence of modeling uncertainties is obtained using the LQG/LTR method for loop-shaping in the frequency domain. Different sensor actuator locations are analyzed in terms of the pole/zero locations of the multivariable systems and possible best locations are indicated. The computations are performed by using the LQG design package ORACLS augmented with frequency domain singular value analysis software.
Novel unimorph deformable mirror for space applications
NASA Astrophysics Data System (ADS)
Verpoort, Sven; Rausch, Peter; Wittrock, Ulrich
2017-11-01
We have developed a new type of unimorph deformable mirror, designed to correct for low-order Zernike modes. The mirror has a clear optical aperture of 50 mm combined with large peak-to-valley Zernike amplitudes of up to 35 μm. Newly developed fabrication processes allow the use of prefabricated super-polished and coated glass substrates. The mirror's unique features suggest its use in several astronomical applications, such as precompensation of atmospheric aberrations seen by laser beacons and use in woofer-tweeter systems. Additionally, the design enables an efficient correction of the inevitable wavefront error imposed by the floppy structure of primary mirrors in future large space-based telescopes. We have modeled the mirror using analytical as well as finite element models. We will present the design, key features, and manufacturing steps of the deformable mirror.
Predicting the Consequences of MMOD Penetrations on the International Space Station
NASA Technical Reports Server (NTRS)
Hyde, James; Christiansen, E.; Lear, D.; Evans
2018-01-01
The threat from micrometeoroid and orbital debris (MMOD) impacts on space vehicles is often quantified in terms of the probability of no penetration (PNP). However, for large spacecraft, especially those with multiple compartments, a penetration may have a number of possible outcomes. The extent of the damage (diameter of hole, crack length or penetration depth), the location of the damage relative to critical equipment or crew, crew response, and even the time of day of the penetration are among the many factors that can affect the outcome. For the International Space Station (ISS), a Monte Carlo-style software code called Manned Spacecraft Crew Survivability (MSCSurv) is used to predict the probability of several outcomes of an MMOD penetration, broadly classified as loss of crew (LOC), crew evacuation (Evac), loss of escape vehicle (LEV), and nominal end of mission (NEOM). By generating large numbers of MMOD impacts (typically in the billions) and tracking the consequences, MSCSurv allows for the inclusion of a large number of parameters and models as well as enabling the consideration of uncertainties in the models and parameters. MSCSurv builds upon the results from NASA's Bumper software (which provides the probability of penetration and critical input data to MSCSurv) to allow analysts to estimate the probability of LOC, Evac, LEV, and NEOM. This paper briefly describes the overall methodology used by NASA to quantify LOC, Evac, LEV, and NEOM with particular emphasis on describing in broad terms how MSCSurv works and its capabilities and most significant models.
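The outcome-classification core of a Monte Carlo survivability code of this kind can be sketched as follows. The single "severity" score and its cutoffs below are invented placeholders for the sketch; the actual MSCSurv models combine hole size, damage location, crew response, and other factors:

```python
import random

# Illustrative outcome thresholds on one sampled "damage severity" score in [0, 1).
# These cutoffs are assumptions for the sketch, not NASA's damage models.
def classify(severity):
    if severity < 0.90:
        return "NEOM"   # nominal end of mission
    if severity < 0.97:
        return "Evac"   # crew evacuation
    if severity < 0.995:
        return "LEV"    # loss of escape vehicle
    return "LOC"        # loss of crew

def outcome_probabilities(n_penetrations, seed=0):
    """Monte Carlo estimate of outcome probabilities given a penetration."""
    rng = random.Random(seed)
    counts = {"NEOM": 0, "Evac": 0, "LEV": 0, "LOC": 0}
    for _ in range(n_penetrations):
        counts[classify(rng.random())] += 1
    return {o: c / n_penetrations for o, c in counts.items()}

probs = outcome_probabilities(1_000_000)
```

With many simulated penetrations the estimated probabilities converge on the assumed thresholds, which is the sense in which billions of sampled impacts let MSCSurv resolve rare outcomes such as LOC.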
Space Radiation Cancer Risks and Uncertainties for Different Mission Time Periods
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Cucinotta, Francis A.
2012-01-01
Space radiation consists of solar particle events (SPEs), comprised largely of medium-energy protons (less than several hundred MeV), and galactic cosmic rays (GCR), which include high-energy protons and high charge and energy (HZE) nuclei. For long-duration missions, space radiation presents significant health risks, including cancer mortality. Probabilistic risk assessment (PRA) is essential for radiation protection of crews on long-term space missions outside the protection of the Earth's magnetic field and for optimization of mission planning and costs. For the assessment of organ dosimetric quantities and cancer risks, the particle spectra at each critical body organ must be characterized. In implementing a PRA approach, a statistical model of SPE fluence was developed, because individual SPE occurrences are random in nature while the frequency distribution of SPEs depends strongly upon the phase within the solar activity cycle. Spectral variability of SPEs was also examined, because the detailed energy spectra of protons are important, especially at high energy levels, for assessing the cancer risk associated with energetic particles for large events. An overall cumulative probability of a GCR environment for a specified mission period was estimated for the temporal characterization of the GCR environment represented by the deceleration potential (theta). Finally, this probabilistic approach to space radiation cancer risk was coupled with a model of the radiobiological factors and uncertainties in projecting cancer risks. Probabilities of fatal cancer risk and 95% confidence intervals will be reported for various periods of space missions.
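The statistical SPE model described above can be illustrated with a minimal Monte Carlo sketch: event counts drawn from a Poisson distribution whose rate depends on solar-cycle phase, and per-event fluences drawn from a lognormal. The rates and lognormal parameters below are illustrative assumptions, not the paper's fitted values:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for modest rates)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def mission_fluence(years_active, years_quiet, rng):
    """One realization of cumulative mission SPE fluence (arbitrary units).
    Assumed rates: ~6 SPEs/yr near solar maximum, ~1 SPE/yr near minimum;
    per-event fluence lognormal. All parameters are placeholders."""
    n = poisson(6.0 * years_active + 1.0 * years_quiet, rng)
    return sum(rng.lognormvariate(0.0, 1.5) for _ in range(n))

def exceedance_probability(threshold, trials=20000, seed=1):
    """P(cumulative fluence > threshold) for a 2-yr-active, 1-yr-quiet mission."""
    rng = random.Random(seed)
    hits = sum(mission_fluence(2.0, 1.0, rng) > threshold for _ in range(trials))
    return hits / trials
```

Sampling many mission realizations this way yields the cumulative probability of exceeding a given fluence, the quantity that feeds the organ-dose and cancer-risk projection downstream.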
The use of computer models to predict temperature and smoke movement in high bay spaces
NASA Technical Reports Server (NTRS)
Notarianni, Kathy A.; Davis, William D.
1993-01-01
The Building and Fire Research Laboratory (BFRL) was given the opportunity to make measurements during fire calibration tests of the heat detection system in an aircraft hangar with a nominal 30.4 m (100 ft) ceiling height near Dallas, TX. Fire gas temperatures resulting from an approximately 8250 kW isopropyl alcohol pool fire were measured above the fire and along the ceiling. The results of the experiments were then compared to predictions from the computer fire models DETACT-QS, FPETOOL, and LAVENT. In section A of the analysis conducted, DETACT-QS and FPETOOL significantly underpredicted the gas temperature. LAVENT, at the position below the ceiling corresponding to maximum temperature and velocity, provided better agreement with the data. For large spaces, hot gas transport time and an improved fire plume dynamics model should be incorporated into the computer fire model activation routines. A computational fluid dynamics (CFD) model, HARWELL FLOW3D, was then used to model the hot gas movement in the space. Reasonable agreement was found between the temperatures predicted from the CFD calculations and the temperatures measured in the aircraft hangar. In section B, an existing NASA high bay space was modeled using the CFD model. The NASA space was a clean room, 27.4 m (90 ft) high, with forced horizontal laminar flow. The purpose of this analysis was to determine how the existing fire detection devices would respond to various size fires in the space. The analysis was conducted for 32 MW, 400 kW, and 40 kW fires.
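Detector-activation tools such as DETACT-QS are built on quasi-steady ceiling-jet correlations rather than full CFD. A minimal sketch using Alpert's widely published correlation (a standard engineering formula, not the models evaluated in the paper) for roughly the hangar conditions above:

```python
def alpert_ceiling_jet_dT(Q_kW, H_m, r_m):
    """Alpert's steady ceiling-jet correlation for the maximum gas temperature
    rise (deg C) at radius r from the plume axis under an unconfined ceiling
    of height H, for a fire of convective-scale heat release Q (kW)."""
    if r_m / H_m <= 0.18:          # plume impingement region near the axis
        return 16.9 * Q_kW ** (2 / 3) / H_m ** (5 / 3)
    return 5.38 * (Q_kW / r_m) ** (2 / 3) / H_m   # ceiling-jet region

# Roughly the hangar test: 8250 kW fire under a 30.4 m ceiling.
dT_axis = alpert_ceiling_jet_dT(8250, 30.4, 0.0)
dT_10m = alpert_ceiling_jet_dT(8250, 30.4, 10.0)
```

Even on the plume axis the predicted temperature rise under a 30 m ceiling is only a few tens of degrees, which illustrates why high-bay detection is hard and why transport-time and plume-dynamics corrections matter in such spaces.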
Real-space visualization of remnant Mott gap and magnon excitations.
Wang, Y; Jia, C J; Moritz, B; Devereaux, T P
2014-04-18
We demonstrate the ability to visualize real-space dynamics of charge gap and magnon excitations in the Mott phase of the single-band Hubbard model and the remnants of these excitations with hole or electron doping. At short times, the character of magnetic and charge excitations is maintained even for large doping away from the Mott and antiferromagnetic phases. Doping influences both the real-space patterns and long timescales of these excitations with a clear carrier asymmetry attributable to particle-hole symmetry breaking in the underlying model. Further, a rapidly oscillating charge-density-wave-like pattern weakens, but persists as a visible demonstration of a subleading instability at half-filling which remains upon doping. The results offer an approach to analyzing the behavior of systems where momentum space is either inaccessible or poorly defined.
NASA Astrophysics Data System (ADS)
Braun, Jens; Leonhardt, Marc; Pospiech, Martin
2018-04-01
Nambu-Jona-Lasinio-type models are often employed as low-energy models for the theory of the strong interaction to analyze its phase structure at finite temperature and quark chemical potential. In particular, at low temperature and large chemical potential, where the application of fully first-principles approaches is currently difficult at best, this class of models still plays a prominent role in guiding our understanding of the dynamics of dense strong-interaction matter. In this work, we consider a Fierz-complete version of the Nambu-Jona-Lasinio model with two massless quark flavors and study its renormalization group flow and fixed-point structure at leading order of the derivative expansion of the effective action. Sum rules for the various four-quark couplings then allow us to monitor the strength of the breaking of the axial UA(1) symmetry close to and above the phase boundary. We find that the dynamics in the ten-dimensional Fierz-complete space of four-quark couplings can only be reduced to a one-dimensional space associated with the scalar-pseudoscalar coupling in the strict large-Nc limit. Still, the interacting fixed point associated with this one-dimensional subspace appears to govern the dynamics at small quark chemical potential even beyond the large-Nc limit. At large chemical potential, corrections beyond the large-Nc limit become important, and the dynamics is dominated by diquarks, favoring the formation of a chirally symmetric diquark condensate. In this regime, our study suggests that the phase boundary is shifted to higher temperatures when a Fierz-complete set of four-quark interactions is considered.
Modeling of the Orbital Debris Environment Risks in the Past, Present, and Future
NASA Technical Reports Server (NTRS)
Matney, Mark
2016-01-01
Despite the tireless work of space surveillance assets, much of the Earth debris environment is not easily measured or tracked. For every object in an orbit that we can track, there are hundreds of small debris objects that are too small to be tracked but still large enough to damage spacecraft. In addition, even if we knew today's environment with perfect knowledge, the debris environment is dynamic and would change tomorrow. Therefore, orbital debris scientists rely on numerical modeling to understand the nature of the debris environment and its risk to space operations throughout Earth orbit and into the future. This talk will summarize the ways in which modeling complements measurements to help give us a better picture of what is occurring in Earth orbit, and helps us to better conduct current and future space operations.
Multi-objective trajectory optimization for the space exploration vehicle
NASA Astrophysics Data System (ADS)
Qin, Xiaoli; Xiao, Zhen
2016-07-01
The research determines a temperature-constrained optimal trajectory for the space exploration vehicle by developing an optimal control formulation and solving it using a variable-order quadrature collocation method with a Non-linear Programming (NLP) solver. The vehicle is assumed to be a space reconnaissance aircraft that has specified takeoff/landing locations, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom aircraft model is adapted from previous work and includes flight dynamics and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and exploration of space targets. In addition, the vehicle models include environmental models (gravity and atmosphere). How these models are appropriately employed is key to gaining confidence in the results and conclusions of the research. Optimal trajectories are developed using several performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum distance. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for large-scale space exploration.
Using MHD Models for Context for Multispacecraft Missions
NASA Astrophysics Data System (ADS)
Reiff, P. H.; Sazykin, S. Y.; Webster, J.; Daou, A.; Welling, D. T.; Giles, B. L.; Pollock, C.
2016-12-01
The use of global MHD models such as BATS-R-US to provide context to data from widely spaced multispacecraft mission platforms is gaining in popularity and in effectiveness. Examples are shown, primarily from the Magnetospheric Multiscale Mission (MMS) program compared to BATS-R-US. We present several examples of large-scale magnetospheric configuration changes such as tail dipolarization events and reconfigurations after a sector boundary crossing which are made much more easily understood by placing the spacecraft in the model fields. In general, the models can reproduce the large-scale changes observed by the various spacecraft but sometimes miss small-scale or rapid time changes.
Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.
Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela
2016-12-01
Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
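The idea of searching low-dimensional projections can be illustrated with a crude grid-based stand-in: scan every 2-D projection of the context space for cells where the mean absolute execution error is high. The paper's actual method fits parametric regions with an informed search; everything below, including the synthetic data, is a hedged toy:

```python
import itertools
import random

def find_rims(contexts, errors, dims, bins=4, threshold=0.5, min_count=5):
    """Grid-search every 2-D projection of the context space for cells whose
    mean absolute model error exceeds a threshold. Contexts are assumed
    scaled to [0, 1] in every dimension. Returns (dim_i, dim_j, cell_i, cell_j)."""
    rims = []
    for i, j in itertools.combinations(range(dims), 2):
        cells = {}                              # (cell_i, cell_j) -> |error| list
        for x, e in zip(contexts, errors):
            key = (min(int(x[i] * bins), bins - 1),
                   min(int(x[j] * bins), bins - 1))
            cells.setdefault(key, []).append(abs(e))
        for (a, b), errs in cells.items():
            if len(errs) >= min_count and sum(errs) / len(errs) > threshold:
                rims.append((i, j, a, b))
    return rims

# Synthetic demo: the "model" is inaccurate only when dims 0 and 1 are both large.
rng = random.Random(0)
X = [[rng.random() for _ in range(5)] for _ in range(2000)]
err = [1.0 if (x[0] > 0.75 and x[1] > 0.75) else 0.05 for x in X]
rims = find_rims(X, err, dims=5)
```

On this data the only flagged region is the top-right cell of the (0, 1) projection, showing how a context-dependent inaccuracy that is invisible in aggregate statistics becomes detectable in the right low-dimensional projection.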
Program Model Checking: A Practitioner's Guide
NASA Technical Reports Server (NTRS)
Pressburger, Thomas T.; Mansouri-Samani, Masoud; Mehlitz, Peter C.; Pasareanu, Corina S.; Markosian, Lawrence Z.; Penix, John J.; Brat, Guillaume P.; Visser, Willem C.
2008-01-01
Program model checking is a verification technology that uses state-space exploration to evaluate large numbers of potential program executions. Program model checking provides improved coverage over testing by systematically evaluating all possible test inputs and all possible interleavings of threads in a multithreaded system. Model-checking algorithms use several classes of optimizations to reduce the time and memory requirements for analysis, as well as heuristics for meaningful analysis of partial areas of the state space. Our goal in this guidebook is to assemble, distill, and demonstrate emerging best practices for applying program model checking. We offer it as a starting point and introduction for those who want to apply model checking to software verification and validation. The guidebook will not discuss any specific tool in great detail, but we provide references for specific tools.
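In its simplest explicit-state form, the exhaustive exploration of thread interleavings described above is a breadth-first search over reachable states. The two-thread lost-update example below is illustrative, not taken from the guidebook:

```python
from collections import deque

def explore(initial, successors, violates):
    """Explicit-state model-checking core: breadth-first search that visits
    every reachable state once and returns the first property violation
    encountered, or None if the property holds everywhere."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if violates(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# Toy system: two threads each do read-increment-write on a shared counter
# without locking. A state is (pc1, pc2, tmp1, tmp2, shared).
def successors(s):
    pc1, pc2, t1, t2, x = s
    out = []
    if pc1 == 0: out.append((1, pc2, x, t2, x))        # thread 1 reads shared
    if pc1 == 1: out.append((2, pc2, t1, t2, t1 + 1))  # thread 1 writes tmp1+1
    if pc2 == 0: out.append((pc1, 1, t1, x, x))        # thread 2 reads shared
    if pc2 == 1: out.append((pc1, 2, t1, t2, t2 + 1))  # thread 2 writes tmp2+1
    return out

# Property: when both threads finish, the counter equals 2.
bad = explore((0, 0, 0, 0, 0), successors,
              lambda s: s[0] == 2 and s[1] == 2 and s[4] != 2)
```

The search finds the lost-update interleaving (both threads read 0, both write 1), exactly the kind of thread-schedule bug that testing can miss but systematic interleaving exploration cannot.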
Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Averaged Navier-Stokes Formulations
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.
2008-01-01
Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier Stokes equations. The first scheme is a hybrid Reynolds-averaged- Navier Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged-Navier Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier Stokes model in which the unresolved kinetic energy parameter f(sub k) is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f(sub k) varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulations of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier Stokes and large eddy simulation and partially averaged Navier Stokes in simulating such flow phenomena.
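The dependence of the unresolved-energy parameter on grid spacing and turbulence length scale can be sketched with the commonly quoted single-stage PANS estimate below. The prefactor is an assumption here, and the paper's two-stage refinement procedure is not reproduced:

```python
def fk_estimate(delta, k, eps, c=3.0):
    """Estimate the PANS unresolved-kinetic-energy fraction f_k from the local
    grid spacing delta and the turbulence length scale Lambda = k^1.5 / eps,
    via f_k = min(1, c * (delta / Lambda)^(2/3)). The prefactor c is a
    placeholder; f_k = 1 recovers the RANS limit on coarse grids."""
    lam = k ** 1.5 / eps
    return min(1.0, c * (delta / lam) ** (2 / 3))
```

A grid as coarse as the turbulence length scale yields f_k = 1 (pure RANS), while refining the grid drives f_k toward zero and resolves progressively more of the turbulent spectrum, which is the behavior the abstract describes.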
Shielding in ungated field emitter arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, J. R.; Jensen, K. L.; Shiffler, D. A.
Cathodes consisting of arrays of high aspect ratio field emitters are of great interest as sources of electron beams for vacuum electronic devices. The desire for high currents and current densities drives the cathode designer towards a denser array, but for ungated emitters, denser arrays also lead to increased shielding, in which the field enhancement factor β of each emitter is reduced due to the presence of the other emitters in the array. To facilitate the study of these arrays, we have developed a method for modeling high aspect ratio emitters using tapered dipole line charges. This method can be used to investigate proximity effects from similar emitters an arbitrary distance away and is much less computationally demanding than competing simulation approaches. Here, we introduce this method and use it to study shielding as a function of array geometry. Emitters with aspect ratios of 10^2–10^4 are modeled, and the shielding-induced reduction in β is considered as a function of tip-to-tip spacing for emitter pairs and for large arrays with triangular and square unit cells. Shielding is found to be negligible when the emitter spacing is greater than the emitter height for the two-emitter array, or about 2.5 times the emitter height in the large arrays, in agreement with previously published results. Because the onset of shielding occurs at virtually the same emitter spacing in the square and triangular arrays, the triangular array is preferred for its higher emitter density at a given emitter spacing. The primary contribution to shielding in large arrays is found to come from emitters within a distance of three times the unit cell spacing for both square and triangular arrays.
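A method-of-images toy model, far cruder than the paper's tapered dipole line charges and with all physical constants dropped, illustrates why shielding dies off once the spacing exceeds the emitter height: a neighbor and its image below the grounded plane form a vertical dipole whose opposing field at the other tip falls off rapidly with lateral distance:

```python
def neighbor_field(s, h, q=1.0):
    """Vertical E-field at the tip (0, h) of emitter A from a neighbor at
    lateral distance s, modeled crudely as a point charge q at its tip (s, h)
    plus its image -q at (s, -h) below the grounded plane. The charge at
    (s, h) is level with A's tip and contributes no vertical component, so
    only the image term appears. Negative sign = opposes the applied field,
    i.e. shielding. Gaussian units, constants dropped; only the distance
    scaling is meaningful."""
    dz = 2.0 * h                        # vertical offset from image to tip
    r2 = s * s + dz * dz
    return -q * dz / r2 ** 1.5          # z-component of the image-charge field
```

Because the charge-plus-image pair looks like a dipole from far away, the perturbation scales roughly as 1/s^3 for s well beyond h, consistent with the paper's finding that shielding becomes negligible once tip-to-tip spacing exceeds roughly the emitter height.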
Specification of the Surface Charging Environment with SHIELDS
NASA Astrophysics Data System (ADS)
Jordanova, V.; Delzanno, G. L.; Henderson, M. G.; Godinez, H. C.; Jeffery, C. A.; Lawrence, E. C.; Meierbachtol, C.; Moulton, J. D.; Vernon, L.; Woodroffe, J. R.; Brito, T.; Toth, G.; Welling, D. T.; Yu, Y.; Albert, J.; Birn, J.; Borovsky, J.; Denton, M.; Horne, R. B.; Lemon, C.; Markidis, S.; Thomsen, M. F.; Young, S. L.
2016-12-01
Predicting variations in the near-Earth space environment that can lead to spacecraft damage and failure, i.e. "space weather", remains a big space physics challenge. A recently funded project through the Los Alamos National Laboratory (LANL) Directed Research and Development (LDRD) program aims at developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms, the SHIELDS framework. The project goals are to understand the dynamics of the surface charging environment (SCE), the hot (keV) electrons representing the source and seed populations for the radiation belts, on both macro- and microscale. Important physics questions related to rapid particle injection and acceleration associated with magnetospheric storms and substorms as well as plasma waves are investigated. These challenging problems are addressed using a team of world-class experts in the fields of space science and computational plasma physics, and state-of-the-art models and computational facilities. In addition to physics-based models (like RAM-SCB, BATS-R-US, and iPIC3D), new data assimilation techniques employing data from LANL instruments on the Van Allen Probes and geosynchronous satellites are developed. Simulations with the SHIELDS framework of the near-Earth space environment where operational satellites reside are presented. Further model development and the organization of a "Spacecraft Charging Environment Challenge" by the SHIELDS project at LANL in collaboration with the NSF Geospace Environment Modeling (GEM) Workshop and the multi-agency Community Coordinated Modeling Center (CCMC) to assess the accuracy of SCE predictions are discussed.
Additional historical solid rocket motor burns
NASA Astrophysics Data System (ADS)
Wiedemann, Carsten; Homeister, Maren; Oswald, Michael; Stabroth, Sebastian; Klinkrad, Heiner; Vörsmann, Peter
2009-06-01
The use of orbital solid rocket motors (SRM) is responsible for the release of a high number of slag and Al2O3 dust particles which contribute to the space debris environment. This contribution has been modeled for the ESA space debris model MASTER (Meteoroid and Space Debris Terrestrial Environment Reference). The current model version, MASTER-2005, is based on the simulation of 1076 orbital SRM firings which mainly contributed to the long-term debris environment. SRM firings on very low Earth orbits, which produce only short-lived particles, are not considered. A comparison of the modeled flux with impact data from returned surfaces shows that the shape and quantity of the modeled SRM dust distribution matches that of recent Hubble Space Telescope (HST) solar array measurements very well. However, the absolute flux level for dust is under-predicted for some of the analyzed Long Duration Exposure Facility (LDEF) surfaces. This indicates that some past SRM firings are not included in the current event database. Thus it is necessary to investigate whether additional historical SRM burns, like the retro-burns of low-orbiting re-entry capsules, may be responsible for these dust impacts. The most suitable candidates for these firings are the large number of SRM retro-burns of return capsules. This paper focuses on the SRM retro-burns of Russian photoreconnaissance satellites, which were used in high numbers during the time of the LDEF mission. It is discussed which types of satellites and motors may have been responsible for this historical contribution. Altogether, 870 additional SRM retro-burns have been identified. An important task is the identification of such missions to complete the current event database. Different types of motors have been used to de-orbit both large satellites and small film return capsules. The results of simulation runs are presented.
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
Farina, Marco; Pappadopulo, Duccio; Rompineve, Fabrizio; ...
2017-01-23
Here, we propose a framework in which the QCD axion has an exponentially large coupling to photons, relying on the “clockwork” mechanism. We discuss the impact of present and future axion experiments on the parameter space of the model. In addition to the axion, the model predicts a large number of pseudoscalars which can be light and observable at the LHC. In the most favorable scenario, axion Dark Matter will give a signal in multiple axion detection experiments and the pseudo-scalars will be discovered at the LHC, allowing us to determine most of the parameters of the model.
Baryon acoustic oscillations in 2D. II. Redshift-space halo clustering in N-body simulations
NASA Astrophysics Data System (ADS)
Nishimichi, Takahiro; Taruya, Atsushi
2011-08-01
We measure the halo power spectrum in redshift space from cosmological N-body simulations, and test the analytical models of redshift distortions particularly focusing on the scales of baryon acoustic oscillations. Remarkably, the measured halo power spectrum in redshift space exhibits a large-scale enhancement in amplitude relative to the real-space clustering, and the effect becomes significant for the massive or highly biased halo samples. These findings cannot be simply explained by the so-called streaming model frequently used in the literature. By contrast, a physically motivated perturbation theory model developed in the previous paper reproduces the halo power spectrum very well, and the model combining a simple linear scale-dependent bias can accurately characterize the clustering anisotropies of halos in two dimensions, i.e., line-of-sight and its perpendicular directions. The results highlight the significance of nonlinear coupling between density and velocity fields associated with two competing effects of redshift distortions, i.e., Kaiser and Finger-of-God effects, and a proper account of this effect would be important in accurately characterizing the baryon acoustic oscillations in two dimensions.
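The large-scale amplitude enhancement has a standard linear-theory baseline, the Kaiser formula, against which such measurements are compared; the abstract's point is that simple streaming-type models built on it fail in detail near the BAO scales. A minimal sketch of the textbook Kaiser monopole boost (not the paper's perturbation theory model):

```python
def kaiser_monopole_boost(b, f):
    """Linear-theory (Kaiser) enhancement of the redshift-space power-spectrum
    monopole over the real-space halo power spectrum b^2 * P(k):
        P0 / Pr = 1 + (2/3)*beta + (1/5)*beta^2,  with beta = f / b,
    where b is the linear halo bias and f the linear growth rate."""
    beta = f / b
    return 1.0 + (2.0 / 3.0) * beta + (1.0 / 5.0) * beta ** 2

# Example: growth rate f = 0.5 at two bias values; higher bias -> smaller beta.
boost_low_bias = kaiser_monopole_boost(1.0, 0.5)   # beta = 0.5
boost_high_bias = kaiser_monopole_boost(2.0, 0.5)  # beta = 0.25
```

Note that in this linear baseline the fractional boost shrinks as bias grows, whereas the simulations above find the enhancement significant for highly biased halos, which is why the nonlinear density-velocity coupling the paper models is needed.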
Action detection by double hierarchical multi-structure space-time statistical matching model
NASA Astrophysics Data System (ADS)
Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang
2018-03-01
To address the complex information in videos and low detection efficiency, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to achieve two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, the multi-scale composite template extends the model's application to multi-view settings. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.
The Influence of Solid Rocket Motor Retro-Burns on the Space Debris Environment
NASA Astrophysics Data System (ADS)
Stabroth, S.; Homeister, M.; Oswald, M.; Wiedemann, C.; Klinkrad, H.; Vörsmann, P.
The ESA space debris population model MASTER (Meteoroid and Space Debris Terrestrial Environment Reference) considers firings of solid rocket motors (SRM) as a debris source, with the associated generation of slag and dust particles. The resulting slag and dust population is a major contribution to the sub-millimetre size debris environment in Earth orbit. The current model version, MASTER-2005, is based on the simulation of 1,076 orbital SRM firings which contributed to the long-term debris environment. A comparison of the modelled flux with impact data from returned surfaces shows that the shape and quantity of the modelled SRM dust distribution matches that of recent Hubble Space Telescope (HST) solar array measurements very well. However, the absolute flux level for dust is under-predicted for some of the analysed Long Duration Exposure Facility (LDEF) surfaces. This points into the direction of some past SRM firings not being included in the current event database. The most suitable candidates for these firings are the large number of SRM retro-burns of return capsules. Objects released by those firings have highly eccentric orbits with perigees in the lower regions of the atmosphere; thus they produce no long-term effect on the debris environment. However, a large number of those firings during the on-orbit time frame of LDEF might lead to an increase of the dust population for some of the LDEF surfaces. In this paper the influence of SRM retro-burns on the short- and long-term debris environment is analysed. The existing firing database is updated with gathered
ψ-Epistemic Models are Exponentially Bad at Explaining the Distinguishability of Quantum States
NASA Astrophysics Data System (ADS)
Leifer, M. S.
2014-04-01
The status of the quantum state is perhaps the most controversial issue in the foundations of quantum theory. Is it an epistemic state (state of knowledge) or an ontic state (state of reality)? In realist models of quantum theory, the epistemic view asserts that nonorthogonal quantum states correspond to overlapping probability measures over the true ontic states. This naturally accounts for a large number of otherwise puzzling quantum phenomena. For example, the indistinguishability of nonorthogonal states is explained by the fact that the ontic state sometimes lies in the overlap region, in which case there is nothing in reality that could distinguish the two states. For this to work, the amount of overlap of the probability measures should be comparable to the indistinguishability of the quantum states. In this Letter, I exhibit a family of states for which the ratio of these two quantities must be ≤ 2d e^(-cd) in Hilbert spaces of dimension d that are divisible by 4. This implies that the epistemic explanation of indistinguishability becomes implausible at an exponential rate as the Hilbert space dimension increases.
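The exponential decay of the overlap bound can be made concrete with a short numerical sketch. The constant c in the bound 2d e^(-cd) is not specified in the abstract, so the value used below (c = 0.1) is purely an illustrative assumption:

```python
import math

def overlap_bound(d, c=0.1):
    """Upper bound 2*d*exp(-c*d) on the ratio of ontic overlap to
    quantum indistinguishability; c > 0 is an unspecified constant
    from the Letter, and 0.1 is an arbitrary illustrative choice."""
    return 2 * d * math.exp(-c * d)

# Once d passes the turnover point, the bound decays exponentially:
for d in (4, 16, 64, 256, 1024):
    print(d, overlap_bound(d))
```

The bound briefly grows for small d (the linear factor 2d dominates) before the exponential factor takes over, which matches the abstract's claim that the epistemic explanation fails only "for large Hilbert space dimension".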
Image degradation characteristics and restoration based on regularization for diffractive imaging
NASA Astrophysics Data System (ADS)
Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun
2017-11-01
The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods have been less studied. In this paper, the model of image quality degradation for the diffractive imaging system is first deduced mathematically from diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, a solving approach for the equation with coexisting multi-norm terms and multiple regularization (prior) parameters is presented. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block partition into isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This can provide a scientific basis for, and has potential application prospects in, future space applications of diffractive membrane imaging technology.
Numerical simulation of the geodynamo reaches Earth's core dynamical regime
NASA Astrophysics Data System (ADS)
Aubert, J.; Gastine, T.; Fournier, A.
2016-12-01
Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E=10-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. 
We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.
NASA Astrophysics Data System (ADS)
Lee, H.; Seo, D.; McKee, P.; Corby, R.
2009-12-01
One of the major challenges in data assimilation (DA) into distributed hydrologic models is to reduce the large number of degrees of freedom involved in the inverse problem in order to avoid overfitting. To assess the sensitivity of DA performance to the dimensionality of the inverse problem, we design and carry out real-world experiments in which the control vector in variational DA (VAR) is solved at different scales in space and time, e.g., lumped, semi-distributed, and fully distributed in space, and hourly, 6-hourly, etc., in time. The size of the control vector is related to the degrees of freedom in the inverse problem. For the assessment, we use the prototype 4-dimensional variational data assimilator (4DVAR) that assimilates streamflow, precipitation and potential evaporation data into the NWS Hydrology Laboratory's Research Distributed Hydrologic Model (HL-RDHM). In this talk, we present the initial results for a number of basins in Oklahoma and Texas.
NASA Astrophysics Data System (ADS)
Sardanyés, Josep; Simó, Carles; Martínez, Regina; Solé, Ricard V.; Elena, Santiago F.
2014-04-01
The distribution of mutational fitness effects (DMFE) is crucial to the evolutionary fate of quasispecies. In this article we analyze the effect of the DMFE on the dynamics of a large quasispecies by means of a phenotypic version of the classic Eigen model that incorporates beneficial, neutral, deleterious, and lethal mutations. By parameterizing the model with available experimental data on the DMFE of Vesicular stomatitis virus (VSV) and Tobacco etch virus (TEV), we found that increasing mutation does not totally push the entire viral quasispecies towards deleterious or lethal regions of the phenotypic sequence space. The probability of finding regions in the parameter space of the general model that result in a quasispecies composed only of lethal phenotypes is extremely small, both at equilibrium and during transients. The implications of our findings can be extended to other scenarios, such as lethal mutagenesis or genomically unstable cancer, where increased mutagenesis has been suggested as a potential therapy.
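A minimal discrete-generation sketch of a phenotypic quasispecies map of the kind described can illustrate why the lethal class need not absorb the population. The four fitness values and the mutation kernel below are hypothetical placeholders, not the VSV/TEV estimates used in the paper:

```python
import numpy as np

# Phenotype classes: 0 beneficial, 1 neutral, 2 deleterious, 3 lethal.
w = np.array([1.5, 1.0, 0.5, 0.0])   # hypothetical fitness values
# Hypothetical mutation kernel: M[i, j] = P(offspring of class i
# lands in class j); each row sums to 1, lethal is absorbing.
M = np.array([[0.70, 0.15, 0.10, 0.05],
              [0.05, 0.70, 0.15, 0.10],
              [0.02, 0.08, 0.75, 0.15],
              [0.00, 0.00, 0.00, 1.00]])

def step(x):
    """One generation of the phenotypic quasispecies map:
    selection (multiply by fitness), mutation, renormalisation."""
    y = (x * w) @ M
    return y / y.sum()

x = np.full(4, 0.25)                  # start uniform
for _ in range(500):
    x = step(x)
print(x)  # equilibrium frequencies; lethals persist only as fresh mutants
```

Because lethal phenotypes have zero fitness, they contribute nothing to the next generation's replication; their equilibrium frequency is sustained only by the mutational inflow from replicating classes, which is why an all-lethal quasispecies is so hard to reach.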
Equations of motion for a spectrum-generating algebra: Lipkin-Meshkov-Glick model
NASA Astrophysics Data System (ADS)
Rosensteel, G.; Rowe, D. J.; Ho, S. Y.
2008-01-01
For a spectrum-generating Lie algebra, a generalized equations-of-motion scheme determines numerical values of excitation energies and algebra matrix elements. In the approach to the infinite particle number limit or, more generally, whenever the dimension of the quantum state space is very large, the equations-of-motion method may achieve results that are impractical to obtain by diagonalization of the Hamiltonian matrix. To test the method's effectiveness, we apply it to the well-known Lipkin-Meshkov-Glick (LMG) model to find its low-energy spectrum and associated generator matrix elements in the eigenenergy basis. When the dimension of the LMG representation space is 10^6, computation time on a notebook computer is a few minutes. For a large particle number in the LMG model, the low-energy spectrum makes a quantum phase transition from a nondegenerate harmonic vibrator to a twofold degenerate harmonic oscillator. The equations-of-motion method computes critical exponents at the transition point.
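For modest particle numbers, the direct-diagonalization baseline that the equations-of-motion scheme is designed to avoid is easy to sketch. The snippet below builds a common form of the LMG Hamiltonian, H = ε Jz − (V/2)(J+² + J−²), in the collective-spin basis |j, m⟩ with j = N/2; the parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def lmg_spectrum(N, eps=1.0, V=0.03):
    """Eigenvalues of H = eps*Jz - (V/2)*(J+^2 + J-^2) for the
    collective-spin (j = N/2) representation, dimension N + 1.
    Direct diagonalization: feasible here, impractical at the
    very large dimensions targeted by the equations-of-motion
    method."""
    j = N / 2.0
    dim = N + 1
    m = np.arange(-j, j + 1)                     # Jz eigenvalues
    H = np.diag(eps * m)
    # J+|j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1>
    cp = np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1))
    Jp = np.zeros((dim, dim))
    Jp[np.arange(1, dim), np.arange(dim - 1)] = cp
    H -= 0.5 * V * (Jp @ Jp + Jp.T @ Jp.T)       # J- = (J+)^T here
    return np.linalg.eigvalsh(H)

E = lmg_spectrum(20)
print(E[:4] - E[0])   # low-lying excitation energies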
REU Solar and Space Physics Summer School
NASA Astrophysics Data System (ADS)
Snow, M. A.; Wood, E. L.
2011-12-01
The Research Experiences for Undergraduates (REU) program in Solar and Space Physics at the University of Colorado begins with a week of lectures and labs on solar and space physics. The students in our program come from a variety of majors (physics, engineering, meteorology, etc.) and from a wide range of schools (small liberal arts colleges up through large research universities). The majority of the students have never been exposed to solar and space physics before arriving in Boulder to begin their research projects. We have developed a week-long crash course in the field using the expertise of scientists in Boulder and the labs designed by the Center for Integrated Space Weather Modeling (CISM).
CCMC: bringing space weather awareness to the next generation
NASA Astrophysics Data System (ADS)
Chulaki, A.; Muglach, K.; Zheng, Y.; Mays, M. L.; Kuznetsova, M. M.; Taktakishvili, A.; Collado-Vega, Y. M.; Rastaetter, L.; Mendoza, A. M. M.; Thompson, B. J.; Pulkkinen, A. A.; Pembroke, A. D.
2017-12-01
Making space weather an element of core education is critical for the future of the young field of space weather. The Community Coordinated Modeling Center (CCMC) is an interagency partnership established to aid the transition of modern space science models into space weather forecasting while supporting space science research. Additionally, over the past ten years it has established itself as a global space science education resource supporting undergraduate and graduate education and research, and spreading space weather awareness worldwide. A unique combination of assets, capabilities and close ties to the scientific and educational communities enables our small group to serve as a hub for rising generations of young space scientists and engineers. CCMC offers a variety of educational tools and resources that are publicly available online and provide access to the largest collection of modern space science models developed by the international research community. CCMC has revolutionized the way these simulations are utilized in classroom settings, student projects, and scientific labs. Every year, this online system serves hundreds of students, educators and researchers worldwide. Another major CCMC asset is an expert space weather prototyping team primarily serving NASA's interplanetary space weather needs. Capitalizing on its unique capabilities and experiences, the team also provides in-depth space weather training to hundreds of students and professionals. One training module offers undergraduates an opportunity to actively engage in real-time space weather monitoring, analysis, forecasting, tools development and research, eventually serving remotely as NASA space weather forecasters.
In yet another project, CCMC is collaborating with Hayden Planetarium and Linkoping University on creating a visualization platform for planetariums (and classrooms) to provide simulations of dynamic processes in the large domain stretching from the solar corona to the Earth's upper atmosphere, for near real-time and historical space weather events.
Models for Multimegawatt Space Power Systems
1990-06-01
devices such as batteries, flywheels, and large, cryogenic inductors. Turbines with generators, thermionics, thermoelectrics, alkali metal...
NASA Technical Reports Server (NTRS)
Tischer, A. E.
1987-01-01
The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.
Data analysis and interpretation related to space system/environment interactions at LEO altitude
NASA Technical Reports Server (NTRS)
Raitt, W. John; Schunk, Robert W.
1991-01-01
Several studies made on the interaction of active systems with the LEO space environment experienced from orbital or suborbital platforms are covered. The issue of high voltage space interaction is covered by theoretical modeling studies of the interaction of charged solar cell arrays with the ionospheric plasma. The theoretical studies were complemented by experimental measurements made in a vacuum chamber. The other active system studied was the emission of effluent from a space platform. In one study the emission of plasma into the LEO environment was studied by using initially a 2-D model, and then extending this model to 3-D to correctly take account of plasma motion parallel to the geomagnetic field. The other effluent studies related to the releases of neutral gas from an orbiting platform. One model which was extended and used determined the density, velocity, and energy of both an effluent gas and the ambient upper atmospheric gases over a large volume around the platform. This model was adapted to study both ambient and contaminant distributions around smaller objects in the orbital frame of reference with scale sizes of 1 m. The other effluent studies related to the interaction of the released neutral gas with the ambient ionospheric plasma. An electrostatic model was used to help understand anomalously high plasma densities measured at times in the vicinity of the space shuttle orbiter.
Quantum gravity as an information network self-organization of a 4D universe
NASA Astrophysics Data System (ADS)
Trugenberger, Carlo A.
2015-10-01
I propose a quantum gravity model in which the fundamental degrees of freedom are information bits for both discrete space-time points and links connecting them. The Hamiltonian is a very simple network model consisting of a ferromagnetic Ising model for space-time vertices and an antiferromagnetic Ising model for the links. As a result of the frustration between these two terms, the ground state self-organizes as a new type of low-clustering graph with finite Hausdorff dimension 4. The spectral dimension is lower than the Hausdorff dimension: it coincides with the Hausdorff dimension 4 at a first quantum phase transition corresponding to an IR fixed point, while at a second quantum phase transition, which describes small scales, space-time dissolves into disordered information bits. The large-scale dimension 4 of the universe is related to the upper critical dimension 4 of the Ising model. At finite temperatures the universe graph emerges without a big bang and without singularities from a ferromagnetic phase transition in which space-time itself forms out of a hot soup of information bits. When the temperature is lowered the universe graph unfolds and expands by lowering its connectivity, a mechanism I have called topological expansion. The model admits topological black hole excitations corresponding to graphs containing holes with no space-time inside and with "Schwarzschild-like" horizons with a lower spectral dimension.
Large Eddy Simulation of Heat Entrainment Under Arctic Sea Ice
NASA Astrophysics Data System (ADS)
Ramudu, Eshwan; Gelderloos, Renske; Yang, Di; Meneveau, Charles; Gnanadesikan, Anand
2018-01-01
Arctic sea ice has declined rapidly in recent decades. The faster-than-projected retreat suggests that free-running large-scale climate models may not be accurately representing some key processes. The small-scale turbulent entrainment of heat from the mixed layer could be one such process. To better understand this mechanism, we model the Arctic Ocean's Canada Basin, which is characterized by a perennial anomalously warm Pacific Summer Water (PSW) layer residing at the base of the mixed layer and a summertime Near-Surface Temperature Maximum (NSTM) within the mixed layer trapping heat from solar radiation. We use large eddy simulation (LES) to investigate heat entrainment for different ice-drift velocities and different initial temperature profiles. The value of LES is that the resolved turbulent fluxes are greater than the subgrid-scale fluxes for most of our parameter space. The results show that the presence of the NSTM enhances heat entrainment from the mixed layer. Additionally, no PSW heat is entrained within the parameter space considered. We propose a scaling law for the ocean-to-ice heat flux which depends on the initial temperature anomaly in the NSTM layer and the ice-drift velocity. A case study of "The Great Arctic Cyclone of 2012" gives a turbulent heat flux from the mixed layer that is approximately 70% of the total ocean-to-ice heat flux estimated from the PIOMAS model often used for short-term predictions. The present results highlight the need for large-scale climate models to account for the NSTM layer.
Application of field dependent polynomial model
NASA Astrophysics Data System (ADS)
Janout, Petr; Páta, Petr; Skala, Petr; Fliegel, Karel; Vítek, Stanislav; Bednář, Jan
2016-09-01
Extremely wide-field imaging systems have many advantages for imaging large scenes, whether in microscopy, all-sky cameras, or security technologies. The large viewing angle comes at the price of the aberrations inherent to these imaging systems. Modeling wavefront aberrations with Zernike polynomials has been known for a long time and is widely used. Our method does not model system aberrations by modeling the wavefront, but directly models the aberrated point spread function of the imaging system. This is a very complicated task, and with conventional methods it was difficult to achieve the desired accuracy. Our optimization technique, which searches for the coefficients of space-variant Zernike polynomials, can be described as a comprehensive model for ultra-wide-field imaging systems. The advantage of this model is that it describes the whole space-variant system, unlike the majority of models, which treat the system as partly invariant. A difficulty for this model is that the size of the modeled point spread function is comparable to the pixel size, so issues associated with sampling, pixel size, and the pixel sensitivity profile must be taken into account in the design. The model was verified on a series of laboratory test patterns, on test images of laboratory light sources, and subsequently on real images obtained with the extremely wide-field imaging system WILLIAM. Results of modeling this system are presented in this article.
The damper placement problem for large flexible space structures
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
1992-01-01
The damper placement problem for large flexible space truss structures is formulated as a combinatorial optimization problem. The objective is to determine the p truss members of the structure to replace with active (or passive) dampers so that the modal damping ratio is as large as possible for all significant modes of vibration. Equivalently, given a strain energy matrix with rows indexed on the modes and the columns indexed on the truss members, we seek to find the set of p columns such that the smallest row sum, over the p columns, is maximized. We develop a tabu search heuristic for the damper placement problems on the Controls Structures Interaction (CSI) Phase 1 Evolutionary Model (10 modes and 1507 truss members). The resulting solutions are shown to be of high quality.
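The max-min column-selection problem described above can be sketched with a simple greedy heuristic; this is a stand-in for the tabu search used in the paper, and the small random strain-energy matrix is a hypothetical example, not the CSI Phase 1 model data:

```python
import numpy as np

def greedy_damper_placement(S, p):
    """Greedy heuristic for the max-min damper placement problem:
    choose p columns of the strain-energy matrix S (rows = modes,
    columns = truss members) so that the smallest row sum over the
    chosen columns is as large as possible."""
    chosen = []
    remaining = list(range(S.shape[1]))
    for _ in range(p):
        # add the column that maximises the worst (smallest) row sum
        best = max(remaining,
                   key=lambda c: S[:, chosen + [c]].sum(axis=1).min())
        chosen.append(best)
        remaining.remove(best)
    return chosen, S[:, chosen].sum(axis=1).min()

rng = np.random.default_rng(0)
S = rng.random((10, 40))          # hypothetical: 10 modes x 40 members
members, worst = greedy_damper_placement(S, p=5)
print(members, worst)
```

A tabu search, as in the paper, would improve on this by swapping chosen/unchosen columns while forbidding recently reversed moves, escaping the local optima where a one-pass greedy construction stops.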
Latent degradation indicators estimation and prediction: A Monte Carlo approach
NASA Astrophysics Data System (ADS)
Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin
2011-01-01
Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model gives a better fit than a state space model with linear and Gaussian assumptions.
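A bootstrap (SIR) particle filter is the standard Monte Carlo tool for exactly this setting: a nonlinear, non-Gaussian, irreversible latent state observed through an indirect indicator. The degradation model below (lognormal growth increments, square-root observation map) is a hypothetical illustration, not the paper's gearbox model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(T=60):
    """Hypothetical irreversible degradation: latent crack depth x
    grows by positive lognormal increments; the indirect indicator y
    observes x through a nonlinear map plus Gaussian noise."""
    x = np.empty(T); y = np.empty(T); xi = 0.1
    for t in range(T):
        xi += rng.lognormal(-3.0, 0.5)            # monotone growth
        x[t] = xi
        y[t] = np.sqrt(xi) + rng.normal(0, 0.05)  # indirect indicator
    return x, y

def particle_filter(y, n=2000):
    """Bootstrap particle filter: propagate particles through the
    state equation, weight by observation likelihood, resample.
    No linearity or Gaussianity assumptions on the state dynamics."""
    parts = np.full(n, 0.1)
    est = np.empty(len(y))
    for t, obs in enumerate(y):
        parts = parts + rng.lognormal(-3.0, 0.5, size=n)   # propagate
        w = np.exp(-0.5 * ((obs - np.sqrt(parts)) / 0.05) ** 2)
        w /= w.sum()
        est[t] = np.dot(w, parts)                          # posterior mean
        parts = rng.choice(parts, size=n, p=w)             # resample
    return est

x_true, y_obs = simulate()
x_hat = particle_filter(y_obs)
print(np.abs(x_hat - x_true).mean())   # mean tracking error
```

Because inspection-driven propagation is just a draw from the state transition, irregular inspection intervals can be handled by making the increment distribution depend on the elapsed time, addressing the fixed-interval limitation the abstract points out.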
Analysis of Direct Solar Illumination on the Backside of Space Station Solar Cells
NASA Technical Reports Server (NTRS)
Delleur, Ann M.; Kerslake, Thomas W.; Scheiman, David A.
1999-01-01
The International Space Station (ISS) is a complex spacecraft that will take several years to assemble in orbit. During many of the assembly and maintenance procedures, the space station's large solar arrays must be locked, which can significantly reduce power generation. To date, power generation analyses have not included power generation from the backside of the solar cells, in a desire to produce a conservative analysis. This paper describes the testing of ISS solar cell backside power generation, analytical modeling, and analysis results for an ISS assembly mission.
Analytical solutions of the space-time fractional Telegraph and advection-diffusion equations
NASA Astrophysics Data System (ADS)
Tawfik, Ashraf M.; Fichtner, Horst; Schlickeiser, Reinhard; Elhanbaly, A.
2018-02-01
The aim of this paper is to develop a fractional derivative model of energetic particle transport for both uniform and non-uniform large-scale magnetic field by studying the fractional Telegraph equation and the fractional advection-diffusion equation. Analytical solutions of the space-time fractional Telegraph equation and space-time fractional advection-diffusion equation are obtained by use of the Caputo fractional derivative and the Laplace-Fourier technique. The solutions are given in terms of Fox's H function. As an illustration they are applied to the case of solar energetic particles.
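A standard form of the space-time fractional telegraph equation of the type studied here, with Caputo time derivatives of order α and a symmetric (Riesz) space derivative of order β, can be sketched as follows; the coefficients τ and κ are generic placeholders, not the paper's notation:

```latex
\frac{\partial^{2\alpha} f}{\partial t^{2\alpha}}
  + \tau \, \frac{\partial^{\alpha} f}{\partial t^{\alpha}}
  = \kappa \, \frac{\partial^{\beta} f}{\partial |x|^{\beta}},
\qquad 0 < \alpha \le 1, \quad 1 < \beta \le 2 .
```

Applying a Laplace transform in t and a Fourier transform in x reduces this to an algebraic equation in transform space; inverting the resulting expression yields series that sum to Fox H-functions, which is why the solutions are expressed in that form. For α = 1 and β = 2 the classical telegraph equation is recovered.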
NASA Technical Reports Server (NTRS)
Glaese, John R.; McDonald, Emmett J.
2000-01-01
Orbiting space solar power systems are currently being investigated for possible flight in the 2015-2020 time frame and later. Such space solar power (SSP) satellites must be extremely large in order to make practical the process of collection, conversion to microwave radiation, and reconversion to electrical power at earth stations or at remote locations in space. These large structures are expected to be very flexible, presenting unique problems associated with their dynamics and control. The purpose of this project is to apply the TREETOPS multi-body dynamics analysis and simulation program (with the expanded capabilities developed in the previous activity) to investigate the control problems associated with the integrated symmetrical concentrator (ISC) conceptual SSP system. SSP satellites are, as noted, large orbital systems having many bodies (perhaps hundreds) with flexible arrays, operating in an orbiting environment where the non-uniform gravitational forces may be the major load producers on the structure, so that a high-fidelity gravity model is required. The current activity arises from our NRA8-23 SERT proposal. Funding, as a supplemental selection, has been provided by NASA with reduced scope from that originally proposed.
Analysis and trade-off studies of large lightweight mirror structures. [large space telescope
NASA Technical Reports Server (NTRS)
Soosaar, K.; Grin, R.; Ayer, F.
1975-01-01
A candidate mirror, hexagonally lightweighted, is analyzed under various loadings using as complete a procedure as possible. Successive simplifications are introduced and compared to an original analysis. A model which is a reasonable compromise between accuracy and cost is found and is used for making trade-off studies of the various structural parameters of the lightweighted mirror.
Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand
2018-01-01
The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very close angular separation (a 10^10 star-to-planet flux ratio at a few 0.1"). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as those studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. This analytical model can be applied both to static and dynamic modes, in either monochromatic or broadband light. It retires the need for the end-to-end Monte Carlo simulations that are otherwise needed to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing errors and aberrations.
Risk transfer modeling among hierarchically associated stakeholders in development of space systems
NASA Astrophysics Data System (ADS)
Henkle, Thomas Grove, III
This research develops an empirically derived cardinal model that prescribes the handling and transfer of risks between organizations with hierarchical relationships. Descriptions of mission risk events, risk attitudes, and conditions for risk transfer are determined for client and underwriting entities associated with the acquisition, production, and deployment of space systems. The hypothesis anticipates that large client organizations should be able to assume larger dollar-value risks of a program in comparison to smaller organizations, even though many current risk transfer arrangements via space insurance violate this hypothesis. A literature survey covers conventional and current risk assessment methods, current techniques used in the satellite industry for complex system development, cardinal risk modeling, and relevant aspects of utility theory. Data gathered from the open literature on demonstrated launch vehicle and satellite in-orbit reliability, annual space insurance premiums and losses, and ground fatalities and range damage associated with satellite launch activities are presented. Empirically derived models are developed for the risk attitudes of space system clients and third-party underwriters associated with satellite system development and deployment. Two application topics for risk transfer are examined: the client-underwriter relationship on assumption or transfer of risks associated with first-year mission success, and statutory risk transfer agreements between space insurance underwriters and the US government to promote growth in both the commercial client and underwriting industries. Results indicate that client entities with wealth of at least an order of magnitude above satellite project costs should retain risks to first-year mission success despite present trends.
Furthermore, large client entities such as the US government should never pursue risk transfer via insurance under previously demonstrated probabilities of mission success; potential savings may reasonably exceed multiple tens of millions of dollars per space project. Additional results indicate that the current US government statutory arrangements on risk sharing with underwriting entities appear reasonable with respect to their stated objectives. This research combines aspects of multiple disciplines including risk management, decision theory, utility theory, and systems architecting. It also demonstrates the development of a more general theory for prescribing risk transfer criteria between distinct but hierarchically associated entities involved in complex system development, with applicability to a variety of technical domains.
Geological implications of impacts of large asteroids and comets on the earth
NASA Technical Reports Server (NTRS)
Silver, L. T. (Editor); Schultz, P. H. (Editor)
1982-01-01
The present conference discusses such topics as large object fluxes in near-earth space and the probabilities of terrestrial impacts, the geological record of impacts, dynamics modeling for large body impacts on continents and oceans, physical, chemical, and biological models of large impacts' atmospheric effects, dispersed impact ejecta and their signatures, general considerations concerning mass biological extinctions, the Cretaceous/Tertiary boundary event, geochemical signatures in the stratigraphic record, and other phanerozoic events. Attention is given to terrestrial impact rates for long- and short-period comets, estimates of crater size for large body impact, a first-order estimate of shock heating and vaporization in oceanic impacts, atmospheric effects in the first few minutes after an impact, a feasibility test for biogeographic extinction, and the planktonic and dinosaur extinctions.
SP_Ace: Stellar Parameters And Chemical abundances Estimator
NASA Astrophysics Data System (ADS)
Boeche, C.; Grebel, E. K.
2018-05-01
SP_Ace (Stellar Parameters And Chemical abundances Estimator) estimates the stellar parameters Teff, log g, [M/H], and elemental abundances. It employs 1D stellar atmosphere models in Local Thermodynamic Equilibrium (LTE). The code is highly automated and suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). A web service for calculating these values with the software is also available.
Kim, Sang-Woo; Nishimura, Jun; Tsuchiya, Asato
2012-01-06
We reconsider the matrix model formulation of type IIB superstring theory in (9+1)-dimensional space-time. Unlike the previous works in which the Wick rotation was used to make the model well defined, we regularize the Lorentzian model by introducing infrared cutoffs in both the spatial and temporal directions. Monte Carlo studies reveal that the two cutoffs can be removed in the large-N limit and that the theory thus obtained has no parameters other than one scale parameter. Moreover, we find that three out of nine spatial directions start to expand at some "critical time," after which the space has SO(3) symmetry instead of SO(9).
Advantages of Fast Ignition Scenarios with Two Hot Spots for Space Propulsion Systems
NASA Astrophysics Data System (ADS)
Shmatov, M. L.
The use of fast ignition scenarios with attempts to create two hot spots in one blob of compressed thermonuclear fuel (briefly, scenarios with two hot spots) in space propulsion systems is proposed. A model is presented which predicts that, for such scenarios, the probability pf of failure of ignition of a thermonuclear microexplosion can be significantly less than that for similar scenarios with attempts to create one hot spot in one blob of compressed fuel. For space propulsion systems consuming a relatively large amount of propellant, a decrease in pf due to the choice of the scenario with two hot spots can result in a large, for example, two-fold, increase in the payload mass. Other advantages of the scenarios with two hot spots, and some problems related to them, are considered.
N-point statistics of large-scale structure in the Zel'dovich approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
2014-06-01
Motivated by the results presented in a companion paper, here we give a simple analytical expression for the matter n-point functions in the Zel'dovich approximation (ZA) both in real and in redshift space (including the angular case). We present numerical results for the 2-dimensional redshift-space correlation function, as well as for the equilateral configuration for the real-space 3-point function. We compare those to the tree-level results. Our analysis is easily extendable to include Lagrangian bias, as well as higher-order perturbative corrections to the ZA. The results should be especially useful for modelling probes of large-scale structure in the linear regime, such as the Baryon Acoustic Oscillations. We make the numerical code used in this paper freely available.
Controls on Variations of Surface Energy, Water, and Carbon Budgets within Large-Scale Amazon Basin
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Cooper, Harry J.; Grose, Andrew; Gu, Jiu-Jing; Norman, John; daRocha, Humberto R.; Dias, Pedro Silva
2002-01-01
A key research focus of the LBA Research Program is understanding the space-time variations in interlinked surface energy, water, and carbon budgets, the controls on these variations, and the implications of these controls on the carbon sequestering capacity of the large scale forest-pasture system that dominates the Amazônia landscape. Quantification of these variations and controls is investigated by a combination of in situ measurements, remotely sensed measurements from space, and a realistically forced hydrometeorological model coupled to a carbon assimilation model, capable of simulating details within the surface energy and water budgets along with the principal processes of photosynthesis and respiration. Herein we describe the results of an investigation concerning the space-time controls of carbon sources and sinks distributed over the large scale Amazon basin. The results are derived from a carbon-water-energy budget retrieval system for the large scale Amazon basin, which uses a coupled carbon assimilation-hydrometeorological model as an integrating system, forced by both in situ meteorological measurements and remotely sensed radiation and precipitation fluxes obtained from a combination of GOES, SSM/I, TOMS, and TRMM satellite measurements. Results include validation of (a) retrieved surface radiation and precipitation fluxes based on 30-min averaged surface measurements taken at Ji-Paraná in Rondônia and Manaus in Amazonas, and (b) modeled sensible, latent, and CO2 fluxes based on tower measurements taken at Reserva Jaru, Manaus, and Fazenda Nossa Senhora. The space-time controls on carbon sequestration are partitioned into sets of factors classified by: (1) above canopy meteorology, (2) incoming surface radiation, (3) precipitation interception, and (4) indigenous stomatal processes varied over the different land covers of pristine rainforest, partially and fully logged rainforests, and pasture lands.
These are the principal meteorological, thermodynamical, hydrological, and biophysical control paths which perturb net carbon fluxes and sequestration, produce time-space switching of carbon sources and sinks, undergo modulation through atmospheric boundary layer feedbacks, and respond to any discontinuous intervention on the landscape itself, such as that produced by human intervention in converting rainforest to pasture or conducting selective/clearcut logging operations. The results demonstrate how the relative carbon sequestration capacity of the Amazonian ecosystem responds to these controls, and how interpretation of space-time heterogeneities in carbon sequestration depends on a fairly exact quantification of the interacting non-linear properties of photosynthesis in response to incoming solar flux, air-canopy temperatures, and leaf water interception, and of soil respiration in response to upper layer soil temperature and water content. The results also show how the interpretation of the control processes is highly sensitive to the scales at which the surface fluxes are analyzed.
Lehtola, Susi; Parkhill, John; Head-Gordon, Martin
2016-10-07
Novel implementations based on dense tensor storage are presented here for the singlet-reference perfect quadruples (PQ) [J. A. Parkhill et al., J. Chem. Phys. 130, 084101 (2009)] and perfect hextuples (PH) [J. A. Parkhill and M. Head-Gordon, J. Chem. Phys. 133, 024103 (2010)] models. The methods are obtained as block decompositions of conventional coupled-cluster theory that are exact for four electrons in four orbitals (PQ) and six electrons in six orbitals (PH), but that can also be applied to much larger systems. PQ and PH have storage requirements that scale as the square, and as the cube, of the number of active electrons, respectively, and exhibit quartic scaling of the computational effort for large systems. Applications of the new implementations are presented for full-valence calculations on linear polyenes (CnHn+2), which highlight the excellent computational scaling of the present implementations that can routinely handle active spaces of hundreds of electrons. The accuracy of the models is studied in the π space of the polyenes, in hydrogen chains (H50), and in the π space of polyacene molecules. In all cases, the results compare favorably to density matrix renormalization group values. With the novel implementation of PQ, active spaces of 140 electrons in 140 orbitals can be solved in a matter of minutes on a single core workstation, and the relatively low polynomial scaling means that very large systems are also accessible using parallel computing.
Analytical Model for Mean Flow and Fluxes of Momentum and Energy in Very Large Wind Farms
NASA Astrophysics Data System (ADS)
Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando
2018-01-01
As wind-turbine arrays continue to be installed and the array size continues to grow, there is an increasing need to represent very large wind-turbine arrays in numerical weather prediction models, for wind-farm optimization, and for environmental assessment. We propose a simple analytical model for boundary-layer flow in fully-developed wind-turbine arrays, based on the concept of sparsely-obstructed shear flows. In describing the vertical distribution of the mean wind speed and shear stress within wind farms, our model estimates the mean kinetic energy harvested from the atmospheric boundary layer, and determines the partitioning between the wind power captured by the wind turbines and that absorbed by the underlying land or water. A length scale based on the turbine geometry, spacing, and performance characteristics is able to estimate the asymptotic limit for the fully-developed flow through wind-turbine arrays, and thereby determine if the wind-farm flow is fully developed for very large turbine arrays. Our model is validated using data collected in controlled wind-tunnel experiments, and its usefulness for the prediction of wind-farm performance and optimization of turbine-array spacing is described. Our model may also be useful for assessing the extent to which the extraction of wind power affects the land-atmosphere coupling or air-water exchange of momentum, with implications for the transport of heat, moisture, trace gases such as carbon dioxide, methane, and nitrous oxide, and ecologically important oxygen.
NASA Astrophysics Data System (ADS)
Garmay, Yu.; Shvetsov, A.; Karelov, D.; Lebedev, D.; Radulescu, A.; Petukhov, M.; Isaev-Ivanov, V.
2012-02-01
Based on X-ray crystallographic data available at the Protein Data Bank, we have built molecular dynamics (MD) models of the homologous recombinases RecA from E. coli and D. radiodurans. The functional form of the RecA enzyme, which is known to be a long helical filament, was approximated by a trimer, simulated in a periodic water box. The MD trajectories were analyzed in terms of large-scale conformational motions that could be detectable by neutron and X-ray scattering techniques. The analysis revealed that large-scale RecA monomer dynamics can be described in terms of relative motions of 7 subdomains. Motion of the C-terminal domain was the major contributor to the overall dynamics of the protein. Principal component analysis (PCA) of the MD trajectories in the atom coordinate space showed that rotation of the C-domain is correlated with the conformational changes in the central domain and the N-terminal domain, which forms the monomer-monomer interface. Thus, even though the C-terminal domain is relatively far from the interface, its orientation is correlated with large-scale filament conformation. PCA of the trajectories in the main chain dihedral angle coordinate space indicates the co-existence of several different large-scale conformations of the modeled trimer. In order to clarify the relationship of independent domain orientation with large-scale filament conformation, we have performed analysis of independent domain motion and its implications on the filament geometry.
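PCA of an MD trajectory reduces, in essence, to a singular value decomposition of the mean-centered frame matrix. The sketch below uses synthetic data standing in for the RecA trajectories (array sizes, the planted mode, and noise level are invented for illustration) and recovers a single dominant collective motion:

```python
import numpy as np

def trajectory_pca(frames):
    """PCA of a trajectory: frames has shape (n_frames, n_coords).
    Returns per-mode variances and the principal modes (as rows of vt)."""
    centered = frames - frames.mean(axis=0)        # subtract the mean structure
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s**2 / (len(frames) - 1)           # covariance eigenvalues
    return variances, vt

# synthetic "trajectory": one dominant collective motion plus small noise
rng = np.random.default_rng(1)
mode = rng.normal(size=30)
mode /= np.linalg.norm(mode)
frames = np.outer(np.sin(np.linspace(0.0, 6.0, 200)), 5.0 * mode)
frames += 0.1 * rng.normal(size=frames.shape)

variances, modes = trajectory_pca(frames)
print(variances[0] / variances.sum())              # first PC dominates
```

For real MD data the frames would first be superposed on a reference structure to remove rigid-body motion; working in dihedral-angle coordinates, as the abstract describes, sidesteps that step entirely.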
Large Space Systems Technology, 1979. [antenna and space platform systems conference]
NASA Technical Reports Server (NTRS)
Ward, J. C., Jr. (Compiler)
1980-01-01
Items of technology and developmental efforts in support of the large space systems technology programs are described. The major areas of interest are large antenna systems, large space platform systems, and activities that support both antenna and platform systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Ho Jun, E-mail: tiger.anima@gmail.com; Yang, Wonkyun; Joo, Junghoon
Semiconductor fabrication often requires the deposition of hydrogenated silicon nitride (SiNxHy) film using SiH4/NH3/N2/He capacitively coupled plasma (CCP) discharge. As analysis of the discharge geometry is essential to understanding CCP deposition, the effect of electrode spacing on the two-dimensional distributions of electrons, ions, and metastable and radical molecules was analyzed numerically using a fluid model. The simulation shows that the spatial variations in the ionization rates near the sheath become more obvious as the electrode spacing increases. In addition, as molecule-molecule gas-phase reactions are significantly affected by the local residence time, large electrode spacings are associated with significant volumetric losses for positive ions. Consequently, an increase of the electrode spacing leads axial density profiles of ions to change from bell shaped to double humped. However, NH4+ persistently maintains a bell-shaped axial density profile regardless of the degree of electrode spacing. We set the mole fraction of NH3 to only 1% of the total flow at the inlet, but NH4+ is the most abundant positive ion at the large electrode spacings. As the gas flow can transport the radicals around the space between the electrodes, we found that the radical density distribution shifts toward the grounded electrode. The shift becomes pronounced as the electrode spacing increases. Finally, to validate our model, we compared the calculated deposition rate profile with the experimental data obtained along the wafer radius. According to our numerical results, the SiNxHy deposition rate decreases by approximately 16% when the electrode spacing increases from 9 to 20 mm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paparó, M.; Benkő, J. M.; Hareter, M.
A sequence search method was developed to search for regular frequency spacing in δ Scuti stars through visual inspection and an algorithmic search. We searched for sequences of quasi-equally spaced frequencies, containing at least four members per sequence, in 90 δ Scuti stars observed by CoRoT. We found an unexpectedly large number of independent series of regular frequency spacing in 77 δ Scuti stars (from one to eight sequences) in the non-asymptotic regime. We introduce the sequence search method, presenting the sequences and echelle diagram of CoRoT 102675756 and the structure of the algorithmic search. Four sequences (echelle ridges) were found in the 5-21 d^-1 region, where the pairs of the sequences are shifted (between 0.5 and 0.59 d^-1) by twice the value of the estimated rotational splitting frequency (0.269 d^-1). The general conclusions for the whole sample are also presented in this paper. The statistics of the spacings derived by the sequence search method, by FT (Fourier transform of the frequencies), and the statistics of the shifts are also compared. In many stars more than one almost equally valid spacing appeared. The model frequencies of FG Vir and their rotationally split components were used to formulate the possible explanation that one spacing is the large separation while the other is the sum of the large separation and the rotational frequency. In CoRoT 102675756, the two spacings (2.249 and 1.977 d^-1) are in better agreement with the sum of a possible 1.710 d^-1 large separation and two or one times, respectively, the value of the rotational frequency.
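The core of such a search, stripped of the paper's statistical machinery, is finding chains of at least four frequencies separated by nearly the same spacing. The minimal greedy sketch below is not the authors' algorithm; the frequency list, spacing, and tolerance are made up for illustration:

```python
import numpy as np

def find_spaced_sequences(freqs, spacing, tol, min_len=4):
    """Greedily grow chains of frequencies separated by ~spacing
    (within +/- tol); keep chains with at least min_len members."""
    freqs = np.sort(np.asarray(freqs, dtype=float))
    sequences = []
    used = set()
    for i, f0 in enumerate(freqs):
        if i in used:
            continue
        chain = [float(f0)]
        while True:
            target = chain[-1] + spacing
            j = int(np.argmin(np.abs(freqs - target)))   # nearest candidate
            if abs(freqs[j] - target) <= tol and freqs[j] > chain[-1]:
                chain.append(float(freqs[j]))
                used.add(j)
            else:
                break
        if len(chain) >= min_len:
            sequences.append(chain)
    return sequences

# toy spectrum with an embedded ~2.25 d^-1 ridge plus unrelated peaks
peaks = [5.0, 7.26, 9.49, 11.75, 6.1, 13.3, 18.2]
seqs = find_spaced_sequences(peaks, spacing=2.25, tol=0.05)
print(seqs)   # one chain: [5.0, 7.26, 9.49, 11.75]
```

A real implementation would scan a grid of candidate spacings and score each one, which is roughly what the visual echelle-diagram inspection does by eye.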
Dynamics in the Parameter Space of a Neuron Model
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2012-06-01
Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that depending on the combination of parameters, a typical scenario can be preserved: for some choice of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.
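Such diagrams are built by estimating the largest Lyapunov exponent at each point of a parameter grid and color-coding its sign. As a minimal stand-in for the four-dimensional Hindmarsh-Rose flow, the sketch below computes the same quantity for the one-parameter logistic map (an illustrative substitute, not the paper's model), where negative values mark periodic regions and positive values mark chaos:

```python
import numpy as np

def lyapunov_logistic(r, n_transient=500, n_iter=2000):
    """Largest Lyapunov exponent of x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1-2x)|."""
    x = 0.2
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        total += np.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)  # guard log(0)
    return total / n_iter

# scan a parameter line: periodic windows give negative exponents
for r in (2.5, 3.2, 3.5, 4.0):
    print(f"r = {r:.2f}   lambda = {lyapunov_logistic(r):+.3f}")
```

A two-parameter analogue evaluates the exponent on an (a, b) grid of the map or flow of interest, which is how comb-shaped chaotic regions become visible in the plane.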
Meteor Impact Model in the new Space Power Chambers
1962-09-21
S-65 Meteor Impact Model set up in the former Altitude Wind Tunnel at the National Aeronautics and Space Administration (NASA) Lewis Research Center just days after the September 12, 1962 rededication of the facility as the Space Power Chamber. Although larger test chambers would later be constructed, the rapid conversion of the wind tunnel into two space tanks allowed the facility to play a vital role in the early years of the space program. The eastern section of the tunnel, seen here, became a vacuum chamber capable of simulating 100 miles altitude. This space tank was envisioned for the study of small satellites like this one. The transfer of the Centaur Program to Lewis one month later, however, permanently changed this mission. NASA was undertaking an in-depth study at the time on the effect of micrometeoroids on satellites. Large space radiators were particularly vulnerable to damage from small particles of space debris. In order to determine the hazard from meteoroids, researchers had to define the flux rate relative to the mass and the velocity distribution, because the greater the mass or the velocity of a meteoroid, the greater the damage.
Quasi-equilibria in reduced Liouville spaces.
Halse, Meghan E; Dumez, Jean-Nicolas; Emsley, Lyndon
2012-06-14
The quasi-equilibrium behaviour of isolated nuclear spin systems in full and reduced Liouville spaces is discussed. We focus in particular on the reduced Liouville spaces used in the low-order correlations in Liouville space (LCL) simulation method, a restricted-spin-space approach to efficiently modelling the dynamics of large networks of strongly coupled spins. General numerical methods for the calculation of quasi-equilibrium expectation values of observables in Liouville space are presented. In particular, we treat the cases of a time-independent Hamiltonian, a time-periodic Hamiltonian (with and without stroboscopic sampling) and powder averaging. These quasi-equilibrium calculation methods are applied to the example case of spin diffusion in solid-state nuclear magnetic resonance. We show that there are marked differences between the quasi-equilibrium behaviour of spin systems in the full and reduced spaces. These differences are particularly interesting in the time-periodic-Hamiltonian case, where simulations carried out in the reduced space demonstrate ergodic behaviour even for small spin systems (as few as five homonuclei). The implications of this ergodic property for the success of the LCL method in modelling the dynamics of spin diffusion in magic-angle spinning experiments of powders are discussed.
Effect of normalized plasma frequency on electron phase-space orbits in a free-electron laser
NASA Astrophysics Data System (ADS)
Ji, Yu-Pin; Wang, Shi-Jian; Xu, Jing-Yue; Xu, Yong-Gen; Liu, Xiao-Xu; Lu, Hong; Huang, Xiao-Li; Zhang, Shi-Chang
2014-02-01
Irregular phase-space orbits of the electrons are harmful to the electron-beam transport quality and hence deteriorate the performance of a free-electron laser (FEL). In previous literature, it was demonstrated that irregularity of the electron phase-space orbits could be caused in several ways, such as by varying the wiggler amplitude or inducing sidebands. Based on a Hamiltonian model with a set of self-consistent differential equations, it is shown in this paper that the normalized plasma frequency of the electron beam not only couples the electron motion with the FEL wave, which results in the evolution of the FEL wave field and possible power saturation at a large beam current, but also causes irregularity of the electron phase-space orbits when the normalized plasma frequency is sufficiently large, even if the initial energy of the electron is equal to the synchronous energy or the FEL wave does not reach power saturation.
Realtime Space Weather Forecasts Via Android Phone App
NASA Astrophysics Data System (ADS)
Crowley, G.; Haacke, B.; Reynolds, A.
2010-12-01
For the past several years, ASTRA has run a first-principles global 3-D fully coupled thermosphere-ionosphere model in real-time for space weather applications. The model is the Thermosphere-Ionosphere Mesosphere Electrodynamics General Circulation Model (TIMEGCM). ASTRA also runs the Assimilative Mapping of Ionospheric Electrodynamics (AMIE) in real-time. Using AMIE to drive the high latitude inputs to the TIMEGCM produces high fidelity simulations of the global thermosphere and ionosphere. These simulations can be viewed on the Android Phone App developed by ASTRA. The SpaceWeather app for the Android operating system is free and can be downloaded from the Google Marketplace. We present the current status of realtime thermosphere-ionosphere space-weather forecasting and discuss the way forward. We explore some of the issues in maintaining real-time simulations with assimilative data feeds in a quasi-operational setting. We also discuss some of the challenges of presenting large amounts of data on a smartphone. The ASTRA SpaceWeather app includes the broadest and most unique range of space weather data yet to be found on a single smartphone app. This is a one-stop-shop for space weather and the only app where you can get access to ASTRA's real-time predictions of the global thermosphere and ionosphere, high latitude convection and geomagnetic activity. Because of the phone's GPS capability, users can obtain location specific vertical profiles of electron density, temperature, and time-histories of various parameters from the models. The SpaceWeather app has over 9000 downloads, 30 reviews, and a following of active users. It is clear that real-time space weather on smartphones is here to stay, and must be included in planning for any transition to operational space-weather use.
NASA Technical Reports Server (NTRS)
McElwain, Michael; Van Gorkom, Kyle; Bowers, Charles W.; Carnahan, Timothy M.; Kimble, Randy A.; Knight, J. Scott; Lightsey, Paul; Maghami, Peiman G.; Mustelier, David; Niedner, Malcolm B.;
2017-01-01
The James Webb Space Telescope (JWST) is a large (6.5 m) cryogenic segmented aperture telescope with science instruments that cover the near- and mid-infrared from 0.6-27 microns. The large aperture not only provides high photometric sensitivity, but it also enables high angular resolution across the bandpass, with a diffraction limited point spread function (PSF) at wavelengths longer than 2 microns. The JWST PSF quality and stability are intimately tied to the science capabilities as it is convolved with the astrophysical scene. However, the PSF evolves at a variety of timescales based on telescope jitter and thermal distortion as the observatory attitude is varied. We present the image quality and stability requirements, recent predictions from integrated modeling, measurements made during ground-based testing, and performance characterization activities that will be carried out as part of the commissioning process.
NASA Technical Reports Server (NTRS)
Soula, Serge
1994-01-01
The evolution of the vertical electric field profile deduced from simultaneous field measurements at several levels below a thundercloud shows the development of a space charge layer at least up to 600 m. The average charge density in the whole layer from 0 m to 600 m can reach about 1 nC m^-3. The ions are generated at the ground by the corona effect, and the production rate is evaluated with a new method from the comparison of field evolutions at the ground and at altitude after a lightning flash. The modeling of the relevant processes shows that ground corona accounts for the observed field evolutions and that the aerosol particle concentration has a very large effect on the evolution of corona ions. However, with a realistic value for this concentration, a large amount of ground corona ions reach the level of 600 m.
A Historical Perspective on Dynamics Testing at the Langley Research Center
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Kvaternik, Raymond G.
2000-01-01
The history of structural dynamics testing research over the past four decades at the Langley Research Center of the National Aeronautics and Space Administration is reviewed. Beginning in the early sixties, Langley investigated several scale model and full-scale spacecraft including the NIMBUS and various concepts for Apollo and Viking landers. Langley engineers pioneered the use of scaled models to study the dynamics of launch vehicles including Saturn I, Saturn V, and Titan III. In the seventies, work emphasized the Space Shuttle and advanced test and data analysis methods. In the eighties, the possibility of delivering large structures to orbit by the Space Shuttle shifted focus towards understanding the interaction of flexible space structures with attitude control systems. Although Langley has maintained a tradition of laboratory-based research, some flight experiments were supported. This review emphasizes work that, in some way, advanced the state of knowledge at the time.
NASA Astrophysics Data System (ADS)
Raman, Kumar; Papanikolaou, Stefanos; Fradkin, Eduardo
2007-03-01
We construct a two-dimensional microscopic model of interacting quantum dimers that displays an infinite number of periodic striped phases in its T=0 phase diagram. The phases form an incomplete devil's staircase and the period becomes arbitrarily large as the staircase is traversed. The Hamiltonian has purely short-range interactions, does not break any symmetries, and is generic in that it does not involve the fine tuning of a large number of parameters. Our model, a quantum mechanical analog of the Pokrovsky-Talapov model of fluctuating domain walls in two-dimensional classical statistical mechanics, provides a mechanism by which striped phases with periods large compared to the lattice spacing can, in principle, form in frustrated quantum magnetic systems with only short-ranged interactions and no explicitly broken symmetries. Please see cond-mat/0611390 for more details.
Solar array electrical performance assessment for Space Station Freedom
NASA Technical Reports Server (NTRS)
Smith, Bryan K.; Brisco, Holly
1993-01-01
Electrical power for Space Station Freedom will be generated by large photovoltaic arrays with a beginning of life power requirement of 30.8 kW per array. The solar arrays will operate in a Low Earth Orbit (LEO) over a design life of fifteen years. This paper provides an analysis of the predicted solar array electrical performance over the design life and presents a summary of supporting analysis and test data for the assigned model parameters and performance loss factors. Each model parameter and loss factor is assessed based upon program requirements, component analysis, and test data to date. A description of the LMSC performance model, future test plans, and predicted performance ranges are also given.
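The structure of such a prediction is a beginning-of-life power multiplied by compounded and one-time loss factors. The minimal sketch below uses the 30.8 kW figure from the paper, but the degradation rate and loss factors are invented for illustration and are not the LMSC model values:

```python
def eol_power(bol_kw, annual_degradation, years, one_time_losses):
    """End-of-life array power: BOL power, compounded annual degradation
    over the design life, then one-time loss factors applied."""
    power = bol_kw * (1.0 - annual_degradation) ** years
    for loss in one_time_losses:
        power *= (1.0 - loss)
    return power

# 30.8 kW BOL over a 15-year design life; degradation/loss values hypothetical
print(round(eol_power(30.8, 0.011, 15, [0.02, 0.015]), 2))
```

In a real assessment each factor (radiation damage, thermal cycling, contamination, micrometeoroid damage, etc.) would be separately justified by the component analysis and test data the paper describes.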
Systematics of first 2+ state g factors around mass 80
NASA Astrophysics Data System (ADS)
Mertzimekis, T. J.; Stuchbery, A. E.; Benczer-Koller, N.; Taylor, M. J.
2003-11-01
The systematics of the first 2+ state g factors in the mass 80 region are investigated in terms of an IBM-II analysis, a pairing-corrected geometrical model, and a shell-model approach. Subshell closure effects at N=38 and overall trends were examined using IBM-II. A large-space shell-model calculation was successful in describing the behavior for N=48 and N=50 nuclei, where single-particle features are prominent. A schematic truncated-space calculation was applied to the lighter isotopes. The variations of the effective boson g factors are discussed in connection with the role of F-spin breaking, and comparisons are made between the mass 80 and mass 180 regions.
An investigation of the use of temporal decomposition in space mission scheduling
NASA Technical Reports Server (NTRS)
Bullington, Stanley E.; Narayanan, Venkat
1994-01-01
This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
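The second heuristic, spreading a model's required performances evenly across timeline segments, can be sketched in a few lines. The activity names and performance counts below are hypothetical, not from the paper's Space Station or Spacelab test cases:

```python
def assign_evenly(activities, n_segments):
    """Assign performance k of an activity requiring `count` performances
    to segment k * n_segments // count, spreading them evenly in time."""
    segments = [[] for _ in range(n_segments)]
    for name, count in activities.items():
        for k in range(count):
            segments[k * n_segments // count].append(name)
    return segments

# hypothetical mission: activity -> number of required performances
mission = {"crystal_growth": 4, "exercise": 8, "earth_obs": 2}
for i, segment in enumerate(assign_evenly(mission, 4)):
    print(i, segment)
```

Each segment is then scheduled in detail on its own, which is the incremental, segment-by-segment approach that the mathematical model formalizes.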
Small Fish Species as Powerful Model Systems to Study Vertebrate Physiology in Space
NASA Astrophysics Data System (ADS)
Muller, M.; Aceto, J.; Dalcq, J.; Alestrom, P.; Nourizadeh-Lillabadi, R.; Goerlich, R.; Schiller, V.; Winkler, C.; Renn, J.; Eberius, M.; Slenzka, K.
2008-06-01
Small fish models, mainly zebrafish (Danio rerio) and medaka (Oryzias latipes), have been used for many years as powerful model systems for vertebrate developmental biology. Moreover, these species are increasingly recognized as valuable systems to study vertebrate physiology, pathology, pharmacology and toxicology, including in particular bone physiology. The biology of small fishes presents many advantages, such as transparency of the embryos, external and rapid development, small size and easy reproduction. Further characteristics are particularly useful for space research or for large scale screening approaches. Finally, many technologies for easily characterizing bones are available. Our objective is to investigate the changes induced by microgravity in small fish. By combining whole genome analysis (microarray, DNA methylation, chromatin modification) with live imaging of selected genes in transgenic animals, a comprehensive and integrated characterization of physiological changes in space could be gained, especially concerning bone physiology.
Aerospace applications of SINDA/FLUINT at the Johnson Space Center
NASA Technical Reports Server (NTRS)
Ewert, Michael K.; Bellmore, Phillip E.; Andish, Kambiz K.; Keller, John R.
1992-01-01
SINDA/FLUINT has been found to be a versatile code for modeling aerospace systems involving single or two-phase fluid flow and all modes of heat transfer. Several applications of SINDA/FLUINT are described in this paper. SINDA/FLUINT is being used extensively to model the single phase water loops and the two-phase ammonia loops of the Space Station Freedom active thermal control system (ATCS). These models range from large integrated system models with multiple submodels to very detailed subsystem models. An integrated Space Station ATCS model has been created with ten submodels representing five water loops, three ammonia loops, a Freon loop and a thermal submodel representing the air loop. The model, which has approximately 800 FLUINT lumps and 300 thermal nodes, is used to determine the interaction between the multiple fluid loops which comprise the Space Station ATCS. Several detailed models of the flow-through radiator subsystem of the Space Station ATCS have been developed. One model, which has approximately 70 FLUINT lumps and 340 thermal nodes, provides a representation of the ATCS low temperature radiator array with two fluid loops connected only by conduction through the radiator face sheet. The detailed models are used to determine parameters such as radiator fluid return temperature, fin efficiency, flow distribution and total heat rejection for the baseline design as well as proposed alternate designs. SINDA/FLUINT has also been used as a design tool for several systems using pressurized gasses. One model examined the pressurization and depressurization of the Space Station airlock under a variety of operating conditions including convection with the side walls and internal cooling. Another model predicted the performance of a new generation of manned maneuvering units. This model included high pressure gas depressurization, internal heat transfer and supersonic thruster equations. 
The results of both models were used to size components, such as the heaters and gas bottles and also to point to areas where hardware testing was needed.
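The lumped-parameter network idea behind SINDA-style thermal models can be illustrated with a minimal sketch. The two-node network, conductance, capacitance, and load values below are invented for illustration; they bear no relation to the 800-lump Space Station models described above.

```python
import numpy as np

# Minimal lumped-parameter thermal network (illustrative only):
#   dT_i/dt = (sum_j G_ij (T_j - T_i) + Q_i) / C_i

def step_network(T, G, C, Q, dt):
    """Explicit-Euler update of node temperatures.
    T: (n,) temperatures [K]; G: (n,n) conductances [W/K];
    C: (n,) capacitances [J/K]; Q: (n,) heat loads [W]."""
    flow = G @ T - G.sum(axis=1) * T  # net conductive heat into each node
    return T + dt * (flow + Q) / C

T = np.array([300.0, 250.0])             # two nodes at different temperatures
G = np.array([[0.0, 2.0], [2.0, 0.0]])   # a single 2 W/K link between them
C = np.array([500.0, 500.0])
Q = np.zeros(2)                          # no external heat loads
for _ in range(10000):                   # march 1000 s of transient
    T = step_network(T, G, C, Q, 0.1)
```

With no loads, the two nodes relax toward their common mean of 275 K, and total thermal energy is conserved by the antisymmetric conduction terms.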
NASA Astrophysics Data System (ADS)
Cucinotta, Francis
Uncertainties in estimating health risks from exposures to galactic cosmic rays (GCR), comprised of protons and high-energy and charge (HZE) nuclei, are an important limitation to long-duration space travel. HZE nuclei produce both qualitative and quantitative differences in biological effects compared to terrestrial radiation, leading to large uncertainties in predicting risks to humans. Our NASA Space Cancer Risk Model-2012 (NSCR-2012) for estimating lifetime cancer risks from space radiation included several new features compared to earlier models from the National Council on Radiation Protection and Measurements (NCRP) used at NASA. New features of NSCR-2012 included the introduction of NASA-defined radiation quality factors based on track structure concepts, a Bayesian analysis of the dose and dose-rate reduction effectiveness factor (DDREF) and its uncertainty, and the use of a never-smoker population to represent astronauts. However, NSCR-2012 did not include estimates of the role of qualitative differences between HZE particles and low-LET radiation. In this report we discuss evidence for non-targeted effects increasing cancer risks at space-relevant HZE particle absorbed doses in tissue (<0.2 Gy), and for increased tumor lethality due to the propensity for higher rates of metastatic tumors from high-LET radiation suggested by animal experiments. The NSCR-2014 model considers how these qualitative differences modify the overall probability distribution functions (PDF) for cancer mortality risk estimates from space radiation. Predictions of NSCR-2014 for International Space Station missions and Mars exploration will be described and compared to those of our earlier NSCR-2012 model.
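How uncertain multiplicative factors fold into a risk probability distribution function can be illustrated with a toy Monte Carlo sketch. The factor list, distributions, and parameter values below are stand-ins, not those of NSCR-2012/2014.

```python
import numpy as np

# Toy Monte Carlo sketch: an uncertain quality factor and an uncertain
# dose-rate reduction factor combine into a risk PDF. All numbers here
# are invented for illustration.

rng = np.random.default_rng(3)
n = 100000

dose = 0.1                                     # Gy, space-relevant absorbed dose
low_let_risk_per_gy = 0.05                     # baseline risk coefficient (made up)
quality = rng.lognormal(np.log(10.0), 0.4, n)  # uncertain quality factor
ddref = rng.lognormal(np.log(2.0), 0.3, n)     # uncertain DDREF

risk = dose * low_let_risk_per_gy * quality / ddref
median = float(np.median(risk))
upper95 = float(np.quantile(risk, 0.95))
```

Reporting the median together with an upper percentile, rather than a single point value, is the essential practice the sketch mimics: the uncertainty in the inputs is propagated, not discarded.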
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
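A minimal simulation of spatial capture-recapture data conveys the setting. The half-normal detection function is a standard SCR choice, but the parameter values, trap grid, and population below are illustrative assumptions rather than the study's simulation design.

```python
import numpy as np

# Toy spatial capture-recapture simulation. Detection follows a
# half-normal function of the distance d between an activity center
# and a trap:  p(d) = p0 * exp(-d^2 / (2 * sigma^2))

rng = np.random.default_rng(0)

def simulate_captures(centers, traps, p0, sigma, n_occasions):
    """Bernoulli capture histories: (n_animals, n_traps, n_occasions)."""
    d = np.linalg.norm(centers[:, None, :] - traps[None, :, :], axis=2)
    p = p0 * np.exp(-d**2 / (2 * sigma**2))
    return rng.random((len(centers), len(traps), n_occasions)) < p[:, :, None]

# Regular trap grid spaced at 2*sigma, matching the guideline that traps
# be spaced no more than twice the spatial scale parameter.
sigma = 1.0
xs = np.arange(0, 10, 2 * sigma)
traps = np.array([(x, y) for x in xs for y in xs], dtype=float)
centers = rng.uniform(0, 10, size=(50, 2))     # 50 simulated activity centers
hist = simulate_captures(centers, traps, p0=0.3, sigma=sigma, n_occasions=5)
n_detected = int(hist.any(axis=(1, 2)).sum())
```

Varying the grid spacing relative to sigma in such a simulation is the simplest way to see the configuration effects the study quantifies: spacing much wider than 2*sigma leaves many individuals effectively undetectable.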
Approximate symmetries in atomic nuclei from a large-scale shell-model perspective
NASA Astrophysics Data System (ADS)
Launey, K. D.; Draayer, J. P.; Dytrych, T.; Sun, G.-H.; Dong, S.-H.
2015-05-01
In this paper, we review recent developments that aim to achieve further understanding of the structure of atomic nuclei, by capitalizing on exact symmetries as well as approximate symmetries found to dominate low-lying nuclear states. The findings confirm the essential role played by the Sp(3, ℝ) symplectic symmetry in informing the interaction and the relevant model spaces in nuclear modeling. The significance of the Sp(3, ℝ) symmetry for a description of a quantum system of strongly interacting particles naturally emerges from the physical relevance of its generators, which directly relate to particle momentum and position coordinates, and represent important observables, such as the many-particle kinetic energy, the monopole operator, the quadrupole moment, and the angular momentum. We show that it is imperative that shell-model spaces be expanded well beyond the current limits to accommodate particle excitations that appear critical to enhanced collectivity in heavier systems and to highly-deformed spatial structures, exemplified by the second 0+ state in 12C (the challenging Hoyle state) and 8Be. While such states are presently inaccessible by large-scale no-core shell models, symmetry-based considerations are found to be essential.
Bayesian state space models for dynamic genetic network construction across multiple tissues.
Liang, Yulan; Kelemen, Arpad
2016-08-01
Construction of gene-gene interaction networks and potential pathways is a challenging and important problem in genomic research for complex diseases, and estimating the dynamic changes of the temporal correlations and non-stationarity is the key in this process. In this paper, we develop dynamic state space models with hierarchical Bayesian settings to tackle this challenge, inferring the dynamic profiles and genetic networks associated with disease treatments. We treat both the stochastic transition matrix and the observation matrix as time-variant, and include temporal correlation structures in the covariance matrix estimations in the multivariate Bayesian state space models. The unevenly spaced short time courses with unseen time points are treated as hidden state variables. Hierarchical Bayesian approaches with various prior and hyper-prior models, using Markov chain Monte Carlo and Gibbs sampling algorithms, are used to estimate the model parameters and the hidden state variables. We apply the proposed hierarchical Bayesian state space models to Affymetrix time course data sets from multiple tissues (liver, skeletal muscle, and kidney) following corticosteroid (CS) drug administration. Both simulation and real data analysis results show that the genomic changes over time and the gene-gene interactions in response to CS treatment can be well captured by the proposed models. The proposed dynamic hierarchical Bayesian state space modeling approaches could be expanded and applied to other large-scale genomic data, such as next-generation sequencing (NGS) data combined with real-time, time-varying electronic health records (EHR), for more comprehensive and robust systematic and network-based analysis, in order to transform big biomedical data into predictions and diagnostics for precision medicine and personalized healthcare with better decision making and patient outcomes.
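The linear-Gaussian state space backbone of such models can be sketched with a basic forward (Kalman) filter. This omits the paper's hierarchical priors, time-variant covariance structures, and Gibbs sampler, and all dimensions and noise levels below are illustrative.

```python
import numpy as np

# Minimal linear-Gaussian state-space filter: state x_t evolves with a
# (possibly time-varying) transition matrix F_t, and the observation is
# y_t = H_t x_t + noise. Per-step matrices are passed as lists.

def kalman_filter(y, F, H, Q, R, x0, P0):
    """y: (T, m) observations; F, H: lists of per-step matrices.
    Returns filtered state means, shape (T, n)."""
    x, P = x0, P0
    means = []
    for t in range(len(y)):
        # Predict
        x = F[t] @ x
        P = F[t] @ P @ F[t].T + Q
        # Update
        S = H[t] @ P @ H[t].T + R
        K = P @ H[t].T @ np.linalg.inv(S)
        x = x + K @ (y[t] - H[t] @ x)
        P = (np.eye(len(x)) - K @ H[t]) @ P
        means.append(x.copy())
    return np.array(means)

# Illustrative use: a scalar random-walk state observed with noise.
rng = np.random.default_rng(1)
n = 200
truth = np.cumsum(rng.normal(0.0, 0.1, n))
obs = (truth + rng.normal(0.0, 0.5, n)).reshape(-1, 1)
F_seq = [np.eye(1)] * n
H_seq = [np.eye(1)] * n
est = kalman_filter(obs, F_seq, H_seq, Q=0.01 * np.eye(1),
                    R=0.25 * np.eye(1), x0=np.zeros(1), P0=np.eye(1))
```

The filtered trajectory tracks the hidden state more closely than the raw observations do; the Bayesian treatment in the paper additionally places priors on the matrices themselves and samples them rather than fixing them.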
Geant4 hadronic physics for space radiation environment.
Ivantchenko, Anton V; Ivanchenko, Vladimir N; Molina, Jose-Manuel Quesada; Incerti, Sebastien L
2012-01-01
The aim of this work was to test and develop Geant4 (Geometry And Tracking version 4) Monte Carlo hadronic models, with a focus on applications in a space radiation environment. The Monte Carlo simulations were performed using the Geant4 toolkit. Binary (BIC), its extension for incident light ions (BIC-ion), and Bertini (BERT) cascades were used as the main Monte Carlo generators. For comparison purposes, some other models were tested as well. The hadronic testing suite has been used as a primary tool for model development and validation against experimental data. The Geant4 pre-compound (PRECO) and de-excitation (DEE) models were revised and improved. Proton, neutron, pion, and ion nuclear interactions were simulated with the recent version Geant4 9.4 and were compared with experimental data from thin- and thick-target experiments. The Geant4 toolkit offers a large set of models allowing effective simulation of interactions of particles with matter. We have tested different Monte Carlo generators with our hadronic testing suite, and accordingly we can propose an optimal configuration of Geant4 models for the simulation of the space radiation environment.
Improved treatment of optics in the Lindquist-Wheeler models
NASA Astrophysics Data System (ADS)
Clifton, Timothy; Ferreira, Pedro G.; O'Donnell, Kane
2012-01-01
We consider the optical properties of Lindquist-Wheeler (LW) models of the Universe. These models consist of lattices constructed from regularly arranged discrete masses. They are akin to the Wigner-Seitz construction of solid state physics, and result in a dynamical description of the large-scale Universe in which the global expansion is given by a Friedmann-like equation. We show that if these models are constructed in a particular way then the redshifts of distant objects, as well as the dynamics of the global space-time, can be made to be in good agreement with the homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker (FLRW) solutions of Einstein’s equations, at the level of ≲3% out to z≃2. Angular diameter and luminosity distances, on the other hand, differ from those found in the corresponding FLRW models, while being consistent with the “empty beam” approximation, together with the shearing effects due to the nearest masses. This can be compared with the large deviations found from the corresponding FLRW values obtained in a previous study that considered LW models constructed in a different way. We therefore advocate the improved LW models we consider here as useful constructions that appear to faithfully reproduce both the dynamical and observational properties of space-times containing discrete masses.
Definition of technology development missions for early space stations: Large space structures
NASA Technical Reports Server (NTRS)
1983-01-01
The testbed role of an early (1990-95) manned space station in large space structures technology development is defined and conceptual designs for large space structures development missions to be conducted at the space station are developed. Emphasis is placed on defining requirements and benefits of development testing on a space station in concert with ground and shuttle tests.
Large-scale shell-model calculations for 32-39P isotopes
NASA Astrophysics Data System (ADS)
Srivastava, P. C.; Hirsch, J. G.; Ermamatov, M. J.; Kota, V. K. B.
2012-10-01
In this work, the structure of 32-39P isotopes is described in the framework of state-of-the-art large-scale shell-model calculations, employing the code ANTOINE with three modern effective interactions: SDPF-U, SDPF-NR, and the extended pairing plus quadrupole-quadrupole-type forces with inclusion of the monopole interaction (EPQQM). Protons are restricted to fill the sd shell, while neutrons are active in the sd-pf valence space. Results for positive- and negative-parity level energies and electromagnetic observables are compared with the available experimental data.
Atomic displacements in the charge ice pyrochlore Bi2Ti2O6O' studied by neutron total scattering
NASA Astrophysics Data System (ADS)
Shoemaker, Daniel P.; Seshadri, Ram; Hector, Andrew L.; Llobet, Anna; Proffen, Thomas; Fennie, Craig J.
2010-04-01
The oxide pyrochlore Bi2Ti2O6O' is known to be associated with large displacements of Bi and O' atoms from their ideal crystallographic positions. Neutron total scattering, analyzed in both reciprocal and real space, is employed here to understand the nature of these displacements. Rietveld analysis and maximum entropy methods are used to produce an average picture of the structural nonideality. Local structure is modeled via large-box reverse Monte Carlo simulations constrained simultaneously by the Bragg profile and real-space pair distribution function. Direct visualization and statistical analyses of these models show the precise nature of the static Bi and O' displacements. Correlations between neighboring Bi displacements are analyzed using coordinates from the large-box simulations. The framework of continuous symmetry measures has been applied to distributions of O'Bi4 tetrahedra to examine deviations from ideality. Bi displacements from ideal positions appear correlated over local length scales. The results are consistent with the idea that these nonmagnetic lone-pair containing pyrochlore compounds can be regarded as highly structurally frustrated systems.
Reinforced dynamics for enhanced sampling in large atomic and molecular systems
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Wang, Han; E, Weinan
2018-03-01
A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.
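The shared idea of escaping metastable states by accumulating an adaptive bias can be shown with a one-dimensional, metadynamics-style sketch. The paper's method instead trains a neural-network bias on the fly; the double-well potential, deposit schedule, and parameters here are illustrative assumptions.

```python
import numpy as np

# 1D illustration of adaptive biasing via periodic Gaussian deposits
# (metadynamics-style). This only demonstrates the shared idea of
# filling metastable basins with a growing bias potential; it is NOT
# the neural-network scheme of the paper.

rng = np.random.default_rng(2)

def force(x, centers, h=1.0, w=0.3):
    """Force from U(x) = x^4 - 2x^2 plus deposited Gaussian bias bumps."""
    dU = 4 * x**3 - 4 * x
    if centers:
        c = np.array(centers)
        dU += np.sum(-h * (x - c) / w**2 * np.exp(-(x - c)**2 / (2 * w**2)))
    return -dU

centers = []                # Gaussian deposit centers: the adaptive bias
x, dt, kT = -1.0, 1e-3, 0.2
visited_right = False
for step in range(60000):   # overdamped Langevin dynamics
    x += force(x, centers) * dt + np.sqrt(2 * kT * dt) * rng.normal()
    if step % 500 == 0:
        centers.append(x)   # grow the bias where the system lingers
    if x > 0.8:
        visited_right = True
```

Without the deposits, the barrier (5 kT here) keeps the walker trapped in the left well for long stretches; with them, the left basin is filled and the right well is reached. Replacing the Gaussian table with a trained network, as the paper does, is what makes the idea scale to many collective variables.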
NASA Technical Reports Server (NTRS)
Curreri, Peter A.; Nall, Mark
2013-01-01
The cost of energy is humanity's economic exchange rate with the universe. Space solar power is the first great step our technological species can take to utilize the energy of its star. The classic Peter Glaser Solar Power Satellite (SPS) and later designs collect solar energy over a large area in space and beam it back to Earth for use in the electric grid, but even with optimistic launch costs and technology innovation, a clear economic path is not evident using Earth launch of SPS. O'Neill in 1969 solved the transportation cost problem with a model that uses lunar and asteroid materials to build SPS and locates the labor force permanently in space (O'Neill free-space habitats). This solution closes the economics and predicts large profits after 17-35 years. However, the costs of time have up to now prevented this solution. We discuss a strategy to move forward with SPS, motivated by the need to stop global warming and prevent human self-extinction. There are near-term steps that can be taken to place us on this path forward. First, we must reevaluate the technologies for the classic model and update the parameters to current technology. As technological capability continues to increase exponentially, we need to recognize when these monetary potential-energy hills become small relative to the growing technology base. But the chance of self-extinction, if humanity remains in a single vulnerable habitat, also increases exponentially with time. The path forward is to identify investment points while assessing the risks of non-action.
Estimating Ω from Galaxy Redshifts: Linear Flow Distortions and Nonlinear Clustering
NASA Astrophysics Data System (ADS)
Bromley, B. C.; Warren, M. S.; Zurek, W. H.
1997-02-01
We propose a method to determine the cosmic mass density Ω from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion σ_v. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter β = Ω^0.6/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of β to within ~10%. An analysis of IRAS 1.2 Jy galaxies yields β = 0.8 (+0.4, -0.3) at a scale of 1000 km s^-1, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of β derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the IRAS data.
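Given the definition β = Ω^0.6/b, the measured β maps directly to a mass density estimate; the bias value b = 1 assumed below is for illustration only.

```python
# The linear growth parameter beta = Omega**0.6 / b links the measured
# redshift-space distortion to the cosmic mass density. Inverting:

def omega_from_beta(beta, b=1.0):
    """Omega = (beta * b) ** (1 / 0.6)."""
    return (beta * b) ** (1.0 / 0.6)

# Central IRAS value from the abstract, beta = 0.8 (+0.4, -0.3), with
# an assumed bias b = 1:
omega_central = omega_from_beta(0.8)
omega_low = omega_from_beta(0.5)    # lower end of the error interval
omega_high = omega_from_beta(1.2)   # upper end of the error interval
```

Because of the 1/0.6 exponent, the already-broad error interval on β widens further in Ω, which is why such measurements constrain β more tightly than Ω itself.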
Collaboration Between NASA Centers of Excellence on Autonomous System Software Development
NASA Technical Reports Server (NTRS)
Goodrich, Charles H.; Larson, William E.; Delgado, H. (Technical Monitor)
2001-01-01
Software for space systems flight operations has its roots in the early days of the space program, when computer systems were incapable of supporting highly complex and flexible control logic. Control systems relied on fast data acquisition and supervisory control from a roomful of systems engineers on the ground. Even though computer hardware and software have become many orders of magnitude more capable, space systems have largely adhered to this original paradigm. In an effort to break this mold, Kennedy Space Center (KSC) has invested in the development of model-based diagnosis and control applications for ten years, gaining broad experience in both ground and spacecraft systems and software. KSC has now partnered with Ames Research Center (ARC), NASA's Center of Excellence in Information Technology, to create a new paradigm for the control of dynamic space systems. ARC has developed model-based diagnosis and intelligent planning software that enables spacecraft to handle most routine problems automatically and allocate resources in a flexible way to realize mission objectives. ARC demonstrated the utility of onboard diagnosis and planning with an experiment aboard Deep Space 1 in 1999. This paper highlights the software control system collaboration between KSC and ARC. KSC has developed a Mars in-situ resource utilization testbed based on the Reverse Water Gas Shift (RWGS) reaction. This plant, built in KSC's Applied Chemistry Laboratory, is capable of producing the large amounts of oxygen that would be needed to support a human Mars mission. KSC and ARC are cooperating to develop an autonomous, fault-tolerant control system for RWGS to meet the need for autonomy on deep space missions. The paper also describes how the new system software paradigm will be applied to vehicle health monitoring, tested on the new X vehicles, and integrated into future launch processing systems.
Rectenna thermal model development
NASA Technical Reports Server (NTRS)
Kadiramangalam, Murall; Alden, Adrian; Speyer, Daniel
1992-01-01
Deploying rectennas in space requires adapting existing designs developed for terrestrial applications to the space environment. One of the major issues in doing so is to understand the thermal performance of existing designs in the space environment. Toward that end, a 3D rectenna thermal model has been developed, which involves analyzing shorted rectenna elements and finite-size rectenna element arrays. A shorted rectenna element is a single element whose ends are connected together by a material of negligible thermal resistance; a shorted element is a good approximation to a central element of a large array. This model has been applied to Brown's 2.45 GHz rectenna design. Results indicate that Brown's rectenna requires redesign, or some means of enhancing heat dissipation, in order for the diode temperature to remain below 200 C at output power densities above 620 W/sq.m. The model developed in this paper is very general and can be used for the analysis and design of any rectenna design at any frequency.
NASA Technical Reports Server (NTRS)
Xapsos, M. A.; Barth, J. L.; Stassinopoulos, E. G.; Burke, E. A.; Gee, G. B.
1999-01-01
The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary and polar orbits and for interplanetary missions. Designers of spacecraft and mission planners are required to assess the performance of microelectronic systems under a variety of conditions. A number of useful approaches exist for predicting information about solar proton event fluences and, to a lesser extent, peak fluxes. This includes the cumulative fluence over the course of a mission, the fluence of a worst-case event during a mission, the frequency distribution of event fluences, and the frequency distribution of large peak fluxes. Naval Research Laboratory (NRL) and NASA Goddard Space Flight Center, under the sponsorship of NASA's Space Environments and Effects (SEE) Program, have developed a new model for predicting cumulative solar proton fluences and worst-case solar proton events as functions of mission duration and user confidence level. This model is called the Emission of Solar Protons (ESP) model.
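As a toy illustration of confidence-level worst-case statistics (not the ESP model itself, which is built on fits to decades of solar proton data), one can ask for the fluence exceeded by the largest of n events with probability 1 - CL, under an assumed lognormal event-fluence distribution. All parameter values below are invented.

```python
from statistics import NormalDist
import math

# Toy worst-case statistic: if individual event fluences are lognormal
# and a mission is expected to see n_events events, the worst-case
# fluence F at confidence level cl solves
#     P(single event fluence < F) ** n_events = cl.

def worst_case_fluence(mu, sigma, n_events, cl):
    """mu, sigma: lognormal parameters of ln(fluence); returns the
    fluence the mission maximum stays below with probability cl."""
    q = cl ** (1.0 / n_events)          # required per-event quantile
    z = NormalDist().inv_cdf(q)         # standard-normal quantile
    return math.exp(mu + sigma * z)

# Invented example: median event fluence 1e9 protons/cm^2, 10 expected
# events, 90% confidence.
f90 = worst_case_fluence(mu=math.log(1e9), sigma=1.5, n_events=10, cl=0.90)
```

The worst-case fluence grows with both mission duration (more expected events) and the requested confidence level, which is the qualitative behavior a designer queries a model like ESP for.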
Unified control/structure design and modeling research
NASA Technical Reports Server (NTRS)
Mingori, D. L.; Gibson, J. S.; Blelloch, P. A.; Adamian, A.
1986-01-01
To demonstrate the applicability of the control theory for distributed systems to large flexible space structures, research was focused on a model of a space antenna which consists of a rigid hub, flexible ribs, and a mesh reflecting surface. The space antenna model used is discussed along with the finite element approximation of the distributed model. The basic control problem is to design an optimal or near-optimal compensator to suppress the linear vibrations and rigid-body displacements of the structure. The application of infinite-dimensional Linear Quadratic Gaussian (LQG) control theory to flexible structures is discussed. Two basic approaches for robustness enhancement were investigated: loop transfer recovery and sensitivity optimization. A third approach, synthesized from elements of these two basic approaches, is currently under development. The control-driven finite element approximation of flexible structures is discussed. Three sets of finite element basis vectors for computing functional control gains are compared. The possibility of constructing a finite element scheme to approximate the infinite-dimensional Hamiltonian system directly, instead of indirectly, is discussed.
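A finite-dimensional LQR design on a single flexible mode gives a flavor of the compensator problem. The two-state oscillator, weights, and sample time below are illustrative stand-ins for the hub-rib-mesh antenna model; the infinite-dimensional LQG machinery of the paper is not reproduced.

```python
import numpy as np

# LQR sketch for one lightly damped flexible mode: state feedback to
# suppress vibration. All numerical values are illustrative.

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via Riccati value iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.05
omega = 2.0                                        # modal frequency [rad/s]
A = np.array([[1.0, dt], [-omega**2 * dt, 1.0]])   # undamped mode, Euler map
B = np.array([[0.0], [dt]])                        # force acts on velocity
K = dlqr(A, B, Q=np.eye(2), R=np.array([[0.1]]))

x = np.array([1.0, 0.0])                           # initial displacement
for _ in range(400):
    x = (A - B @ K) @ x                            # closed-loop response
residual = float(np.linalg.norm(x))
```

The open-loop Euler map here is actually slightly unstable, so the decay of the closed-loop state is entirely due to the computed feedback gain; the paper's functional-gain comparison addresses how such gains converge as the finite element model is refined.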
Modeling microbial community structure and functional diversity across time and space.
Larsen, Peter E; Gibbons, Sean M; Gilbert, Jack A
2012-07-01
Microbial communities exhibit exquisitely complex structure. Many aspects of this complexity, from the number of species to the total number of interactions, are currently very difficult to examine directly. However, extraordinary efforts are being made to make these systems accessible to scientific investigation. While recent advances in high-throughput sequencing technologies have improved accessibility to the taxonomic and functional diversity of complex communities, monitoring the dynamics of these systems over time and space - using appropriate experimental design - is still expensive. Fortunately, modeling can be used as a lens to focus low-resolution observations of community dynamics to enable mathematical abstractions of functional and taxonomic dynamics across space and time. Here, we review the approaches for modeling bacterial diversity at both the very large and the very small scales at which microbial systems interact with their environments. We show that modeling can help to connect biogeochemical processes to specific microbial metabolic pathways. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Couderc, F.; Duran, A.; Vila, J.-P.
2017-08-01
We present an explicit scheme for a two-dimensional multilayer shallow water model with density stratification, for general meshes and collocated variables. The proposed strategy is based on a regularized model where the transport velocity in the advective fluxes is shifted proportionally to the pressure potential gradient. Using a similar strategy for the potential forces, we show the stability of the method in the sense of a discrete dissipation of the mechanical energy, in general multilayer and non-linear frames. These results are obtained at first order in space and time and extended using a second-order MUSCL extension in space and Heun's method in time. With the objective of minimizing the diffusive losses in realistic contexts, sufficient conditions are exhibited on the regularizing terms to ensure the scheme's linear stability at first and second order in time and space. The other main result concerns consistency with the asymptotics reached at small and large time scales in low Froude regimes, which govern large-scale oceanic circulation. Additionally, robustness and well-balanced results for motionless steady states are also ensured. These stability properties tend to provide a very robust and efficient approach, easy to implement and particularly well suited for large-scale simulations. Some numerical experiments are proposed to highlight the scheme's efficiency: an experiment of fast gravitational modes, a smooth surface wave propagation, an initial propagating surface water elevation jump considering a non-trivial topography, and a final experiment of slow Rossby modes simulating the displacement of a baroclinic vortex subject to the Coriolis force.
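A much simpler relative of such schemes, a first-order finite-volume method for the one-dimensional single-layer shallow water equations with a Rusanov flux, shows the basic discretization. The regularized transport velocity, multilayer coupling, and MUSCL/Heun extensions of the paper are not included, and the dam-break setup below is illustrative.

```python
import numpy as np

# First-order finite-volume scheme for the 1D shallow water equations
#   h_t + (hu)_x = 0,  (hu)_t + (hu^2 + g h^2/2)_x = 0
# with a Rusanov (local Lax-Friedrichs) interface flux.

g = 9.81

def flux(h, hu):
    u = hu / np.maximum(h, 1e-12)
    return np.array([hu, hu * u + 0.5 * g * h**2])

def step(h, hu, dx, dt):
    U = np.array([h, hu])
    F = flux(h, hu)
    c = np.abs(hu / np.maximum(h, 1e-12)) + np.sqrt(g * h)  # wave speeds
    a = np.maximum(c[:-1], c[1:])                 # interface dissipation
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    Un = U.copy()
    Un[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])  # conservative update
    return Un[0], Un[1]

n, dx, dt = 200, 0.05, 0.005
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)     # dam-break initial state
hu = np.zeros(n)
mass0 = float(h.sum()) * dx
for _ in range(150):
    h, hu = step(h, hu, dx, dt)
```

Before the wave reaches the domain edges, the conservative update preserves total water mass exactly; the scheme in the paper pursues the same discrete conservation and stability properties in the far harder multilayer, low-Froude setting.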
Architectural Large Constructed Environment. Modeling and Interaction Using Dynamic Simulations
NASA Astrophysics Data System (ADS)
Fiamma, P.
2011-09-01
How can simulations derived from a large data model be used in architectural design? The topic concerns the phase that usually follows data acquisition, during construction of the model and especially afterwards, when designers must interact with the simulation in order to develop and verify their ideas. In this case study, the concept of interaction includes real-time "flows". The work contributes content and results to the broad debate about the current connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which specialist actors, clients, and end users can share knowledge, targets, and constraints to better achieve the intended result. The goal was to use a dynamic micro-simulation digital resource that allows all actors to explore the model in a powerful and realistic way and to interact in a new manner with a complex architectural scenario. On the one hand, the work represents a base of knowledge that can be progressively extended; on the other hand, it represents an attempt to understand large constructed-architecture simulation as a way of life, a way of being in time and space. The architectural design before, and the architectural fact after, both happen in a sort of "Spatial Analysis System". The way is open to offer this "system" knowledge and theories that can support architectural design work at every application and scale. Architecture is a spatial configuration, one that can also be reconfigured through design.
Activities of the Center for Space Construction
NASA Technical Reports Server (NTRS)
1993-01-01
The Center for Space Construction (CSC) at the University of Colorado at Boulder is one of eight University Space Engineering Research Centers established by NASA in 1988. The mission of the center is to conduct research into space technology and to directly contribute to space engineering education. The center reports to the Department of Aerospace Engineering Sciences and resides in the College of Engineering and Applied Science. The college has a long and successful track record of cultivating multi-disciplinary research and education programs. The Center for Space Construction is prominent evidence of this record. At the inception of CSC, the center was primarily founded on the need for research on in-space construction of large space systems like space stations and interplanetary space vehicles. The scope of CSC's research has now evolved to include the design and construction of all spacecraft, large and small. Within this broadened scope, our research projects seek to impact the underlying technological basis for such spacecraft as remote sensing satellites, communication satellites, and other special purpose spacecraft, as well as the technological basis for large space platforms. The center's research focuses on three areas: spacecraft structures, spacecraft operations and control, and regolith and surface systems. In the area of spacecraft structures, our current emphasis is on concepts and modeling of deployable structures, analysis of inflatable structures, structural damage detection algorithms, and composite materials for lightweight structures. In the area of spacecraft operations and control, we are continuing our previous efforts in process control of in-orbit structural assembly. In addition, we have begun two new efforts in formal approach to spacecraft flight software systems design and adaptive attitude control systems. 
In the area of regolith and surface systems, we are continuing the work of characterizing the physical properties of lunar regolith, and we are working on path planning for planetary surface rovers.
Seasonal predictions for wildland fire severity
Shyh-Chin Chen; Haiganoush Preisler; Francis Fujioka; John W. Benoit; John O. Roads
2009-01-01
The National Fire Danger Rating System (NFDRS) indices deduced from the monthly to seasonal predictions of a meteorological climate model at 50-km grid spacing from January 1998 through December 2003 were used in conjunction with a probability model to predict the expected number of fire occurrences and large fires over the U.S. West. The short-term climate forecasts are...
ERIC Educational Resources Information Center
Quinlan, Andrea; Fogel, Curtis A.
2014-01-01
In 1970, education theorist Paulo Freire (1970) sharply critiqued dominant pedagogy--or what he called the banking model of education--for stripping students of their agency. In the banking model, he wrote, instructors are empowered as narrating subjects, while students become alienated as passive listening objects. In the decades since, research…
NASA Technical Reports Server (NTRS)
Morgenthaler, George W.
1989-01-01
The ability to launch on time and to send payloads into space has progressed dramatically since the days of the earliest missile and space programs. Causes for delay during launch, i.e., unplanned 'holds', are attributable to several sources: weather, range activities, vehicle conditions, human performance, etc. Recent developments in space programs, particularly the need for highly reliable logistic support of space construction and the subsequent planned operation of space stations, large unmanned space structures, and lunar and Mars bases, together with the necessity of providing 'guaranteed' commercial launches, have placed increased emphasis on understanding and mastering every aspect of launch vehicle operations. The Center for Space Construction has acquired historical launch vehicle data and is applying these data to the analysis of space launch vehicle logistic support of space construction. This analysis includes developing a better understanding of launch-on-time capability and simulating the support systems for vehicle assembly and launch that are necessary to meet national space program construction schedules. In this paper, the author presents actual launch data on unscheduled 'hold' distributions of various launch vehicles. The data have been supplied by industrial associate companies of the Center for Space Construction. The paper seeks to determine suitable probability models that describe these historical data and that can serve several purposes, such as providing inputs to broader simulations of launch vehicle logistic support processes for space construction and determining which launch operations sources cause the majority of the unscheduled 'holds', and hence suggesting changes that might improve launch-on-time performance. In particular, the paper investigates the ability of a compound distribution probability model to fit the actual data, versus alternative models, and recommends the most productive avenues for future statistical work.
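The compound-distribution idea above can be sketched numerically. In the minimal sketch below, the number of unscheduled holds per countdown is Poisson and each hold's duration is lognormal, so the total delay is a compound Poisson sum; all parameter values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative compound-distribution model of launch delay (not the paper's
# actual fitted model): hold count per countdown is Poisson, each hold's
# duration is lognormal, and the total delay is the sum over holds.
LAM = 1.8              # assumed mean number of holds per countdown
MU, SIGMA = 3.0, 0.8   # assumed lognormal parameters for hold duration (minutes)

def simulate_total_delay(n_countdowns: int) -> np.ndarray:
    """Monte Carlo sample of total unscheduled-hold delay per countdown."""
    n_holds = rng.poisson(LAM, size=n_countdowns)
    delays = np.zeros(n_countdowns)
    for i, k in enumerate(n_holds):
        if k > 0:
            delays[i] = rng.lognormal(MU, SIGMA, size=k).sum()
    return delays

delays = simulate_total_delay(100_000)
mean_hold = np.exp(MU + SIGMA**2 / 2)  # analytic mean of the lognormal
print(f"simulated mean delay: {delays.mean():.1f} min")
print(f"theoretical mean (LAM * E[duration]): {LAM * mean_hold:.1f} min")
print(f"fraction of countdowns with no holds: {(delays == 0).mean():.3f}")
```

The Poisson count with zero holds gives the launch-on-time fraction directly (about exp(-LAM) here), which is the kind of quantity such a model makes easy to read off.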
NASA Technical Reports Server (NTRS)
Frisbee, Robert H.
1996-01-01
This presentation describes a number of advanced space propulsion technologies with the potential for meeting the need for dramatic reductions in the cost of access to space, and the need for new propulsion capabilities to enable bold new space exploration (and, ultimately, space exploitation) missions of the 21st century. For example, current Earth-to-orbit (e.g., low Earth orbit, LEO) launch costs are extremely high (ca. $10,000/kg); a factor-of-25 reduction (to ca. $400/kg) will be needed to produce the dramatic increases in space activities in both the civilian and government sectors identified in the Commercial Space Transportation Study (CSTS). Similarly, in the area of space exploration, all of the relatively 'easy' missions (e.g., robotic flybys, inner solar system orbiters and landers, and piloted short-duration lunar missions) have been done. Ambitious missions of the next century (e.g., robotic outer-planet orbiters/probes, landers, rovers, sample returns; and piloted long-duration lunar and Mars missions) will require major improvements in propulsion capability. In some cases, advanced propulsion can enable a mission by making it faster or more affordable, and in some cases, by directly enabling the mission (e.g., interstellar missions). As a general rule, advanced propulsion systems are attractive because of their low operating costs (e.g., higher specific impulse, Isp) and typically show the most benefit for relatively 'big' missions (i.e., missions with large payloads or ΔV, or a large overall mission model). In part, this is due to the intrinsic size of the advanced systems as compared to state-of-the-art (SOTA) chemical propulsion systems. Also, advanced systems often have a large 'infrastructure' cost, either in the form of initial R&D costs or in facilities hardware costs (e.g., laser or microwave transmission ground stations for beamed energy propulsion).
These costs must then be amortized over a large mission to be cost-competitive with a SOTA system with a low initial development and infrastructure cost and a high operating cost. Note, however, that this has resulted in a 'Catch-22' standoff between the need for a large initial investment that is amortized over many launches to reduce costs, and the limited number of launches possible at today's launch costs. Some examples of missions enabled (either in cost or capability) by advanced propulsion include long-life station-keeping or micro-spacecraft applications using electric propulsion or BMDO-derived micro-thrusters, low-cost orbit raising (LEO to GEO or lunar orbit) using electric propulsion, robotic planetary missions using aerobraking or electric propulsion, piloted Mars missions using aerobraking and/or propellant production from Martian resources, very fast (100-day round-trip) piloted Mars missions using fission or fusion propulsion, and, finally, interstellar missions using fusion, antimatter, or beamed energy. The NASA Advanced Propulsion Technology program at the Jet Propulsion Laboratory (JPL) is aimed at assessing the feasibility of a range of near-term to far-term advanced propulsion technologies that have the potential to reduce costs and/or enable future space activities. The program includes cooperative modeling and research activities between JPL and various universities and industry, and directly supported independent research at universities and industry. The cooperative program consists of mission studies, research and development of ion engine technology using C60 (Buckminsterfullerene) propellant, and research and development of lithium-propellant Lorentz-force accelerator (LFA) engine technology. The university/industry-supported research includes modeling and proof-of-concept experiments in advanced, high-Isp, long-life electric propulsion, and in fusion propulsion.
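The amortization trade described above can be made concrete with a toy calculation: the advanced system's up-front cost is spread over the number of launches and added to its per-launch operating cost. All dollar figures and launch counts below are invented for illustration, not taken from the study.

```python
# Toy amortization comparison: an "advanced" system with high up-front cost
# but low per-launch operating cost, versus a "SOTA" system with low
# up-front cost but high per-launch cost. All numbers are illustrative.
def cost_per_launch(upfront: float, per_launch: float, n_launches: int) -> float:
    """Amortized cost per launch = up-front cost spread over N, plus operating cost."""
    return upfront / n_launches + per_launch

SOTA = dict(upfront=0.5e9, per_launch=100e6)   # assumed figures
ADV  = dict(upfront=5.0e9, per_launch=10e6)    # assumed figures

for n in (10, 50, 100):
    c_sota = cost_per_launch(SOTA["upfront"], SOTA["per_launch"], n)
    c_adv = cost_per_launch(ADV["upfront"], ADV["per_launch"], n)
    print(f"n={n:3d}: SOTA ${c_sota/1e6:.0f}M/launch, advanced ${c_adv/1e6:.0f}M/launch")
```

With these assumed numbers the two systems break even at 50 launches, which illustrates the 'Catch-22': the advanced system only wins if the launch rate is high, but today's costs keep the launch rate low.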
Coupled Boltzmann computation of mixed axion neutralino dark matter in the SUSY DFSZ axion model
NASA Astrophysics Data System (ADS)
Bae, Kyu Jung; Baer, Howard; Lessa, Andre; Serce, Hasan
2014-10-01
The supersymmetrized DFSZ axion model is highly motivated not only because it offers solutions to both the gauge hierarchy and strong CP problems, but also because it provides a solution to the SUSY μ-problem which naturally allows for a Little Hierarchy. We compute the expected mixed axion-neutralino dark matter abundance for the SUSY DFSZ axion model in two benchmark cases: a natural SUSY model with a standard neutralino underabundance (SUA) and an mSUGRA/CMSSM model with a standard overabundance (SOA). Our computation implements coupled Boltzmann equations which track the radiation density along with neutralino, axion, axion CO (produced via coherent oscillations), saxion, saxion CO, axino and gravitino densities. In the SUSY DFSZ model, axions, axinos and saxions go through the process of freeze-in (in contrast to freeze-out or out-of-equilibrium production as in the SUSY KSVZ model), resulting in thermal yields which are largely independent of the re-heat temperature. We find the SUA case with suppressed saxion-axion couplings (ξ = 0) only admits solutions for PQ breaking scale f_a ≲ 6×10^12 GeV, where the bulk of parameter space tends to be axion-dominated. For SUA with allowed saxion-axion couplings (ξ = 1), f_a values up to ~10^14 GeV are allowed. For the SOA case, almost all of SUSY DFSZ parameter space is disallowed by a combination of overproduction of dark matter, overproduction of dark radiation or violation of BBN constraints. An exception occurs at very large f_a ~ 10^15-10^16 GeV, where large entropy dilution from CO-produced saxions leads to allowed models.
NASA Technical Reports Server (NTRS)
Yam, Y.; Lang, J. H.; Johnson, T. L.; Shih, S.; Staelin, D. H.
1983-01-01
A model reduction procedure based on aggregation with respect to sensor and actuator influences rather than modes is presented for large systems of coupled second-order differential equations. Perturbation expressions which can predict the effects of spillover on both the aggregated and residual states are derived. These expressions lead to the development of control system design constraints which are sufficient to guarantee, to within the validity of the perturbations, that the residual states are not destabilized by control systems designed from the reduced model. A numerical example is provided to illustrate the application of the aggregation and control system design method.
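A minimal numerical sketch of reduction guided by sensor/actuator influence is given below. It is a simplified stand-in for the aggregation procedure described above, not the paper's method: it simply ranks the undamped modes of a toy second-order system M q'' + K q = B u, y = C q by how strongly actuators excite and sensors observe them, and truncates. All matrices are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy coupled second-order system  M q'' + K q = B u,  y = C q
n, n_keep = 8, 3
M = np.eye(n)                        # unit mass matrix for simplicity
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # symmetric positive-definite stiffness
B = rng.standard_normal((n, 2))      # actuator influence matrix
C = rng.standard_normal((2, n))      # sensor influence matrix

# Undamped modes: K phi = omega^2 M phi (M = I, so ordinary eigenproblem)
w2, Phi = np.linalg.eigh(K)

# Rank modes by combined actuator excitation and sensor observability
influence = np.linalg.norm(Phi.T @ B, axis=1) * np.linalg.norm(C @ Phi, axis=0)
keep = np.argsort(influence)[::-1][:n_keep]

Phi_r = Phi[:, keep]                 # reduced (aggregated) basis
K_r = Phi_r.T @ K @ Phi_r            # reduced stiffness (diagonal here)
B_r = Phi_r.T @ B
C_r = C @ Phi_r

print("retained modal frequencies:", np.sqrt(w2[keep]))
```

States outside the retained basis play the role of the residual states; the paper's perturbation expressions bound how control designed on (K_r, B_r, C_r) spills over onto them.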
The use of remotely sensed soil moisture data in large-scale models of the hydrological cycle
NASA Technical Reports Server (NTRS)
Salomonson, V. V.; Gurney, R. J.; Schmugge, T. J.
1985-01-01
Manabe (1982) has reviewed numerical simulations of the atmosphere which provided a framework within which an examination of the dynamics of the hydrological cycle could be conducted. It was found that the climate is sensitive to soil moisture variability in space and time. The challenge now is to improve observations of soil moisture so as to provide updated boundary-condition inputs to large-scale models that include the hydrological cycle. Attention is given to the significance of understanding soil moisture variations, soil moisture estimation using remote sensing, and energy and moisture balance modeling.
NASA Technical Reports Server (NTRS)
Brewer, W. V.; Rasis, E. P.; Shih, H. R.
1993-01-01
Results from NASA/HBCU Grant No. NAG-1-1125 are summarized. Designs developed for model fabrication, exploratory concepts drafted, interface of computer with robot and end-effector, and capability enhancement are discussed.
Atmospheric Science Data Center
2013-04-19
... (right) The structure of tightly packed "closed cells" in a layer of marine stratocumulus over the southeastern Pacific Ocean ... they are bright and abundant, and reflect a large amount of solar energy toward space. They are difficult to represent in climate models ...
NASA Astrophysics Data System (ADS)
Michel, Eric; Belkacem, Kevin; Samadi, Reza; Assis Peralta, Raphael de; Renié, Christian; Abed, Mahfoudh; Lin, Guangyuan; Christensen-Dalsgaard, Jørgen; Houdek, Günter; Handberg, Rasmus; Gizon, Laurent; Burston, Raymond; Nagashima, Kaori; Pallé, Pere; Poretti, Ennio; Rainer, Monica; Mistò, Angelo; Panzera, Maria Rosa; Roth, Markus
2017-10-01
The growing amount of seismic data available from space missions (SOHO, CoRoT, Kepler, SDO, …) as well as from ground-based facilities (GONG, BiSON, ground-based large programmes…), stellar modelling, and numerical simulations creates new scientific perspectives, such as characterizing stellar populations in our Galaxy or planetary systems by providing model-independent global properties of stars such as mass, radius, and surface gravity to within several percent accuracy, as well as constraints on the age. These applications address a broad scientific community beyond the solar and stellar one and require combining indices elaborated from data in different databases (e.g. seismic archives and ground-based spectroscopic surveys). It is thus a basic requirement to develop simple and efficient access to these various data resources and dedicated tools. In the framework of the European project SpaceInn (FP7), several data sources have been developed or upgraded. The Seismic Plus Portal has been developed, where synthetic descriptions of the most relevant existing data sources can be found, together with tools that help locate existing data for a given object or period and assist with data queries. This project has been developed within the Virtual Observatory (VO) framework. In this paper, we give a review of the various facilities and tools developed within this programme. The SpaceInn project (Exploitation of Space Data for Innovative Helio- and Asteroseismology) was initiated by the European Helio- and Asteroseismology Network (HELAS).
Concepts and challenges in cancer risk prediction for the space radiation environment.
Barcellos-Hoff, Mary Helen; Blakely, Eleanor A; Burma, Sandeep; Fornace, Albert J; Gerson, Stanton; Hlatky, Lynn; Kirsch, David G; Luderer, Ulrike; Shay, Jerry; Wang, Ya; Weil, Michael M
2015-07-01
Cancer is an important long-term risk for astronauts exposed to protons and high-energy charged particles during travel and residence on asteroids, the moon, and other planets. NASA's Biomedical Critical Path Roadmap defines the carcinogenic risks of radiation exposure as one of four type I risks. A type I risk represents a demonstrated, serious problem with no countermeasure concepts, and may be a potential "show-stopper" for long-duration spaceflight. Estimating the carcinogenic risks for humans who will be exposed to heavy ions during deep space exploration has very large uncertainties at present. There are no human data that address risk from extended exposure to complex radiation fields. The overarching goal in this area is to improve risk modeling by providing biological insight and mechanistic analysis of radiation quality effects on carcinogenesis. Understanding mechanisms will provide routes to modeling and predicting risk and designing countermeasures. This white paper reviews broad issues related to experimental models and concepts in space radiation carcinogenesis, as well as the current state of the field, to place into context recent findings and concepts derived from the NASA Space Radiation Program. Copyright © 2015 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.
Nuclear model calculations and their role in space radiation research
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Cucinotta, F. A.; Heilbronn, L. H.
2002-01-01
Proper assessment of spacecraft shielding requirements and concomitant estimates of risk to spacecraft crews from energetic space radiation require accurate, quantitative methods of characterizing the compositional changes in these radiation fields as they pass through thick absorbers. These quantitative methods are also needed for characterizing accelerator beams used in space radiobiology studies. Because of the impracticality or impossibility of measuring these altered radiation fields inside critical internal body organs of biological test specimens and humans, computational methods rather than direct measurements must be used. Since composition changes in the fields arise from nuclear interaction processes (elastic, inelastic, and breakup), knowledge of the appropriate cross sections and spectra must be available. Experiments alone cannot provide the necessary cross section and secondary particle (neutron and charged particle) spectral data because of the large number of nuclear species and wide range of energies involved in space radiation research. Hence, nuclear models are needed. In this paper, current methods of predicting total and absorption cross sections and secondary particle (neutron and ion) yields and spectra for space radiation protection analyses are reviewed. Model shortcomings are discussed and future needs presented. © 2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lewandowski, Jerzy; Lin, Chun-Yen
2017-03-01
We explicitly solved the anomaly-free quantum constraints proposed by Tomlin and Varadarajan for the weak Euclidean model of canonical loop quantum gravity, in a large subspace of the model's kinematic Hilbert space, which is the space of the charge network states. In doing so, we first identified the subspace on which each of the constraints acts convergently, and then, by explicitly evaluating such actions, we found the complete set of solutions in the identified subspace. We showed that the space of solutions consists of two classes of states: the first has a property involving the condition known from the Minkowski theorem on polyhedra, and the second satisfies a weaker form of spatial diffeomorphism invariance.
NASA Technical Reports Server (NTRS)
Zapata, Edgar
2012-01-01
This paper presents past and current work in dealing with indirect industry and NASA costs when providing cost estimation or analysis for NASA projects and programs. Indirect costs, defined here as those project costs removed from the actual hands-on hardware or software labor, make up most of the costs of today's complex, large-scale NASA space/industry projects. This appears to be the case across phases, from research into development, into production, and into the operation of the system. Space transportation is the case of interest here. Modeling and cost estimation as a process, rather than a product, is emphasized. Analysis as a series of belief systems in play among decision makers and decision factors is also emphasized, to provide context.
What If We Had A Magnetograph at Lagrangian L5?
NASA Technical Reports Server (NTRS)
Pevtsov, Alexei A.; Bertello, Luca; MacNeice, Peter; Petrie, Gordon
2016-01-01
Synoptic Carrington charts of the magnetic field are routinely used as input for models of the solar wind and other aspects of space weather forecasting. However, these maps are constructed using only observations from the solar hemisphere facing Earth. The evolution of magnetic flux on the "farside" of the Sun, which may affect the topology of the coronal field on the "nearside," is largely ignored. It is commonly accepted that placing a magnetograph at the Lagrangian L5 point would improve space weather forecasting; however, quantitative estimates of the anticipated improvements have been lacking. We use longitudinal magnetograms from the Synoptic Optical Long-term Investigations of the Sun (SOLIS) to investigate how adding data from the L5 point would affect the outcome of two major models used in space weather forecasting.
On the Space-Time Structure of Sheared Turbulence
NASA Astrophysics Data System (ADS)
de Maré, Martin; Mann, Jakob
2016-09-01
We develop a model that predicts all two-point correlations in high Reynolds number turbulent flow, in both space and time. This is accomplished by combining the design philosophies behind two existing models, the Mann spectral velocity tensor, in which isotropic turbulence is distorted according to rapid distortion theory, and Kristensen's longitudinal coherence model, in which eddies are simultaneously advected by larger eddies as well as decaying. The model is compared with data from both observations and large-eddy simulations and is found to predict spatial correlations comparable to the Mann spectral tensor and temporal coherence better than any known model. Within the developed framework, Lagrangian two-point correlations in space and time are also predicted, and the predictions are compared with measurements of isotropic turbulence. The required input to the models, which are formulated as spectral velocity tensors, can be estimated from measured spectra or be derived from the rate of dissipation of turbulent kinetic energy, the friction velocity and the mean shear of the flow. The developed models can, for example, be used in wind-turbine engineering, in applications such as lidar-assisted feed forward control and wind-turbine wake modelling.
NASA Technical Reports Server (NTRS)
Fuh, Jon-Shen; Panda, Brahmananda; Peters, David A.
1988-01-01
A finite element approach is presented for the modeling of rotorcraft undergoing elastic deformation in addition to large rigid body motion with respect to inertial space, with particular attention given to the coupling of the rotor and fuselage subsystems subject to large relative rotations. The component synthesis technique used here allows the coupling of rotors to the fuselage for different rotorcraft configurations. The formulation is general and applicable to any rotorcraft vibration, aeroelasticity, and dynamics problem.
The influence of solid rocket motor retro-burns on the space debris environment
NASA Astrophysics Data System (ADS)
Stabroth, Sebastian; Homeister, Maren; Oswald, Michael; Wiedemann, Carsten; Klinkrad, Heiner; Vörsmann, Peter
The ESA space debris population model MASTER (Meteoroid and Space Debris Terrestrial Environment Reference) considers firings of solid rocket motors (SRM) as a debris source, with the associated generation of slag and dust particles. The resulting slag and dust population is a major contributor to the sub-millimetre size debris environment in Earth orbit. The current model version, MASTER-2005, is based on the simulation of 1076 orbital SRM firings which contributed to the long-term debris environment. A comparison of the modelled flux with impact data from returned surfaces shows that the shape and quantity of the modelled SRM dust distribution match those of recent Hubble Space Telescope (HST) solar array measurements very well. However, the absolute flux level for dust is under-predicted for some of the analysed Long Duration Exposure Facility (LDEF) surfaces. This suggests that some past SRM firings are not included in the current event database. The most suitable candidates for these firings are the large number of SRM retro-burns of return capsules. Objects released by those firings have highly eccentric orbits with perigees in the lower regions of the atmosphere. Thus, they produce no long-term effect on the debris environment. However, the large number of such firings during the on-orbit time frame of LDEF might lead to an increase of the dust population for some of the LDEF surfaces. In this paper, the influence of SRM retro-burns on the short- and long-term debris environment is analysed. The existing firing database is updated with information gathered on some 800 Russian retro-firings. Each firing is simulated with the MASTER population generation module. The resulting population is compared against the existing background population of SRM slag and dust particles in terms of spatial density and flux predictions.
Prediction of frequency and exposure level of solar particle events.
Kim, Myung-Hee Y; Hayat, Matthew J; Feiveson, Alan H; Cucinotta, Francis A
2009-07-01
For future space missions outside of the Earth's magnetic field, the risk of radiation exposure from solar particle events (SPEs) during extra-vehicular activities (EVAs) or in lightly shielded vehicles is a major concern when designing radiation protection including determining sufficient shielding requirements for astronauts and hardware. While the expected frequency of SPEs is strongly influenced by solar modulation, SPE occurrences themselves are chaotic in nature. We report on a probabilistic modeling approach, where a cumulative expected occurrence curve of SPEs for a typical solar cycle was formed from a non-homogeneous Poisson process model fitted to a database of proton fluence measurements of SPEs that occurred during the past 5 solar cycles (19-23) and those of large SPEs identified from impulsive nitrate enhancements in polar ice. From the fitted model, we then estimated the expected frequency of SPEs at any given proton fluence threshold with energy >30 MeV (Phi(30)) during a defined space mission period. Analytic energy spectra of 34 large SPEs observed in the space era were fitted over broad energy ranges extending to GeV, and subsequently used to calculate the distribution of mGy equivalent (mGy-Eq) dose for a typical blood-forming organ (BFO) inside a spacecraft as a function of total Phi(30) fluence. This distribution was combined with a simulation of SPE events using the Poisson model to estimate the probability of the BFO dose exceeding the NASA 30-d limit of 250 mGy-Eq per 30 d. These results will be useful in implementing probabilistic risk assessment approaches at NASA and guidelines for protection systems for astronauts on future space exploration missions.
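The occurrence model described above is a non-homogeneous Poisson process. A hedged sketch of sampling such a process by the standard thinning method is shown below; the Gaussian-shaped intensity curve peaking near solar maximum is an invented illustration, not the paper's fitted intensity.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 11.0  # length of one solar cycle, years

def intensity(t: float) -> float:
    """Toy SPE occurrence rate (events/year) peaking ~4 yr into the cycle.
    Purely illustrative; not the fitted model from the paper."""
    return 8.0 * np.exp(-0.5 * ((t - 4.0) / 1.5) ** 2)

LAM_MAX = 8.0  # upper bound on the intensity, required by thinning

def sample_nhpp(T: float, lam_max: float) -> np.ndarray:
    """Thinning: draw candidate events from a homogeneous process at rate
    lam_max, then accept each with probability intensity(t)/lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:
            events.append(t)

counts = [len(sample_nhpp(T, LAM_MAX)) for _ in range(2000)]
print(f"mean SPE count per simulated cycle: {np.mean(counts):.1f}")
```

Repeating this simulation and recording, for each event, a fluence drawn from a fitted spectrum distribution is the basic recipe for estimating exceedance probabilities such as the BFO dose limit discussed above.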
Cross-correlating Planck tSZ with RCSLenS weak lensing: implications for cosmology and AGN feedback
NASA Astrophysics Data System (ADS)
Hojjati, Alireza; Tröster, Tilman; Harnois-Déraps, Joachim; McCarthy, Ian G.; van Waerbeke, Ludovic; Choi, Ami; Erben, Thomas; Heymans, Catherine; Hildebrandt, Hendrik; Hinshaw, Gary; Ma, Yin-Zhe; Miller, Lance; Viola, Massimo; Tanimura, Hideki
2017-10-01
We present measurements of the spatial mapping between (hot) baryons and the total matter in the Universe, via the cross-correlation between the thermal Sunyaev-Zeldovich (tSZ) map from Planck and the weak gravitational lensing maps from the Red Cluster Sequence Lensing Survey (RCSLenS). The cross-correlations are performed at the map level, where all the sources (including diffuse intergalactic gas) contribute to the signal. We consider two configuration-space correlation function estimators, ξ^{y-κ} and ξ^{y-γ_t}, and a Fourier-space estimator, C_ℓ^{y-κ}, in our analysis. We detect a significant correlation out to 3° of angular separation on the sky. Based on statistical noise only, we can report 13σ and 17σ detections of the cross-correlation using the configuration-space y-κ and y-γ_t estimators, respectively. Including a heuristic estimate of the sampling variance yields detection significances of 7σ and 8σ, respectively. A similar level of detection is obtained from the Fourier-space estimator, C_ℓ^{y-κ}. As each estimator probes different dynamical ranges, their combination improves the significance of the detection. We compare our measurements with predictions from the cosmo-OverWhelmingly Large Simulations suite of cosmological hydrodynamical simulations, in which different galactic feedback models are implemented. We find that a model with considerable active galactic nuclei (AGN) feedback, which removes large quantities of hot gas from galaxy groups, together with Wilkinson Microwave Anisotropy Probe 7-yr best-fitting cosmological parameters, provides the best match to the measurements. All baryonic models in the context of a Planck cosmology overpredict the observed signal. Similar cosmological conclusions are drawn when we employ a halo model with the observed 'universal' pressure profile.
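The idea of a Fourier-space cross-spectrum estimator of the C_ℓ^{y-κ} type can be illustrated on simulated flat-sky maps that share a common signal. Everything below (map size, noise levels, the averaging over all modes rather than ℓ-bins) is a deliberately simplified invention, not the survey pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two toy flat-sky maps sharing a common signal, plus an uncorrelated control
n = 128
signal = rng.standard_normal((n, n))
y_map = signal + 0.5 * rng.standard_normal((n, n))   # stand-in "tSZ" map
kappa = signal + 0.5 * rng.standard_normal((n, n))   # stand-in "lensing" map
indep = rng.standard_normal((n, n))                  # uncorrelated control map

def cross_power(a: np.ndarray, b: np.ndarray) -> float:
    """Mean of Re(A B*) over all Fourier modes: a crude, unbinned
    cross-spectrum estimator (a real pipeline would bin in ell)."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    return float(np.mean((fa * np.conj(fb)).real) / a.size)

print(f"correlated maps:   {cross_power(y_map, kappa):.3f}")
print(f"uncorrelated maps: {cross_power(y_map, indep):.3f}")
```

The correlated pair yields a cross-power near the signal variance while the control pair averages to zero, which is the basic logic behind quoting a detection significance against statistical noise.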
Anomalous properties of the acoustic excitations in glasses on the mesoscopic length scale.
Monaco, Giulio; Mossa, Stefano
2009-10-06
The low-temperature thermal properties of dielectric crystals are governed by acoustic excitations with large wavelengths that are well described by plane waves. This is the Debye model: it rests on the assumption that the medium is an elastic continuum, holds true for acoustic wavelengths that are large on the microscopic scale fixed by the interatomic spacing, and gradually breaks down on approaching that scale. Glasses are characterized as well by universal low-temperature thermal properties that are, however, anomalous with respect to those of the corresponding crystalline phases. Related universal anomalies also appear in the low-frequency vibrational density of states and, despite a longstanding debate, remain poorly understood. By using molecular dynamics simulations of a model monatomic glass of extremely large size, we show that in glasses the structural disorder undermines the Debye model in a subtle way: the elastic continuum approximation for the acoustic excitations breaks down abruptly on the mesoscopic, medium-range-order length scale of approximately 10 interatomic spacings, where it still works well for the corresponding crystalline systems. On this scale, the sound velocity shows a marked reduction with respect to the macroscopic value. This reduction turns out to be closely related to the universal excess over the Debye model prediction found in glasses at frequencies of approximately 1 THz in the vibrational density of states, or at temperatures of approximately 10 K in the specific heat.
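The Debye model invoked above has a standard closed form for the specific heat, C_V = 9 N k_B (T/θ_D)^3 ∫₀^{θ_D/T} x⁴ eˣ/(eˣ−1)² dx, which recovers Dulong-Petit at high T and the T³ law at low T. The sketch below evaluates it numerically; the Debye temperature is an arbitrary illustrative value.

```python
import numpy as np

def debye_cv(T: float, theta_D: float) -> float:
    """Debye specific heat per atom, in units of k_B (tends to 3 at high T).
    Evaluates the standard Debye integral on a fine grid."""
    x_max = theta_D / T
    x = np.linspace(1e-6, x_max, 20000)
    integrand = x**4 * np.exp(x) / np.expm1(x) ** 2
    integral = np.sum(integrand) * (x[1] - x[0])   # simple Riemann sum
    return 9.0 * (T / theta_D) ** 3 * integral

theta = 300.0  # illustrative Debye temperature, K
print(f"C_V/k_B at T >> theta_D: {debye_cv(3000.0, theta):.3f}  (Dulong-Petit: 3)")
# Low-T Debye T^3 law: C_V ~ (12 pi^4 / 5) (T/theta_D)^3 k_B
low_T_law = (12 * np.pi**4 / 5) * (10.0 / theta) ** 3
print(f"C_V/k_B at T = 10 K: {debye_cv(10.0, theta):.5f} vs T^3 law {low_T_law:.5f}")
```

The excess modes discussed in the abstract show up precisely as a deviation of the measured glassy specific heat above this Debye prediction near 10 K.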
Space Science at Los Alamos National Laboratory
NASA Astrophysics Data System (ADS)
Smith, Karl
2017-09-01
The Space Science and Applications group (ISR-1) in the Intelligence and Space Research (ISR) division at Los Alamos National Laboratory leads a number of space science missions for civilian and defense-related programs. In support of these missions, the group develops sensors capable of detecting nuclear emissions and measuring radiation in space, including γ-ray, X-ray, charged-particle, and neutron detection. The group is involved in many stages of the lifetime of these sensors, including mission concept and design, simulation and modeling, calibration, and data analysis. These missions support monitoring of the atmosphere and near-Earth space environment for nuclear detonations, as well as monitoring of the local space environment, including space-weather events. Expertise in this area has been established over a long history of involvement with cutting-edge projects dating back to the first space-based monitoring effort, Project Vela. The group's interests cut across a large range of topics, including non-proliferation, space situational awareness, nuclear physics, material science, space physics, astrophysics, and planetary physics.
Control law synthesis and optimization software for large order aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.; Pototzky, A.; Noll, Thomas
1989-01-01
A flexible aircraft or space structure with active control is typically modeled by a large-order state space system of equations in order to accurately represent the rigid and flexible body modes, unsteady aerodynamic forces, actuator dynamics, and gust spectra. The control law of this multi-input/multi-output (MIMO) system is expected to satisfy multiple design requirements on the dynamic loads, responses, actuator deflection and rate limitations, and stability margins, yet should be simple enough to be implemented on an onboard digital microprocessor. A software package for performing analog or digital control law synthesis for such a system, using optimal control theory and constrained optimization techniques, is described.
Caldwell, Robert R
2011-12-28
The challenge to understand the physical origin of the cosmic acceleration is framed as a problem of gravitation. Specifically, does the relationship between stress-energy and space-time curvature differ on large scales from the predictions of general relativity? In this article, we describe efforts to model and test a generalized relationship between the matter and the metric using cosmological observations. Late-time tracers of large-scale structure, including the cosmic microwave background, weak gravitational lensing, and clustering, are shown to provide good tests of the proposed solution. Current data are very close to providing a critical test, leaving only a small window in parameter space in the case that the generalized relationship is scale-free above galactic scales.
Laboratory simulation of space plasma phenomena*
NASA Astrophysics Data System (ADS)
Amatucci, B.; Tejero, E. M.; Ganguli, G.; Blackwell, D.; Enloe, C. L.; Gillman, E.; Walker, D.; Gatling, G.
2017-12-01
Laboratory devices, such as the Naval Research Laboratory's Space Physics Simulation Chamber, are large-scale experiments dedicated to the creation of large-volume plasmas with parameters realistically scaled to those found in various regions of the near-Earth space plasma environment. Such devices make valuable contributions to the understanding of space plasmas by investigating phenomena under carefully controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. By working in collaboration with in situ experimentalists to create realistic conditions scaled to those found during the observations of interest, the microphysics responsible for the observed events can be investigated in detail not possible in space. To date, numerous investigations of phenomena such as plasma waves, wave-particle interactions, and particle energization have been successfully performed in the laboratory. In addition to investigations such as plasma wave and instability studies, the laboratory devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this presentation, we will describe several examples of the laboratory investigation of space plasma waves and instabilities and diagnostic development. *This work supported by the NRL Base Program.
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek
2016-04-01
Air pollution is one of the major concerns for human health. Associations between air pollution and health are often calculated using long-term (i.e. years to decades) information on personal exposure for each individual in a cohort. Personal exposure is the air pollution aggregated along the space-time path visited by an individual. As air pollution may vary considerably in space and time, for instance due to motorised traffic, estimating the spatio-temporal location of a person's space-time path is important for identifying personal exposure. However, long-term exposure is mostly calculated using the air pollution concentration at the x, y location of someone's home, which does not consider that individuals are mobile (commuting, recreation, relocation). This assumption is often made because it is a major challenge to estimate space-time paths for all individuals in large cohorts, mostly because limited information on the mobility of individuals is available. We address this issue by evaluating multiple approaches for the calculation of space-time paths, thereby estimating the personal exposure along these space-time paths with hyper-resolution air pollution maps at national scale. This allows us to evaluate the effect of the space-time path and resulting personal exposure. Air pollution (e.g. NO2, PM10) was mapped for the entire Netherlands at a resolution of 5×5 m² using the land use regression models developed in the European Study of Cohorts for Air Pollution Effects (ESCAPE, http://escapeproject.eu/) and the open source software PCRaster (http://www.pcraster.eu). The models use predictor variables like population density, land use, and traffic-related data sets, and are able to model spatial variation and within-city variability of annual average concentration values.
We approximated space-time paths for all individuals in a cohort using various aggregations, including those representing space-time paths as the outline of a person's home or associated parcel of land, the 4-digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths, for each individual in a cohort. Preliminary results show considerable differences in a person's exposure across these approaches of space-time path aggregation, presumably because air pollution shows large variation over short distances.
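Two of the aggregations above can be sketched numerically. The toy example below (a synthetic NO2 field with a single road, a hypothetical home location, and a 300 m buffer radius; none of these numbers come from the study) contrasts exposure taken at the home cell with the mean over a circular buffer around the home:

```python
import numpy as np

rng = np.random.default_rng(1)
nrow = ncol = 400                            # 2 km x 2 km grid at 5 m resolution
x, y = np.meshgrid(np.arange(ncol), np.arange(nrow))

# Synthetic NO2 field: background + a sharp roadside increment + noise.
road_dist = np.abs(y - 120)                  # one road along row 120
no2 = 18.0 + 35.0 * np.exp(-road_dist / 12.0) + rng.normal(0.0, 1.0, (nrow, ncol))

home = (140, 200)                            # (row, col) of a hypothetical residence
radius_m, cell_m = 300.0, 5.0
dist = np.hypot(y - home[0], x - home[1]) * cell_m

point_value = no2[home]                      # home-location exposure
buffer_mean = no2[dist <= radius_m].mean()   # circular-buffer exposure

# The two aggregations generally disagree because NO2 varies sharply
# near the road -- the short-distance variation noted in the abstract.
```

Swapping the circular mask for a postal-code polygon or a commuting corridor reuses the same masked-mean pattern.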
NASA Technical Reports Server (NTRS)
1984-01-01
The large space structures technology development missions to be performed on an early manned space station were studied and defined, and the resources needed and the design implications for an early space station to carry out these missions were determined. Emphasis is placed on greater detail in mission designs and space station resource requirements.
Launch vehicle selection model
NASA Technical Reports Server (NTRS)
Montoya, Alex J.
1990-01-01
Over the next 50 years, humans will be heading for the Moon and Mars to build scientific bases to gain further knowledge about the universe and to develop rewarding space activities. These large-scale projects will last many years and will require large amounts of mass to be delivered to Low Earth Orbit (LEO). It will take a great deal of planning to complete these missions in an efficient manner. The planning of a future Heavy Lift Launch Vehicle (HLLV) will significantly impact the overall multi-year launching cost for the vehicle fleet, depending upon when the HLLV will be ready for use. It is therefore desirable to develop a model in which many trade studies can be performed. In one sample multi-year space program analysis, the total launch vehicle cost of implementing the program was reduced from 50 percent to 25 percent. This indicates how critical it is to reduce space logistics costs. A linear programming model has been developed to answer such questions. The model is now in its second phase of development, and this paper will address the capabilities of the model and its intended uses. The main emphasis over the past year was to make the model user-friendly and to incorporate additional realistic constraints that are difficult to represent mathematically. We have developed a methodology in which the user has to be knowledgeable about the mission model and the requirements of the payloads. We have found a representation that cuts down the solution space of the problem by inserting preliminary tests to eliminate infeasible vehicle solutions. The paper will address the handling of these additional constraints and the methodology for incorporating new costing information utilizing learning curve theory. The paper will review several test cases that explore the preferred vehicle characteristics and the preferred period of construction, i.e., within the next decade or in the first decade of the next century.
Finally, the paper will explore the interaction between the primary mission model (all payloads going from Earth to Low Earth Orbit (LEO)) and the secondary mission model (all payloads from LEO to Lunar and LEO to Mars and return).
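At its core, the vehicle-selection trade-off described above is a cost-minimizing linear program. The sketch below, with invented per-launch costs and payload capacities (not the paper's data), solves the continuous relaxation with SciPy; the actual model would add integrality of launch counts and the scheduling and learning-curve constraints discussed in the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical fleet: two candidate vehicles, each with a cost per
# launch ($M) and a payload capacity to LEO (t). Choose launch counts
# x >= 0 minimizing total cost while delivering the required mass.
cost = np.array([90.0, 150.0])      # $M per launch of vehicles A, B
capacity = np.array([20.0, 45.0])   # tonnes to LEO per launch
required_mass = 180.0               # tonnes the mission model must deliver

# linprog minimizes c.x subject to A_ub.x <= b_ub, so the delivery
# requirement capacity.x >= required_mass is negated.
res = linprog(c=cost,
              A_ub=-capacity.reshape(1, -1),
              b_ub=[-required_mass],
              bounds=[(0, None), (0, None)])
```

With these numbers, vehicle B's lower cost per tonne makes four B launches ($600M) the optimum of the relaxation.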
Research on The Construction of Flexible Multi-body Dynamics Model based on Virtual Components
NASA Astrophysics Data System (ADS)
Dong, Z. H.; Ye, X.; Yang, F.
2018-05-01
Focusing on the harsh operating conditions of the space manipulator, which cannot withstand relatively large collision momenta, this paper proposes a new concept and technology called soft-contact technology. To solve the collision dynamics problem of the flexible multi-body system raised by this technology, the paper also proposes the concepts of virtual components and virtual hinges, constructs a flexible dynamic model based on virtual components, and studies its solution. On this basis, NX is used to model and compare simulations of the space manipulator in three different modes. The results show that a model combining multiple rigid bodies, flexible body hinges, and controllable damping can effectively control the amplitude of the force and torque caused by collision with the target satellite.
NASA Astrophysics Data System (ADS)
Lemarié, F.; Debreu, L.
2016-02-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while remaining robust, in terms of accuracy, to changes in Courant number. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have successfully used fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost. To our knowledge no unconditionally stable scheme with such high-order accuracy in time and space has been presented so far in the literature.
Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.
Protection from Space Radiation
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Wilson, J. W.; Shinn, J. L.; Singleterry, R. C.; Clowdsley, M. S.; Cucinotta, F. A.; Badhwar, G. D.; Kim, M. Y.; Badavi, F. F.; Heinbockel, J. H.
2000-01-01
The exposures anticipated for our astronauts in the Human Exploration and Development of Space (HEDS) enterprise will be significantly higher (both annual and career) than for any other occupational group. In addition, the exposures in deep space result largely from the Galactic Cosmic Rays (GCR), for which there is as yet little experience. Some evidence exists indicating that conventional linear energy transfer (LET) defined protection quantities (quality factors) may not be appropriate [1,2]. The purpose of this presentation is to evaluate our current understanding of radiation protection with laboratory and flight experimental data and to discuss recent improvements in interaction models and transport methods.
Tuning the fragility of a glass-forming liquid by curving space.
Sausset, François; Tarjus, Gilles; Viot, Pascal
2008-10-10
We investigate the influence of space curvature, and of the associated frustration, on the dynamics of a model glass former: a monatomic liquid on the hyperbolic plane. We find that the system's fragility, i.e., the sensitivity of the relaxation time to temperature changes, increases as one decreases the frustration. As a result, curving space provides a way to tune fragility and make it as large as wanted. We also show that the nature of the emerging "dynamic heterogeneities", another distinctive feature of slowly relaxing systems, is directly connected to the presence of frustration-induced topological defects.
On Space Exploration and Human Error: A Paper on Reliability and Safety
NASA Technical Reports Server (NTRS)
Bell, David G.; Maluf, David A.; Gawdiak, Yuri
2005-01-01
NASA space exploration should largely address a problem class in reliability and risk management stemming primarily from human error, system risk, and multi-objective trade-off analysis, by conducting research into system complexity, risk characterization and modeling, and system reasoning. In general, in every mission we can distinguish risk in three possible ways: (a) known-known, (b) known-unknown, and (c) unknown-unknown. It is almost certain that space exploration will experience some of the known or unknown risks embedded in the Apollo, Shuttle, or Station missions unless something alters how NASA perceives and manages safety and reliability.
NASA Technical Reports Server (NTRS)
Soosaar, K.
1982-01-01
Some performance requirements and development needs for the design of large space structures are described. Areas of study include: (1) dynamic response of large space structures; (2) structural control and systems integration; (3) attitude control; and (4) large optics and flexibility. Reference is made to a large space telescope.
Concepts and challenges in cancer risk prediction for the space radiation environment
NASA Astrophysics Data System (ADS)
Barcellos-Hoff, Mary Helen; Blakely, Eleanor A.; Burma, Sandeep; Fornace, Albert J.; Gerson, Stanton; Hlatky, Lynn; Kirsch, David G.; Luderer, Ulrike; Shay, Jerry; Wang, Ya; Weil, Michael M.
2015-07-01
Cancer is an important long-term risk for astronauts exposed to protons and high-energy charged particles during travel and residence on asteroids, the moon, and other planets. NASA's Biomedical Critical Path Roadmap defines the carcinogenic risks of radiation exposure as one of four type I risks. A type I risk represents a demonstrated, serious problem with no countermeasure concepts, and may be a potential "show-stopper" for long duration spaceflight. Estimating the carcinogenic risks for humans who will be exposed to heavy ions during deep space exploration has very large uncertainties at present. There are no human data that address risk from extended exposure to complex radiation fields. The overarching goal in this area is to improve risk modeling by providing biological insight and mechanistic analysis of radiation quality effects on carcinogenesis. Understanding mechanisms will provide routes to modeling and predicting risk and designing countermeasures. This white paper reviews broad issues related to experimental models and concepts in space radiation carcinogenesis, as well as the current state of the field, to place into context recent findings and concepts derived from the NASA Space Radiation Program.
Space station architectural elements model study
NASA Technical Reports Server (NTRS)
Taylor, T. C.; Spencer, J. S.; Rocha, C. J.; Kahn, E.; Cliffton, E.; Carr, C.
1987-01-01
The worksphere, a user controlled computer workstation enclosure, was expanded in scope to an engineering workstation suitable for use on the Space Station as a crewmember desk in orbit. The concept was also explored as a module control station capable of enclosing enough equipment to control the station from each module. The concept has commercial potential for the Space Station and surface workstation applications. The central triangular beam interior configuration was expanded and refined to seven different beam configurations. These included triangular on center, triangular off center, square, hexagonal small, hexagonal medium, hexagonal large and the H beam. Each was explored with some considerations as to the utilities and a suggested evaluation factor methodology was presented. Scale models of each concept were made. The models were helpful in researching the seven beam configurations and determining the negative residual (unused) volume of each configuration. A flexible hardware evaluation factor concept is proposed which could be helpful in evaluating interior space volumes from a human factors point of view. A magnetic version with all the graphics is available from the author or the technical monitor.
Evaluation of Cartosat-1 Multi-Scale Digital Surface Modelling Over France
Gianinetto, Marco
2009-01-01
On 5 May 2005, the Indian Space Research Organization launched Cartosat-1, the eleventh satellite of its constellation, dedicated to the stereo viewing of the Earth's surface for terrain modeling and large-scale mapping, from the Satish Dhawan Space Centre (India). In early 2006, the Indian Space Research Organization started the Cartosat-1 Scientific Assessment Programme, jointly established with the International Society for Photogrammetry and Remote Sensing. Within this framework, this study evaluated the capabilities of digital surface modeling from Cartosat-1 stereo data for the French test sites of Mausanne les Alpilles and Salon de Provence. The investigation pointed out that for hilly territories it is possible to produce high-resolution digital surface models with a root mean square error less than 7.1 m and a linear error at 90% confidence level less than 9.5 m. The accuracy of the generated digital surface models also fulfilled the requirements of the French Reference 3D®, so Cartosat-1 data may be used to produce or update such kinds of products. PMID:22412311
Analysis and testing of a soft actuation system for segmented reflector articulation and isolation
NASA Technical Reports Server (NTRS)
Jandura, Louise; Agronin, Michael L.
1991-01-01
Segmented reflectors have been proposed for space-based applications such as optical communication and large-diameter telescopes. An actuation system for mirrors in a space-based segmented mirror array has been developed as part of the National Aeronautics and Space Administration-sponsored Precision Segmented Reflector program. The actuation system, called the Articulated Panel Module (APM), articulates a mirror panel in 3 degrees of freedom in the submicron regime, isolates the panel from structural motion, and simplifies space assembly of the mirrors to the reflector backup truss. A breadboard of the APM has been built and is described. Three-axis modeling, analysis, and testing of the breadboard is discussed.
Shilov, Ignat V; Seymour, Sean L; Patel, Alpesh A; Loboda, Alex; Tang, Wilfred H; Keating, Sean P; Hunter, Christie L; Nuwaysir, Lydia M; Schaeffer, Daniel A
2007-09-01
The Paragon Algorithm, a novel database search engine for the identification of peptides from tandem mass spectrometry data, is presented. Sequence Temperature Values are computed using a sequence tag algorithm, allowing the degree of implication by an MS/MS spectrum of each region of a database to be determined on a continuum. Counter to conventional approaches, features such as modifications, substitutions, and cleavage events are modeled with probabilities rather than by discrete user-controlled settings to consider or not consider a feature. The use of feature probabilities in conjunction with Sequence Temperature Values allows for a very large increase in the effective search space with only a very small increase in the actual number of hypotheses that must be scored. The algorithm has a new kind of user interface that removes the user expertise requirement, presenting control settings in the language of the laboratory that are translated to optimal algorithmic settings. To validate this new algorithm, a comparison with Mascot is presented for a series of analogous searches exploring the relative impact of increasing search space: with Mascot, by relaxing the tryptic digestion conformance requirements from trypsin to semitrypsin to no enzyme; and with the Paragon Algorithm, using its Rapid mode and Thorough mode with and without tryptic specificity. Although the two performed similarly for small search spaces, dramatic differences were observed for large search spaces. With the Paragon Algorithm, hundreds of biological and artifact modifications, all possible substitutions, and all levels of conformance to the expected digestion pattern can be searched in a single search step, yet the typical cost in search time is only 2-5 times that of a conventional small search space. Despite this large increase in effective search space, there is no drastic loss of discrimination of the kind that typically accompanies the exploration of large search spaces.
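The growth of the candidate space as digestion rules are relaxed can be illustrated by simple counting. The sketch below, on a made-up protein and a simplified trypsin rule (cleavage after K or R, ignoring the proline exception), counts candidate peptides of length 7-30 for fully tryptic, semitryptic, and no-enzyme searches:

```python
# Toy protein sequence (invented for illustration, not from the paper).
seq = "MKWVTFISLLFLFSSAYSRGVFRRDAHKSEVAHRFKDLGEENFKALVLIAFAQYLQQCPFEDHVK"

def n_candidates(min_specific_termini, min_len=7, max_len=30):
    """Count substrings of allowed length with enough tryptic termini.

    A terminus is 'specific' if it is a protein end or follows K/R.
    min_specific_termini = 2: fully tryptic; 1: semitryptic; 0: no enzyme.
    """
    n = 0
    for i in range(len(seq)):
        for j in range(i + min_len, min(i + max_len, len(seq)) + 1):
            specific = ((i == 0 or seq[i - 1] in "KR")
                        + (j == len(seq) or seq[j - 1] in "KR"))
            if specific >= min_specific_termini:
                n += 1
    return n

n_tryptic, n_semi, n_none = n_candidates(2), n_candidates(1), n_candidates(0)
```

Even on one short sequence the no-enzyme count dwarfs the fully tryptic one; over a whole proteome, plus modifications and substitutions, this is the search-space explosion the abstract describes.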
Analytical Verifications in Cryogenic Testing of NGST Advanced Mirror System Demonstrators
NASA Technical Reports Server (NTRS)
Cummings, Ramona; Levine, Marie; VanBuren, Dave; Kegley, Jeff; Green, Joseph; Hadaway, James; Presson, Joan; Cline, Todd; Stahl, H. Philip (Technical Monitor)
2002-01-01
Ground-based testing is a critical and costly part of component, assembly, and system verifications of large space telescopes. At such tests, however, with integral teamwork by planners, analysts, and test personnel, segments can be included to validate specific analytical parameters and algorithms at relatively low additional cost. This paper opens with the strategy of analytical verification segments added to vacuum cryogenic testing of Advanced Mirror System Demonstrator (AMSD) assemblies. These AMSD assemblies incorporate material and architecture concepts being considered in the Next Generation Space Telescope (NGST) design. The test segments for workmanship testing, cold survivability, and cold-operation optical throughput are supplemented by segments for analytical verification of specific structural, thermal, and optical parameters. Utilizing integrated modeling and separate materials testing, the paper continues with the support plan for analyses, data, and observation requirements during the AMSD testing, currently slated for late calendar year 2002 to mid calendar year 2003. The paper includes anomaly resolution, as gleaned by the authors from similar analytical verification support of a previous large space telescope, then closes with a draft of plans for parameter extrapolations, to form a well-verified portion of the integrated modeling being done for NGST performance predictions.
Quantiprot - a Python package for quantitative analysis of protein sequences.
Konopka, Bogumił M; Marciniak, Marta; Dyrka, Witold
2017-07-17
The field of protein sequence analysis is dominated by tools rooted in substitution matrices and alignments. A complementary approach is provided by methods of quantitative characterization. A major advantage of this approach is that quantitative properties define a multidimensional solution space, where sequences can be related to each other and differences can be meaningfully interpreted. Quantiprot is a software package in Python which provides a simple and consistent interface to multiple methods for quantitative characterization of protein sequences. The package can be used to calculate dozens of characteristics directly from sequences or using physico-chemical properties of amino acids. Besides basic measures, Quantiprot performs quantitative analysis of recurrence and determinism in a sequence, calculates distributions of n-grams, and computes the Zipf's law coefficient. We propose three main fields of application of the Quantiprot package. First, quantitative characteristics can be used in alignment-free similarity searches and in clustering of large and/or divergent sequence sets. Second, a feature space defined by quantitative properties can be used in comparative studies of protein families and organisms. Third, the feature space can be used for evaluating generative models, where a large number of sequences generated by a model can be compared to actually observed sequences.
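As an illustration of one characteristic mentioned above, a Zipf's law coefficient for a protein sequence can be computed as the log-log slope of the n-gram rank-frequency curve. This generic sketch mirrors the quantity, not Quantiprot's actual API, and runs on a made-up sequence:

```python
from collections import Counter
import numpy as np

# Made-up amino-acid sequence, purely for illustration.
seq = ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKAL"
       "PDAQFEVVHSLAKWKRQTLGQHDFSAGEGLYTHMKALRPDEDRLSPLHSVYVDQWDWE")

# Count 2-grams, sort frequencies in descending order, and fit the
# slope of log(frequency) vs log(rank): the Zipf coefficient.
counts = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
freqs = np.sort(np.array(list(counts.values()), dtype=float))[::-1]
ranks = np.arange(1, freqs.size + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
```

A more negative slope indicates a more skewed n-gram usage; comparing slopes across sequence sets is one way such a feature space can be populated.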
Dynamic test/analysis correlation using reduced analytical models
NASA Technical Reports Server (NTRS)
Mcgowan, Paul E.; Angelucci, A. Filippo; Javeed, Mehzad
1992-01-01
Test/analysis correlation is an important aspect of the verification of analysis models which are used to predict on-orbit response characteristics of large space structures. This paper presents results of a study using reduced analysis models for performing dynamic test/analysis correlation. The reduced test-analysis model (TAM) has the same number and orientation of degrees of freedom (DOF) as the test measurements. Two reduction methods, static (Guyan) reduction and the Improved Reduced System (IRS) reduction, are applied to the test/analysis correlation of a laboratory truss structure. Simulated test results and modal test data are used to examine the performance of each method. It is shown that the selection of DOF to be retained in the TAM is critical when large structural masses are involved. In addition, the use of modal test results may present difficulties for TAM accuracy even if a large number of DOF are retained in the TAM.
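Static (Guyan) reduction itself is compact enough to sketch: the omitted DOF are condensed out through the stiffness matrix so the reduced model lives only on the measured DOF. The toy example below uses a 3-DOF fixed-free spring-mass chain with unit springs and masses (not the paper's truss), retaining DOF 0 and 2:

```python
import numpy as np
from scipy.linalg import eigh

# Stiffness and mass of a 3-DOF fixed-free chain of unit springs/masses.
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
M = np.eye(3)
a, o = [0, 2], [1]                      # retained (measured) / omitted DOF

# Static condensation: omitted DOF follow the retained ones through
# u_o = -K_oo^{-1} K_oa u_a, giving the transformation u = T u_a.
T = np.zeros((3, len(a)))
T[a, range(len(a))] = 1.0
T[np.ix_(o, range(len(a)))] = -np.linalg.solve(K[np.ix_(o, o)],
                                               K[np.ix_(o, a)])

K_red = T.T @ K @ T                     # 2x2 reduced test-analysis model
M_red = T.T @ M @ T

lam_full = eigh(K, M, eigvals_only=True)[0]       # lowest full eigenvalue
lam_red = eigh(K_red, M_red, eigvals_only=True)[0]
```

Guyan reduction is exact statically but only approximate dynamically, which is why the retained-DOF choice matters when large masses sit at omitted DOF; IRS adds an inertial correction to the same transformation.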
Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Wong, Takmeng; Allan, Richard; Slingo, Anthony; Kiehl, Jeffrey T.; Soden, Brian J.; Gordon, C. T.; Miller, Alvin J.; Yang, Shi-Keng; Randall, David R.;
2001-01-01
It is widely assumed that variations in the radiative energy budget at large time and space scales are very small. We present new evidence from a compilation of over two decades of accurate satellite data that the top-of-atmosphere (TOA) tropical radiative energy budget is much more dynamic and variable than previously thought. We demonstrate that the radiation budget changes are caused by changes in tropical mean cloudiness. The results of several current climate model simulations fail to predict this large observed variation in the tropical energy budget. The missing variability in the models highlights the critical need to improve cloud modeling in the tropics to support improved prediction of tropical climate on inter-annual and decadal time scales. We believe that these data are the first rigorous demonstration of decadal time-scale changes in the Earth's tropical cloudiness, and that they represent a new and necessary test of climate models.
GRAMS: A Grid of RSG and AGB Models
NASA Astrophysics Data System (ADS)
Srinivasan, S.; Sargent, B. A.; Meixner, M.
2011-09-01
We present a grid of oxygen- and carbon-rich circumstellar dust radiative transfer models for asymptotic giant branch (AGB) and red supergiant (RSG) stars. The grid samples a large region of the relevant parameter space, and it allows for a quick calculation of bolometric fluxes and dust mass-loss rates from multi-wavelength photometry. This method of fitting observed spectral energy distributions (SEDs) is preferred over detailed radiative transfer calculations, especially for large data sets such as the SAGE (Surveying the Agents of a Galaxy's Evolution) survey of the Magellanic Clouds. The mass-loss rates calculated for SAGE data will allow us to quantify the dust returned to the interstellar medium (ISM) by the entire AGB population. The total injection rate provides an important constraint for models of galactic chemical evolution. Here, we discuss our carbon star models and compare the results to SAGE observations in the Large Magellanic Cloud (LMC).
Structure analysis for hole-nuclei close to 132Sn by a large-scale shell-model calculation
NASA Astrophysics Data System (ADS)
Wang, Han-Kui; Sun, Yang; Jin, Hua; Kaneko, Kazunari; Tazaki, Shigeru
2013-11-01
The structure of neutron-rich nuclei with a few holes with respect to the doubly magic nucleus 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including orbitals allowing both neutron and proton core excitations, an effective interaction for the extended pairing-plus-quadrupole model with monopole corrections is tested through detailed comparison between the calculations and experimental data. By using the experimental energy of the core-excited 21/2+ level in 131In as a benchmark, monopole corrections are determined that describe the size of the neutron N=82 shell gap. The level spectra, up to 5 MeV of excitation in 131In, 131Sn, 130In, 130Cd, and 130Sn, are well described and clearly explained by couplings of single-hole orbitals and by core excitations.
Integration of RAM-SCB into the Space Weather Modeling Framework
Welling, Daniel; Toth, Gabor; Jordanova, Vania Koleva; ...
2018-02-07
Numerical simulations of the ring current are a challenging endeavor. They require a large set of inputs, including electric and magnetic fields and plasma sheet fluxes. Because the ring current broadly affects the magnetosphere-ionosphere system, the input set is dependent on the ring current region itself. This makes obtaining a set of inputs that are self-consistent with the ring current difficult. To overcome this challenge, researchers have begun coupling ring current models to global models of the magnetosphere-ionosphere system. This paper describes the coupling of the Ring current Atmosphere interaction Model with Self-Consistent Magnetic field (RAM-SCB) to the models within the Space Weather Modeling Framework. Full details on both previously introduced and new coupling mechanisms are given. Finally, the impact of self-consistently including the ring current on the magnetosphere-ionosphere system is illustrated via a set of example simulations.
On the streaming model for redshift-space distortions
NASA Astrophysics Data System (ADS)
Kuruvilla, Joseph; Porciani, Cristiano
2018-06-01
The streaming model describes the mapping between real and redshift space for 2-point clustering statistics. Its key element is the probability density function (PDF) of line-of-sight pairwise peculiar velocities. Following a kinetic-theory approach, we derive the fundamental equations of the streaming model for ordered and unordered pairs. For ordered pairs we recover the classic equation, while for unordered pairs we demonstrate that modifications are necessary. We then discuss several statistical properties of the pairwise velocities for DM particles and haloes by using a suite of high-resolution N-body simulations. We test the often-used Gaussian ansatz for the PDF of pairwise velocities and discuss its limitations. Finally, we introduce a mixture of Gaussians which is known in statistics as the generalised hyperbolic distribution and show that it provides an accurate fit to the PDF. Once inserted in the streaming equation, the fit yields an excellent description of redshift-space correlations at all scales that vastly outperforms the Gaussian and exponential approximations. Using a principal-component analysis, we reduce the complexity of our model for large redshift-space separations. Our results increase the robustness of studies of anisotropic galaxy clustering and are useful for extending them towards smaller scales in order to test theories of gravity and interacting dark-energy models.
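The streaming equation with the Gaussian ansatz discussed in this abstract can be sketched numerically as 1 + ξ_s(s_⊥, s_∥) = ∫ dy [1 + ξ_r(r)] P(s_∥ − y | r). In the sketch below, the power-law correlation function and the constant velocity moments are toy assumptions chosen for illustration, not quantities taken from the paper.

```python
import numpy as np

def trapezoid(y, x):
    # Composite trapezoidal rule (avoids NumPy version differences)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def xi_real(r):
    # Toy power-law real-space correlation function (illustrative assumption)
    return (r / 5.0) ** -1.8

def gaussian_pdf(v, mean, sigma):
    return np.exp(-0.5 * ((v - mean) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def xi_redshift(s_perp, s_par, mean_v=0.0, sigma_v=3.0):
    # Streaming equation: 1 + xi_s = integral dy [1 + xi_r(r)] P(s_par - y | r),
    # here with a scale-independent Gaussian PDF of line-of-sight velocities
    y = np.linspace(-30.0, 30.0, 2001)
    r = np.sqrt(s_perp**2 + y**2)
    integrand = (1.0 + xi_real(r)) * gaussian_pdf(s_par - y, mean_v, sigma_v)
    return trapezoid(integrand, y) - 1.0
```

Swapping `gaussian_pdf` for a scale-dependent generalised hyperbolic fit is the improvement the paper proposes; the structure of the integral is unchanged.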
Crane cabins' interior space multivariate anthropometric modeling.
Essdai, Ahmed; Spasojević Brkić, Vesna K; Golubović, Tamara; Brkić, Aleksandar; Popović, Vladimir
2018-01-01
Previous research has shown that today's crane cabins fail to meet the needs of a large proportion of operators. The resulting performance and financial losses, as well as the effects on safety, should not be overlooked. The first aim of this survey is to model the crane cabin interior space using up-to-date crane operator anthropometric data and to compare the multivariate and univariate anthropometric models. The second aim of the paper is to define the crane cabin interior space dimensions that enable anthropometric convenience. To facilitate the cabin design, the anthropometric dimensions of 64 crane operators in the first sample and 19 more in the second sample were collected in Serbia. Multivariate anthropometric models spanning 95% of the population, based on a set of 8 anthropometric dimensions, have been developed. The percentile method was also applied to the same set of data. The dimensions of the interior space necessary for the accommodation of the crane operator are 1174×1080×1865 mm. The 5th- and 95th-percentile model results fall within the obtained dimensions. The results of this study may prove useful to crane cabin designers in eliminating anthropometric inconsistencies and improving the health of operators, but can also aid in improving the safety, performance and financial results of the companies where crane cabins operate.
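The contrast between univariate percentile accommodation and joint (multivariate) accommodation can be illustrated in a few lines. The data below are synthetic stand-ins for two measured operator dimensions; the means, spreads, and sample size are illustrative, not the survey's data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for two operator dimensions (mm); values are illustrative
sitting_height = rng.normal(910.0, 40.0, size=83)
shoulder_breadth = rng.normal(490.0, 30.0, size=83)

# Univariate percentile method: accommodate the 5th-95th range per dimension
h5, h95 = np.percentile(sitting_height, [5, 95])
b5, b95 = np.percentile(shoulder_breadth, [5, 95])

# Fraction of operators who fit *both* ranges at once: joint accommodation
# falls below the per-dimension ~90%, which motivates multivariate models
inside = ((sitting_height >= h5) & (sitting_height <= h95)
          & (shoulder_breadth >= b5) & (shoulder_breadth <= b95))
print(f"height range: {h5:.0f}-{h95:.0f} mm, jointly accommodated: {inside.mean():.1%}")
```

With more dimensions the shortfall compounds, which is why a multivariate model spanning 95% of the population over 8 dimensions, as in the paper, accommodates more operators than stacking univariate percentiles.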
NASA Astrophysics Data System (ADS)
Matthes, J. H.; Dietze, M.; Fox, A. M.; Goring, S. J.; McLachlan, J. S.; Moore, D. J.; Poulter, B.; Quaife, T. L.; Schaefer, K. M.; Steinkamp, J.; Williams, J. W.
2014-12-01
Interactions between ecological systems and the atmosphere are the result of dynamic processes with system memories that persist from seconds to centuries. Adequately capturing long-term biosphere-atmosphere exchange within earth system models (ESMs) requires an accurate representation of changes in plant functional types (PFTs) through time and space, particularly at timescales associated with ecological succession. However, most model parameterization and development has occurred using datasets that span less than a decade. We tested the ability of ESMs to capture the ecological dynamics observed in paleoecological and historical data spanning the last millennium. Focusing on an area from the Upper Midwest to New England, we examined differences in the magnitude and spatial pattern of PFT distributions and ecotones between historic datasets and the CMIP5 inter-comparison project's large-scale ESMs. We then conducted a 1000-year model inter-comparison using six state-of-the-art biosphere models at sites that bridged regional temperature and precipitation gradients. The distribution of ecosystem characteristics in modeled climate space reveals widely disparate relationships between modeled climate and vegetation that led to large differences in long-term biosphere-atmosphere fluxes for this region. Model simulations revealed that both the interaction between climate and vegetation and the representation of ecosystem dynamics within models were important controls on biosphere-atmosphere exchange.
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
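The core idea of projection-based model order reduction described above can be sketched on a toy parametric system: build a basis from a few full solves ("snapshots"), then solve the Galerkin-projected small system at new parameter values. The spring-mass chain, parameter dependence, and sample points below are assumptions for illustration, not the hearing aid model.

```python
import numpy as np

n = 50
K0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # spring-chain stiffness
K1 = np.diag(np.linspace(0.0, 1.0, n))                  # parametric perturbation
M = np.eye(n)
f = np.zeros(n); f[0] = 1.0

K_fn = lambda p: K0 + p * K1

def solve_full(K, omega):
    # Full-order frequency-response solve: (K - omega^2 M) x = f
    return np.linalg.solve(K - omega**2 * M, f)

# Snapshot basis from a few full solves (the offline cost that adaptive pMOR
# schemes such as the one in the paper try to reduce)
snaps = [solve_full(K_fn(p), w) for p in (0.0, 0.5, 1.0) for w in (0.1, 0.3)]
V, s, _ = np.linalg.svd(np.column_stack(snaps), full_matrices=False)
V = V[:, s > 1e-10 * s[0]]  # drop numerically redundant directions

def solve_reduced(p, omega):
    # Galerkin projection: solve the small V^T A V system, lift back with V
    A = K_fn(p) - omega**2 * M
    return V @ np.linalg.solve(V.T @ A @ V, V.T @ f)
```

A reduced solve costs a dense system of the basis size (here at most 6) instead of n, which is where the order-of-magnitude speedup quoted in the abstract comes from.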
Reconciling long-term cultural diversity and short-term collective social behavior.
Valori, Luca; Picciolo, Francesco; Allansdottir, Agnes; Garlaschelli, Diego
2012-01-24
An outstanding open problem is whether collective social phenomena occurring over short timescales can systematically reduce cultural heterogeneity in the long run, and whether offline and online human interactions contribute differently to the process. Theoretical models suggest that short-term collective behavior and long-term cultural diversity are mutually excluding, since they require very different levels of social influence. The latter jointly depends on two factors: the topology of the underlying social network and the overlap between individuals in multidimensional cultural space. However, while the empirical properties of social networks are intensively studied, little is known about the large-scale organization of real societies in cultural space, so that random input specifications are necessarily used in models. Here we use a large dataset to perform a high-dimensional analysis of the scientific beliefs of thousands of Europeans. We find that interopinion correlations determine a nontrivial ultrametric hierarchy of individuals in cultural space. When empirical data are used as inputs in models, ultrametricity has strong and counterintuitive effects. On short timescales, it facilitates a symmetry-breaking phase transition triggering coordinated social behavior. On long timescales, it suppresses cultural convergence by restricting it within disjoint groups. Moreover, ultrametricity implies that these results are surprisingly robust to modifications of the dynamical rules considered. Thus the empirical distribution of individuals in cultural space appears to systematically optimize the coexistence of short-term collective behavior and long-term cultural diversity, which can be realized simultaneously for the same moderate level of mutual influence in a diverse range of online and offline settings.
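The ultrametric hierarchy mentioned above can be illustrated with average-linkage hierarchical clustering, whose cophenetic distances form an ultrametric by construction (they satisfy the strong triangle inequality d(a,c) ≤ max(d(a,b), d(b,c))). The opinion vectors here are random stand-ins, not the Eurobarometer-style survey data used in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
# Random stand-in for high-dimensional opinion vectors (illustrative only)
opinions = rng.integers(0, 5, size=(40, 20)).astype(float)

d = pdist(opinions, metric="cityblock")
Z = linkage(d, method="average")
coph = squareform(cophenet(Z, d)[1])  # cophenetic distances form an ultrametric

# Verify the strong triangle inequality d(a,c) <= max(d(a,b), d(b,c))
n = coph.shape[0]
ok = all(coph[a, c] <= max(coph[a, b], coph[b, c]) + 1e-9
         for a in range(n) for b in range(n) for c in range(n))
print("ultrametric:", ok)
```

In the paper's setting, the empirical inter-opinion correlations play the role of the distance matrix, and the resulting ultrametric organization of individuals is what drives both the symmetry-breaking transition and the suppression of cultural convergence.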
Energy Efficient Engine acoustic supporting technology report
NASA Technical Reports Server (NTRS)
Lavin, S. P.; Ho, P. Y.
1985-01-01
The acoustic development of the Energy Efficient Engine combined testing and analysis using scale model rigs and an integrated Core/Low Spool demonstration engine. The scale model tests show that a cut-on blade/vane ratio fan with a large spacing (S/C = 2.3) is as quiet as a cut-off blade/vane ratio with a tighter spacing (S/C = 1.27). Scale model mixer tests show that separate flow nozzles are the noisiest, conic nozzles the quietest, with forced mixers in between. Based on projections of ICLS data the Energy Efficient Engine (E3) has FAR 36 margins of 3.7 EPNdB at approach, 4.5 EPNdB at full power takeoff, and 7.2 EPNdB at sideline conditions.
A system level model for preliminary design of a space propulsion solid rocket motor
NASA Astrophysics Data System (ADS)
Schumacher, Daniel M.
Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near optimal performance of subsystems and components. However, there is no system level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than the most favorable design. Classical optimization is an extremely challenging method when dealing with the complex behavior of an integrated system. The complexity and combinations of system configurations make the number of design parameters to be traded off unmanageable when manual techniques are used. Existing multi-disciplinary optimization approaches generally address estimating ratios and correlations rather than utilizing mathematical models. The developed system level model utilizes the Genetic Algorithm to perform the necessary population searches to efficiently replace the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near optimal design, are achievable. The process of developing the motor performance estimate and the system level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints towards the pursuit of the best possible design.
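The population search described above can be sketched as a minimal genetic algorithm. The utility function below is a toy stand-in with a known optimum; the operator choices (uniform crossover, Gaussian mutation, elitism) and all parameter values are illustrative assumptions, not the dissertation's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

def utility(designs):
    # Stand-in for the motor utility model: the true objective would score
    # performance, mass, cost, etc.; here the optimum is known to sit at 0.7
    return -np.sum((designs - 0.7) ** 2, axis=1)

def genetic_search(n_params=6, pop_size=60, generations=80,
                   elite=10, mutation_sigma=0.05):
    pop = rng.random((pop_size, n_params))            # designs scaled to [0, 1]
    for _ in range(generations):
        order = np.argsort(utility(pop))[::-1]
        parents = pop[order[:elite]]                  # selection: keep the elite
        # uniform crossover between random parent pairs, then Gaussian mutation
        pairs = rng.integers(0, elite, size=(pop_size, 2))
        mask = rng.random((pop_size, n_params)) < 0.5
        children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
        children += rng.normal(0.0, mutation_sigma, children.shape)
        pop = np.clip(children, 0.0, 1.0)
        pop[0] = parents[0]                           # elitism: best design survives
    return pop[np.argmax(utility(pop))]

best = genetic_search()
```

Because each generation evaluates the whole population and only ranks designs, the approach tolerates the large, mixed parameter sets and constraints that make manual trade studies unmanageable.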
Low energy dipole strength from large scale shell model calculations
NASA Astrophysics Data System (ADS)
Sieja, Kamila
2017-09-01
Low energy enhancement of radiative strength functions has been deduced from experiments in several mass regions of nuclei. Such an enhancement is believed to impact the calculated neutron capture rates, which are crucial input for reaction rates of astrophysical interest. Recently, shell model calculations have been performed to explain the upbend of the γ-strength as due to M1 transitions between close-lying states in the quasi-continuum in Fe and Mo nuclei. Beyond-mean-field calculations in Mo suggested, however, a non-negligible role of the electric dipole in the low energy enhancement. So far, no calculations of both dipole components within the same theoretical framework have been presented in this context. In this work we present a newly developed large scale shell model approach that allows natural and non-natural parity states to be treated on the same footing. The calculations are performed in a large sd-pf-gds model space, allowing for 1p-1h excitations on top of the full pf-shell configuration mixing. We restrict the discussion to the magnetic part of the dipole strength; however, we calculate for the first time the magnetic dipole strength between states built of excitations going beyond the classical shell model spaces. Our results corroborate previous findings for the M1 enhancement for the natural parity states, while we observe no enhancement for the 1p-1h contributions. We also discuss in more detail the effects of configuration mixing limitations on the enhancement obtained from shell model calculations.
Calbindins decreased after space flight
NASA Technical Reports Server (NTRS)
Sergeev, I. N.; Rhoten, W. B.; Carney, M. D.
1996-01-01
Exposure of the body to microgravity during space flight causes a series of well-documented changes in Ca2+ metabolism, yet the cellular and molecular mechanisms leading to these changes are poorly understood. Calbindins, vitamin D-dependent Ca2+ binding proteins, are believed to have a significant role in maintaining cellular Ca2+ homeostasis. In this study, we used biochemical and immunocytochemical approaches to analyze the expression of calbindin-D28k and calbindin-D9k in kidneys, small intestine, and pancreas of rats flown for 9 d aboard the space shuttle. The effects of microgravity on calbindins in rats from space were compared with synchronous Animal Enclosure Module controls, modeled weightlessness animals (tail suspension), and their controls. Exposure to microgravity resulted in a significant and sustained decrease in calbindin-D28k content in the kidney and calbindin-D9k in the small intestine of flight animals, as measured by enzyme-linked immunosorbent assay (ELISA). Modeled weightlessness animals exhibited a similar decrease in calbindins by ELISA. Immunocytochemistry (ICC) in combination with quantitative computer image analysis was used to measure in situ the expression of calbindins in the kidney and the small intestine, and the expression of insulin in pancreas. There was a large decrease of immunoreactivity in renal distal tubular cell-associated calbindin-D28k and in intestinal absorptive cell-associated calbindin-D9k of space flight and modeled weightlessness animals compared with matched controls. No consistent difference in pancreatic insulin immunoreactivity between space flight, modeled weightlessness, and controls was observed. Regression analysis of results obtained by quantitative ICC and ELISA for space flight, modeled weightlessness animals, and their controls demonstrated a significant correlation. 
These findings after a short-term exposure to microgravity or modeled weightlessness suggest that a decreased expression of calbindins may contribute to the disorders of Ca2+ metabolism induced by space flight.
Exploring the hyperchargeless Higgs triplet model up to the Planck scale
NASA Astrophysics Data System (ADS)
Khan, Najimuddin
2018-04-01
We examine an extension of the SM Higgs sector by a Higgs triplet taking into consideration the discovery of a Higgs-like particle at the LHC with mass around 125 GeV. We evaluate the bounds on the scalar potential through the unitarity of the scattering matrix. Considering the cases with and without Z_2-symmetry of the extra triplet, we derive constraints on the parameter space. We identify the region of the parameter space that corresponds to the stability and metastability of the electroweak vacuum. We also show that at large field values the scalar potential of this model is suitable to explain inflation.
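The stability and metastability regions mentioned above are determined by running the Higgs quartic coupling to high scales. A heavily simplified one-loop sketch, keeping only the quartic, top-Yukawa, and QCD terms and ignoring the triplet and electroweak gauge contributions entirely, already shows the quartic turning negative below the Planck scale; the boundary values are rough SM inputs, and the result is qualitative only.

```python
import numpy as np

def instability_scale(lam=0.126, yt=0.94, g3=1.166,
                      mt=173.0, mpl=1.2e19, steps=20000):
    # Euler-integrate toy one-loop RGEs upward from the top mass and return
    # the scale (GeV) where the quartic coupling first turns negative.
    # These are truncated beta functions: electroweak gauge and triplet
    # contributions are dropped for brevity.
    t_max = np.log(mpl / mt)
    dt = t_max / steps
    loop = 1.0 / (16 * np.pi**2)
    for i in range(steps):
        b_lam = loop * (24 * lam**2 + 12 * lam * yt**2 - 6 * yt**4)
        b_yt = loop * (4.5 * yt**3 - 8 * g3**2 * yt)
        b_g3 = -loop * 7 * g3**3
        lam, yt, g3 = lam + dt * b_lam, yt + dt * b_yt, g3 + dt * b_g3
        if lam < 0:
            return mt * np.exp((i + 1) * dt)
    return None  # quartic stayed positive up to the Planck scale

print(instability_scale())
```

In the full analysis the triplet couplings modify the running and can lift the quartic, which is how the extended parameter space accommodates absolute stability up to the Planck scale.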
Space shuttle: Aerodynamic heating tests of the MDAC delta wing orbiter and canard booster
NASA Technical Reports Server (NTRS)
Andresen, T. L.
1972-01-01
Design of an efficient thermal protection system for the space shuttle orbiter and booster is discussed, based on knowledge of the thermal environment to be experienced by the vehicles in all flight phases. The complex configurations of these vehicles limit the level of confidence which can be associated with purely analytical thermal environment predictions. Tests were conducted during April and May 1971 using an orbiter and booster model at a 96-in. hypersonic shock tunnel. Both models were tested separately as well as together. A sufficiently large range in Reynolds number was covered so that laminar, transitional, and turbulent data could be obtained.
Radiation protection for manned space activities
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1983-01-01
The Earth's natural radiation environment poses a hazard to manned space activities, directly through biological effects and indirectly through effects on materials and electronics. Standard practices are indicated that address: (1) environment models for all radiation species, including uncertainties and temporal variations; (2) upper-bound and nominal quality factors for biological radiation effects that include dose, dose rate, critical organ, and linear energy transfer variations; (3) particle transport and shielding methodology, including system and man modeling and uncertainty analysis; and (4) mission planning that includes active dosimetry, minimizes exposure during extravehicular activities, subjects every mission to a radiation review, and specifies operational procedures for forecasting, recognizing, and dealing with large solar flares.