Shape control of large space structures
NASA Technical Reports Server (NTRS)
Hagan, M. T.
1982-01-01
A survey has been conducted to determine the types of control strategies which have been proposed for controlling the vibrations in large space structures. From this survey several representative control strategies were singled out for detailed analyses. The application of these strategies to a simplified model of a large space structure has been simulated. These simulations demonstrate the implementation of the control algorithms and provide a basis for a preliminary comparison of their suitability for large space structure control.
Indian LSSC (Large Space Simulation Chamber) facility
NASA Technical Reports Server (NTRS)
Brar, A. S.; Prasadarao, V. S.; Gambhir, R. D.; Chandramouli, M.
1988-01-01
The Indian Space Agency has undertaken a major project to acquire in-house capability for thermal and vacuum testing of large satellites. This Large Space Simulation Chamber (LSSC) facility will be located in Bangalore and is to be operational in 1989. The facility is capable of providing 4 meter diameter solar simulation with provision to expand to 4.5 meter diameter at a later date. With such provisions as controlled variations of shroud temperatures and availability of infrared equipment as alternative sources of thermal radiation, this facility will be amongst the finest anywhere. The major design concept and major aspects of the LSSC facility are presented here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Xiexiaomen; Tutuncu, Azra; Eustes, Alfred
Enhanced Geothermal Systems (EGS) could potentially use technological advancements in coupled implementation of horizontal drilling and multistage hydraulic fracturing techniques in tight oil and shale gas reservoirs, along with improvements in reservoir simulation techniques, to design and create EGS reservoirs. In this study, a commercial hydraulic fracture simulation package, Mangrove by Schlumberger, was used in an EGS model with largely distributed pre-existing natural fractures to model fracture propagation during the creation of a complex fracture network. The main goal of this study is to investigate optimum treatment parameters in creating multiple large, planar fractures to hydraulically connect a horizontal injection well and a horizontal production well that are 10,000 ft. deep and spaced 500 ft. apart from each other. A matrix of simulations for this study was carried out to determine the influence of reservoir and treatment parameters on preventing (or aiding) the creation of large planar fractures. The reservoir parameters investigated during the matrix simulations include the in-situ stress state and properties of the natural fracture set such as the primary and secondary fracture orientation, average fracture length, and average fracture spacing. The treatment parameters investigated during the simulations were fluid viscosity, proppant concentration, pump rate, and pump volume. A final simulation with optimized design parameters was performed. The optimized design simulation indicated that high fluid viscosity, high proppant concentration, large pump volume and pump rate tend to minimize the complexity of the created fracture network. Additionally, a reservoir with 'friendly' formation characteristics such as large stress anisotropy, natural fractures set parallel to the maximum horizontal principal stress (SHmax), and large natural fracture spacing also promotes the creation of large planar fractures while minimizing fracture complexity.
Use of cryopumps on large space simulation systems
NASA Technical Reports Server (NTRS)
Mccrary, L. E.
1980-01-01
The need for clean, oil-free space simulation systems has driven the development of large, clean pumping systems. Optically dense liquid nitrogen baffles over diffusion pumps prevent backstreaming to a large extent, but do not preclude contamination from accidents or a control failure. Turbomolecular pumps or ion pumps achieve oil-free systems but are practical only for relatively small chambers. Large cryopumps that do achieve clean pumping of very large chambers were developed and checked out. These pumps can be used as the original pumping system or can be retrofitted as a replacement for existing diffusion pumps.
Modeling space-time correlations of velocity fluctuations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2018-07-01
An analytical model for the streamwise velocity space-time correlations in turbulent flows is derived and applied to the special case of velocity fluctuations in large wind farms. The model is based on the Kraichnan-Tennekes random sweeping hypothesis, capturing the decorrelation in time while including a mean wind velocity in the streamwise direction. In the resulting model, the streamwise velocity space-time correlation is expressed as a convolution of the pure space correlation with an analytical temporal decorrelation kernel. Hence, the spatio-temporal structure of velocity fluctuations in wind farms can be derived from the spatial correlations only. We then explore the applicability of the model to predict spatio-temporal correlations in turbulent flows in wind farms. Comparisons of the model with data from a large eddy simulation of flow in a large, spatially periodic wind farm are performed, where needed model parameters such as spatial and temporal integral scales and spatial correlations are determined from the large eddy simulation. Good agreement is obtained between the model and large eddy simulation data showing that spatial data may be used to model the full temporal structure of fluctuations in wind farms.
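In symbols, the structure of such a model can be sketched as a Gaussian random-sweeping convolution. The notation below is assumed here rather than taken from the abstract: R_uu is the streamwise space-time correlation, U the mean convection velocity, and v the r.m.s. sweeping velocity.

```latex
% Hedged sketch of the random-sweeping form described above:
% the space-time correlation is the pure space correlation convolved
% with a Gaussian temporal decorrelation kernel, shifted by U*tau.
\begin{equation}
  R_{uu}(r,\tau) \;\approx\; \int_{-\infty}^{\infty}
    R_{uu}\!\left(r - U\tau - s,\, 0\right)\,
    \frac{1}{\sqrt{2\pi v^{2}\tau^{2}}}
    \exp\!\left(-\frac{s^{2}}{2\,v^{2}\tau^{2}}\right)\, ds .
\end{equation}
```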
EVA assembly of large space structure element
NASA Technical Reports Server (NTRS)
Bement, L. J.; Bush, H. G.; Heard, W. L., Jr.; Stokes, J. W., Jr.
1981-01-01
The results of a test program to assess the potential of manned extravehicular activity (EVA) assembly of erectable space trusses are described. Seventeen tests were conducted in which six "space-weight" columns were assembled into a regular tetrahedral cell by a team of two "space"-suited test subjects. This cell represents the fundamental "element" of a tetrahedral truss structure. The tests were conducted under simulated zero-gravity conditions. Both manual and simulated remote manipulator system modes were evaluated. Articulation limits of the pressure suit and zero gravity could be accommodated by work stations with foot restraints. The results of this study have confirmed that astronaut EVA assembly of large, erectable space structures is well within man's capabilities.
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
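The S-NDF concept can be sketched as per-pixel binning of super-sampled normals, after which re-lighting reduces to a weighted sum over bins with no re-rendering of particles. The resolutions, azimuthal binning, and lighting term below are illustrative assumptions, not the paper's implementation:

```python
# Hedged sketch: super-sampled surface normals that land in one output
# pixel are binned into a small per-pixel histogram (the S-NDF), which
# can then be re-lit without touching the particle data again.
import numpy as np

SS = 4                      # super-samples per pixel axis
H, W, BINS = 2, 2, 8        # tiny output image, 8 azimuthal normal bins

rng = np.random.default_rng(0)
n = rng.normal(size=(H * SS, W * SS, 3))
n /= np.linalg.norm(n, axis=-1, keepdims=True)       # unit normals per sample

# Bin each sample's normal by azimuth; accumulate per output pixel.
azimuth = np.arctan2(n[..., 1], n[..., 0])
bins = ((azimuth + np.pi) / (2 * np.pi) * BINS).astype(int) % BINS
sndf = np.zeros((H, W, BINS))
for i in range(H * SS):
    for j in range(W * SS):
        sndf[i // SS, j // SS, bins[i, j]] += 1
sndf /= SS * SS                                      # normalized S-NDF

# Re-lighting = weighting bin centers by a BRDF/light term, no re-render:
light = np.cos(np.linspace(-np.pi, np.pi, BINS, endpoint=False)) ** 2
print((sndf * light).sum(axis=-1))                   # shaded 2x2 image
```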
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
State-space reduction and equivalence class sampling for a molecular self-assembly model.
Packwood, Daniel M; Han, Patrick; Hitosugi, Taro
2016-07-01
Direct simulation of a model with a large state space will generate enormous volumes of data, much of which is not relevant to the questions under study. In this paper, we consider a molecular self-assembly model as a typical example of a large state-space model, and present a method for selectively retrieving 'target information' from this model. This method partitions the state space into equivalence classes, as identified by an appropriate equivalence relation. The set of equivalence classes H, which serves as a reduced state space, contains none of the superfluous information of the original model. After construction and characterization of a Markov chain with state space H, the target information is efficiently retrieved via Markov chain Monte Carlo sampling. This approach represents a new breed of simulation techniques which are highly optimized for studying molecular self-assembly and, moreover, serves as a valuable guideline for analysis of other large state-space models.
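A toy illustration of the reduced-state-space idea, assuming a 1-D lattice self-assembly model and a translation equivalence relation (both placeholders, not the paper's model): states are partitioned into classes and a Metropolis walk samples the class space H directly:

```python
# Hedged sketch: Metropolis sampling over a reduced state space of
# equivalence classes. The lattice model, equivalence relation, and
# energies are illustrative placeholders, not the paper's actual model.
import itertools
import math
import random
from collections import defaultdict

def equivalence_key(state):
    # Example relation: states equivalent up to translation, so key on
    # the occupancy pattern relative to the first occupied site.
    occupied = sorted(i for i, s in enumerate(state) if s)
    if not occupied:
        return ()
    return tuple(i - occupied[0] for i in occupied)

# Enumerate a small toy state space: molecules on a 1-D lattice of 8 sites.
states = list(itertools.product([0, 1], repeat=8))
classes = defaultdict(list)
for st in states:
    classes[equivalence_key(st)].append(st)
class_keys = list(classes)

def energy(key):
    # Placeholder energy: favor adjacent (assembled) molecules.
    return -sum(1 for a, b in zip(key, key[1:]) if b - a == 1)

# Metropolis walk on the reduced space H of equivalence classes.
beta, current = 1.0, random.choice(class_keys)
counts = defaultdict(int)
for _ in range(20000):
    proposal = random.choice(class_keys)
    if random.random() < math.exp(-beta * (energy(proposal) - energy(current))):
        current = proposal
    counts[current] += 1
print(max(counts, key=counts.get))  # most-visited equivalence class
```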
NASA Technical Reports Server (NTRS)
Early, Derrick A.; Haile, William B.; Turczyn, Mark T.; Griffin, Thomas J. (Technical Monitor)
2001-01-01
NASA Goddard Space Flight Center and the European Space Agency (ESA) conducted a disturbance verification test on a flight Solar Array 3 (SA3) for the Hubble Space Telescope using the ESA Large Space Simulator (LSS) in Noordwijk, the Netherlands. The LSS cyclically illuminated the SA3 to simulate orbital temperature changes in a vacuum environment. Data acquisition systems measured signals from force transducers and accelerometers resulting from thermally induced vibrations of the SA3. The LSS with its seismic mass boundary provided an excellent background environment for this test. This paper discusses the analysis performed on the measured transient SA3 responses and provides a summary of the results.
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Sunkel, John W.
1990-01-01
An attitude-control and momentum-management (ACMM) system for the Space Station in a large-angle torque-equilibrium-attitude (TEA) configuration is developed analytically and demonstrated by means of numerical simulations. The equations of motion for a rigid-body Space Station model are outlined; linearized equations for an arbitrary TEA (resulting from misalignment of control and body axes) are derived; the general requirements for an ACMM are summarized; and a pole-placement linear-quadratic regulator solution based on scheduled gains is proposed. Results are presented in graphs for (1) simulations based on configuration MB3 (showing the importance of accounting for the cross-inertia terms in the TEA estimate) and (2) simulations of a stepwise change from configuration MB3 to the 'assembly complete' stage over 130 orbits (indicating that the present ACMM scheme maintains sufficient control over slowly varying Space Station dynamics).
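A hedged single-axis sketch of gain scheduling with an LQR design of the kind described; the paper's pole-placement LQR and Space Station model are not reproduced here, and the plant, inertias, and weights are illustrative assumptions:

```python
# Hedged sketch: gains are re-solved for a slowly varying inertia, as in
# a gain-scheduled LQR. The 2-state single-axis model is illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    # Solve the continuous algebraic Riccati equation, then K = R^-1 B' P.
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

def plant(inertia):
    # Single-axis rigid-body model: x = [attitude error, rate].
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0 / inertia]])
    return A, B

Q = np.diag([10.0, 1.0])   # penalize attitude error and rate
R = np.array([[0.1]])      # penalize control torque

# Schedule gains over slowly varying inertia during assembly (notional values).
for inertia in (5.0e6, 7.5e6, 1.0e7):   # kg m^2
    K = lqr_gain(*plant(inertia), Q, R)
    print(f"I = {inertia:.1e}: K = {K.ravel()}")
```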
NASA Technical Reports Server (NTRS)
Buchanan, H. J.
1983-01-01
Work performed in the Large Space Structures Controls research and development program at Marshall Space Flight Center is described. Studies to develop a multilevel control approach that supports a modular, building-block approach to the buildup of space platforms are discussed. A concept has been developed and tested in a three-axis computer simulation utilizing a five-body model of a basic space platform module. Analytical efforts have continued to focus on extension of the basic theory and its subsequent application. Consideration is also given to specifications for evaluating several algorithms for controlling the shape of Large Space Structures.
A TREETOPS simulation of the Hubble Space Telescope-High Gain Antenna interaction
NASA Technical Reports Server (NTRS)
Sharkey, John P.
1987-01-01
Virtually any project dealing with the control of a Large Space Structure (LSS) will involve some level of verification by digital computer simulation. While the Hubble Space Telescope might not normally be included in a discussion of LSS, it is presented to highlight a recently developed simulation and analysis program named TREETOPS. TREETOPS provides digital simulation, linearization, and control system interaction of flexible, multibody spacecraft which admit to a point-connected tree topology. The HST application of TREETOPS is intended to familiarize the LSS community with TREETOPS by presenting a user perspective of its key features.
Mathematical modeling and simulation of the space shuttle imaging radar antennas
NASA Technical Reports Server (NTRS)
Campbell, R. W.; Melick, K. E.; Coffey, E. L., III
1978-01-01
Simulations of space shuttle synthetic aperture radar antennas under the influence of space environmental conditions were carried out at L, C, and X-band. Mathematical difficulties in modeling large, non-planar array antennas are discussed, and an approximate modeling technique is presented. Results for several antenna error conditions are illustrated in far-field profile patterns, earth surface footprint contours, and summary graphs.
The future of simulations for space applications
NASA Astrophysics Data System (ADS)
Matsumoto, H.
Space development has been increasing rapidly, and huge investment by commercial markets is expected for space development and applications such as space factories and the Solar Power Station (SPS). In this situation, we would like to send a warning message regarding the future of space simulations. It is widely recognized that space simulations have contributed to the quantitative understanding of various plasma phenomena occurring in the solar-terrestrial environment. In the current century, however, in addition to the conventional contributions to solar-terrestrial physics, we must also pay attention to the application of space simulations to human activities in space. We believe that space simulations can be a powerful and helpful tool for understanding the spacecraft-environment interactions occurring in space development and applications. The global influence of heavy ions exhausted from electric propulsion on the plasmasphere can also be analyzed by a combination of MHD and particle simulations. The results obtained from such simulations can provide very significant and beneficial information so that we can minimize undesirable effects in space development and applications. The presentation also includes a brief history of the International School for Space Simulation (ISSS) and its contributions to space plasma physics. Numerical simulation has been widely recognized as a powerful tool in the advance of space plasma physics, and the ISSS series was set up to emphasize this recognition in the early eighties, on the common initiative of M. Ashour-Abdalla, R. Gendrin, T. Sato and myself. The preceding five ISSS meetings (in Japan, USA, France, Japan, and Japan again) have greatly contributed to the promotion and advance of computer simulations, as well as to the education of students starting simulation studies for their own research objectives.
NASA Astrophysics Data System (ADS)
Simpson, R.; Broussely, M.; Edwards, G.; Robinson, D.; Cozzani, A.; Casarosa, G.
2012-07-01
The National Physical Laboratory (NPL) and The European Space Research and Technology Centre (ESTEC) have performed for the first time successful surface temperature measurements using infrared thermal imaging in the ESTEC Large Space Simulator (LSS) under vacuum and with the Sun Simulator (SUSI) switched on during thermal qualification tests of the GAIA Deployable Sunshield Assembly (DSA). The thermal imager temperature measurements, with radiosity model corrections, show good agreement with thermocouple readings on well characterised regions of the spacecraft. In addition, the thermal imaging measurements identified potentially misleading thermocouple temperature readings and provided qualitative real-time observations of the thermal and spatial evolution of surface structure changes and heat dissipation during hot test loadings, which may yield additional thermal and physical measurement information through further research.
Manufacture of Cryoshroud Surfaces for Space Simulation Chambers
NASA Technical Reports Server (NTRS)
Ash, Gary S.
2008-01-01
Environmental test chambers for space applications use internal shrouds to simulate temperature conditions encountered in space. Shroud temperatures may range from +150 C to -253 C (20 K), and internal surfaces are coated with special high emissivity/absorptivity paints. To obtain temperature uniformity over large areas, detailed thermal design is required for placement of tubing for gaseous or liquid nitrogen and helium and other exotic heat exchange fluids. The recent increase in space simulation activity related to the James Webb Space Telescope has led to the design of new cryogenic shrouds to meet critical needs in instrument package testing. This paper will review the design and manufacturing of shroud surfaces for several of these programs, including fabrication methods and the selection and application of paints for simulation chambers.
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems. It has altered the time and space distribution of watershed hydrological processes, especially in the world's large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a very large amount of calculation, especially in large rivers, and thus requires huge computing resources that may not be steadily available to researchers or may come at high expense; this has seriously restricted research and application. Current parallel methods mostly parallelize the computation in the space and time dimensions, calculating the natural features of the distributed hydrological model grid by grid (unit by unit, basin by basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility: it makes full use of the available computing and storage resources under the condition of limited computing resources, and its computing efficiency improves linearly with the increase of computing resources. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
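The upstream-to-downstream parallelism described above can be sketched as dependency-aware scheduling: a sub-basin runs as soon as all of its upstream neighbors have finished, so independent headwaters run concurrently. The basin graph and executor choice below are illustrative assumptions, not the paper's system:

```python
# Hedged sketch: schedule sub-basin simulations in parallel while
# respecting upstream-to-downstream dependencies.
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

# Toy basin graph: C needs A and B; D needs C; A, B, E are headwaters.
upstream = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"], "E": []}

def simulate(basin):
    # ... run the distributed hydrological model for one sub-basin ...
    return basin

done, running, order = set(), {}, []
with ThreadPoolExecutor(max_workers=4) as pool:
    while len(done) < len(upstream):
        for b, ups in upstream.items():
            if b not in done and b not in running and all(u in done for u in ups):
                running[b] = pool.submit(simulate, b)   # all inputs are ready
        finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
        for b in [b for b, f in running.items() if f in finished]:
            done.add(b)
            order.append(b)
            del running[b]
print("completion order respects upstream dependencies:", order)
```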
Gravity and thermal deformation of large primary mirror in space telescope
NASA Astrophysics Data System (ADS)
Wang, Xin; Jiang, Shouwang; Wan, Jinlong; Shu, Rong
2016-10-01
Integrating mechanical FEA analysis with optical performance estimation is essential for simulating the gravity deformation of a large primary mirror and the thermal deformation, whether static or due to temperature gradients, of the optical structure. We present the simulation results of FEA analysis, data processing, and image performance. Three kinds of support structure for the large primary mirror (center holding, edge glue fixation, and back support) are designed and compared to obtain the optimal gravity deformation. Candidate mirror materials, Zerodur and SiC, are chosen and analyzed to obtain small thermal gradient distortion. The simulation accuracy depends on the FEA mesh quality, the load definition of the structure, and the fitting error from discrete data to a smooth surface. A primary mirror with 1 m diameter is designed as an example. The appropriate structural material to match the mirror, the central supporting structure, and the key aspects of the FEA simulation are optimized for space application.
Definition of ground test for verification of large space structure control
NASA Technical Reports Server (NTRS)
Doane, G. B., III; Glaese, J. R.; Tollison, D. K.; Howsman, T. G.; Curtis, S. (Editor); Banks, B.
1984-01-01
Control theory and design, dynamic system modelling, and simulation of test scenarios are the main ideas discussed. The overall effort is the achievement at Marshall Space Flight Center of a successful ground test experiment of a large space structure. A simplified planar model for ground test verification was developed, and the elimination from that model of the uncontrollable rigid body modes was examined. Also studied were the hardware and software aspects of computation speed.
A Novel Simulation Technician Laboratory Design: Results of a Survey-Based Study
Hughes, Patrick G; Friedl, Ed; Ortiz Figueroa, Fabiana; Cepeda Brito, Jose R; Frey, Jennifer; Birmingham, Lauren E; Atkinson, Steven Scott
2016-01-01
Objective: The purpose of this study was to elicit feedback from simulation technicians prior to developing the first simulation technician-specific simulation laboratory in Akron, OH. Background: Simulation technicians serve a vital role in simulation centers within hospitals/health centers around the world. The first simulation technician degree program in the US has been approved in Akron, OH. To satisfy the requirements of this program and to meet the needs of this special audience of learners, a customized simulation lab is essential. Method: A web-based survey was circulated to simulation technicians prior to completion of the lab for the new program. The survey consisted of questions aimed at identifying structural and functional design elements of a novel simulation center for the training of simulation technicians. Quantitative methods were utilized to analyze data. Results: Over 90% of technicians (n=65) think that a lab designed explicitly for the training of technicians is novel and beneficial. Approximately 75% of respondents think that the space provided appropriate audiovisual (AV) infrastructure and space to evaluate the ability of technicians to be independent. The respondents think that the lab needed more storage space, visualization space for a large number of students, and more space in the technical/repair area. Conclusions: A space designed for the training of simulation technicians was considered to be beneficial. This laboratory requires distinct space for technical repair, adequate bench space for the maintenance and repair of simulators, an appropriate AV infrastructure, and space to evaluate the ability of technicians to be independent. PMID:27096134
A Novel Simulation Technician Laboratory Design: Results of a Survey-Based Study.
Ahmed, Rami; Hughes, Patrick G; Friedl, Ed; Ortiz Figueroa, Fabiana; Cepeda Brito, Jose R; Frey, Jennifer; Birmingham, Lauren E; Atkinson, Steven Scott
2016-03-16
OBJECTIVE: The purpose of this study was to elicit feedback from simulation technicians prior to developing the first simulation technician-specific simulation laboratory in Akron, OH. Simulation technicians serve a vital role in simulation centers within hospitals/health centers around the world. The first simulation technician degree program in the US has been approved in Akron, OH. To satisfy the requirements of this program and to meet the needs of this special audience of learners, a customized simulation lab is essential. A web-based survey was circulated to simulation technicians prior to completion of the lab for the new program. The survey consisted of questions aimed at identifying structural and functional design elements of a novel simulation center for the training of simulation technicians. Quantitative methods were utilized to analyze data. Over 90% of technicians (n=65) think that a lab designed explicitly for the training of technicians is novel and beneficial. Approximately 75% of respondents think that the space provided appropriate audiovisual (AV) infrastructure and space to evaluate the ability of technicians to be independent. The respondents think that the lab needed more storage space, visualization space for a large number of students, and more space in the technical/repair area. CONCLUSIONS: A space designed for the training of simulation technicians was considered to be beneficial. This laboratory requires distinct space for technical repair, adequate bench space for the maintenance and repair of simulators, an appropriate AV infrastructure, and space to evaluate the ability of technicians to be independent.
Time simulation of flutter with large stiffness changes
NASA Technical Reports Server (NTRS)
Karpel, M.; Wieseman, C. D.
1992-01-01
Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
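A minimal sketch of the simulation strategy described above: integrate a small modal state-space model and, at the decoupling event, change only the coupling terms. The two-mode system and all numbers are illustrative assumptions; since no aerodynamic model is included, negative modal damping stands in for the unstable aeroelastic coupling:

```python
# Hedged sketch: piecewise state-space time simulation in which only the
# coupling terms of the system matrix change at the structural event.
import numpy as np

def system_matrix(M, K, C):
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-Minv @ K, -Minv @ C]])

M = np.eye(2)
K = np.array([[1.0, -0.3], [-0.3, 2.0]])
C_flutter = np.diag([-0.01, 0.02])  # negative damping mimics the instability
C_stable = np.diag([0.08, 0.02])    # coupling terms after tip-ballast decoupling

x = np.array([1e-3, 0.0, 0.0, 0.0])
dt, t_switch, history = 1e-3, 30.0, []
A = system_matrix(M, K, C_flutter)
for step in range(int(60.0 / dt)):
    if abs(step * dt - t_switch) < dt / 2:
        A = system_matrix(M, K, C_stable)   # only the coupling terms change
    x = x + dt * (A @ x)                    # simple explicit Euler step
    history.append(abs(x[0]))
print(f"amplitude at switch: {max(history[:int(t_switch / dt)]):.2e}, "
      f"final: {history[-1]:.2e}")
```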
Time simulation of flutter with large stiffness changes
NASA Technical Reports Server (NTRS)
Karpel, Mordechay; Wieseman, Carol D.
1992-01-01
Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
Simulation study of interactions of Space Shuttle-generated electron beams with ambient plasmas
NASA Technical Reports Server (NTRS)
Lin, Chin S.
1992-01-01
This report summarizes results obtained through the support of NASA Grant NAGW-1936. The objective of this report is to conduct large scale simulations of electron beams injected into space. The topics covered include the following: (1) simulation of radial expansion of an injected electron beam; (2) simulations of the active injections of electron beams; (3) parameter study of electron beam injection into an ionospheric plasma; and (4) magnetosheath-ionospheric plasma interactions in the cusp.
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
Two key areas of crucial importance to the computer-based simulation of large space structures are discussed. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.
Just-in-time connectivity for large spiking networks.
Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L
2008-11-01
The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
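The regeneration and self-callback ideas translate directly into a short sketch: connectivity is recomputed from a cell-id-seeded RNG at each spike, and only one queue entry per spike is alive at a time. This is a hedged illustration of the JitCon/JitEvent strategy, not NEURON's implementation; all network parameters are assumptions:

```python
# Hedged sketch: just-in-time connectivity plus just-in-time event queue.
# Synapses are regenerated deterministically from the presynaptic cell id,
# so nothing per-synapse is stored; each spike keeps exactly one callback
# event on the queue at a time.
import heapq
import random

N_CELLS, FANOUT = 1000, 20

def synapses(pre):
    # Same presynaptic cell -> same targets, delays, and weights each call.
    rng = random.Random(pre)                       # seed on cell id
    targets = rng.sample(range(N_CELLS), FANOUT)
    return sorted((rng.uniform(1.0, 5.0), t, rng.gauss(0.5, 0.1))
                  for t in targets)                # sorted by delay (ms)

queue = []   # entries: (time, pre_cell, index into delay-sorted synapses)

def fire(t, pre):
    delay, _, _ = synapses(pre)[0]
    heapq.heappush(queue, (t + delay, pre, 0))     # one callback per spike

def deliver(event):
    t, pre, i = event
    syns = synapses(pre)
    _, target, weight = syns[i]
    # ... update membrane state of `target` with `weight` here ...
    if i + 1 < len(syns):                          # self-callback, next delay
        heapq.heappush(queue, (t + syns[i + 1][0] - syns[i][0], pre, i + 1))

fire(0.0, 42)
while queue:
    deliver(heapq.heappop(queue))
```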
Just in time connectivity for large spiking networks
Lytton, William W.; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L
2008-01-01
The scale of large neuronal network simulations is memory-limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically-relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed – just-in-time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON’s standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory-limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that only added items to the queue when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run. PMID:18533821
Probabilistic load simulation: Code development status
NASA Astrophysics Data System (ADS)
Newell, J. F.; Ho, H.
1991-05-01
The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are induced in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge-based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
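A hedged sketch of the simulation module's role: Monte Carlo composition of a nominal load profile with random engine-to-engine and local variations, yielding percentile load spectra for the downstream probabilistic structural analysis. The profile and distributions below are illustrative placeholders, not CLS models:

```python
# Hedged sketch: Monte Carlo generation of composite load spectra from a
# nominal profile plus random variation; percentiles feed the PDA step.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)                  # s, notional start transient
nominal = 1.0e6 * np.minimum(t / 2.0, 1.0)       # Pa, ramp to rated pressure

N = 5000
scale = rng.normal(1.0, 0.03, size=(N, 1))        # engine-to-engine variation
noise = rng.normal(0.0, 2.0e4, size=(N, t.size))  # local fluctuations
loads = scale * nominal + noise

# Percentile spectra used by the probabilistic structural analysis.
p50, p99 = np.percentile(loads, [50, 99], axis=0)
print(f"99th-percentile peak load: {p99.max():.3e} Pa")
```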
NASA Technical Reports Server (NTRS)
Thorwald, Gregory; Mikulas, Martin M., Jr.
1992-01-01
The concept of a large-stroke adaptive stiffness cable-device for damping control of space structures with large mass is introduced. The cable is used to provide damping in several examples, and its performance is shown through numerical simulation results. Displacement and velocity information of how the structure moves is used to determine when to modify the cable's stiffness in order to provide a damping force.
High Order Numerical Simulation of Waves Using Regular Grids and Non-conforming Interfaces
2013-10-06
We study the propagation of waves over large regions of space with smooth, but not necessarily constant, material characteristics, separated into sub-domains by interfaces of arbitrary shape.
NASA Astrophysics Data System (ADS)
Morikawa, Y.; Murata, K. T.; Watari, S.; Kato, H.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Shimojo, S.
2010-12-01
The main methodologies of Solar-Terrestrial Physics (STP) to date have been theoretical, experimental and observational approaches, and computer simulation. Recently, "informatics" has emerged as a new (fourth) approach to STP studies: a methodology for analyzing large-scale data (observational and computer simulation data) to obtain new findings using a variety of data processing techniques. At NICT (National Institute of Information and Communications Technology, Japan) we are now developing a new research environment named "OneSpaceNet". OneSpaceNet is a cloud-computing environment specialized for scientific work, which connects many researchers through a high-speed network (JGN: Japan Gigabit Network). JGN is a wide-area backbone network operated by NICT; it provides a 10G network and many access points (AP) over Japan. OneSpaceNet also provides rich computing resources for research, such as supercomputers, large-scale data storage, licensed applications, visualization devices (like a tiled display wall: TDW), database/DBMS, cluster computers (4-8 nodes) for data processing, and communication devices. What is remarkable about the science cloud is that a user need only prepare a terminal (a low-cost PC); once the PC is connected to JGN2plus, the user can make full use of the rich resources of the science cloud. Using communication devices such as video-conference systems, streaming and reflector servers, and media players, users on OneSpaceNet can carry out research communications as if they belonged to the same (one) laboratory: they are members of a virtual laboratory. The specifications of the computing resources on OneSpaceNet are as follows. The data storage developed so far is almost 1 PB in size, and the number of data files managed on the cloud storage is growing and now exceeds 40,000,000. Notably, the disks forming the large-scale storage are distributed over 5 data centers across Japan, yet the storage system performs as one disk. Three supercomputers are allocated on the cloud: one in Tokyo, one in Osaka, and one in Nagoya. Simulation job data from any of the supercomputers are saved on the cloud data storage (in the same directory); it is a kind of virtual computing environment. The tiled display wall has 36 panels acting as one display, with a resolution as large as 18000x4300 pixels. This is sufficient to preview or analyze large-scale computer simulation data, and it also allows many researchers together to view multiple images (e.g., 100 pictures) on one screen. In our talk we also present a brief report of initial results using OneSpaceNet for global MHD simulations as an example of successful use of our science cloud: (i) ultra-high time resolution visualization of global MHD simulations using the large-scale storage and parallel processing system on the cloud; (ii) a database of real-time global MHD simulation and statistical analyses of the data; and (iii) a 3D web service for global MHD simulations.
26th Space Simulation Conference Proceedings. Environmental Testing: The Path Forward
NASA Technical Reports Server (NTRS)
Packard, Edward A.
2010-01-01
Topics covered include: A Multifunctional Space Environment Simulation Facility for Accelerated Spacecraft Materials Testing; Exposure of Spacecraft Surface Coatings in a Simulated GEO Radiation Environment; Gravity-Offloading System for Large-Displacement Ground Testing of Spacecraft Mechanisms; Microscopic Shutters Controlled by cRIO in Sounding Rocket; Application of a Physics-Based Stabilization Criterion to Flight System Thermal Testing; Upgrade of a Thermal Vacuum Chamber for 20 Kelvin Operations; A New Approach to Improve the Uniformity of Solar Simulator; A Perfect Space Simulation Storm; A Planetary Environmental Simulator/Test Facility; Collimation Mirror Segment Refurbishment inside ESA's Large Space; Space Simulation of the CBERS 3 and 4 Satellite Thermal Model in the New Brazilian 6x8m Thermal Vacuum Chamber; The Certification of Environmental Chambers for Testing Flight Hardware; Space Systems Environmental Test Facility Database (SSETFD), Website Development Status; Wallops Flight Facility: Current and Future Test Capabilities for Suborbital and Orbital Projects; Force Limited Vibration Testing of JWST NIRSpec Instrument Using Strain Gages; Investigation of Acoustic Field Uniformity in Direct Field Acoustic Testing; Recent Developments in Direct Field Acoustic Testing; Assembly, Integration and Test Centre in Malaysia: Integration between Building Construction Works and Equipment Installation; Complex Ground Support Equipment for Satellite Thermal Vacuum Test; Effect of Charging Electron Exposure on 1064nm Transmission through Bare Sapphire Optics and SiO2 over HfO2 AR-Coated Sapphire Optics; Environmental Testing Activities and Capabilities for Turkish Space Industry; Integrated Circuit Reliability Simulation in Space Environments; Micrometeoroid Impacts and Optical Scatter in Space Environment; Overcoming Unintended Consequences of Ambient Pressure Thermal Cycling Environmental Tests; Performance and Functionality Improvements to Next Generation Thermal Vacuum Control System; Robotic Lunar Lander Development Project: Three-Dimensional Dynamic Stability Testing and Analysis; Thermal Physical Properties of Thermal Coatings for Spacecraft in Wide Range of Environmental Conditions: Experimental and Theoretical Study; Molecular Contamination Generated in Thermal Vacuum Chambers; Preventing Cross Contamination of Hardware in Thermal Vacuum Chambers; Towards Validation of Particulate Transport Code; Updated Trends in Materials' Outgassing Technology; Electrical Power and Data Acquisition Setup for the CBERS 3 and 4 Satellite TBT; Method of Obtaining High Resolution Intrinsic Wire Boom Damping Parameters for Multi-Body Dynamics Simulations; and Thermal Vacuum Testing with Scalable Software Developed In-House.
Free-decay time-domain modal identification for large space structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Vanhorn, David A.; Doiron, Harold H.
1992-01-01
Concept definition studies for the Modal Identification Experiment (MIE), a proposed space flight experiment for the Space Station Freedom (SSF), have demonstrated advantages and compatibility of free-decay time-domain modal identification techniques with the on-orbit operational constraints of large space structures. Since practical experience with modal identification using actual free-decay responses of large space structures is very limited, several numerical and test data reduction studies were conducted. Major issues and solutions were addressed, including closely-spaced modes, wide frequency range of interest, data acquisition errors, sampling delay, excitation limitations, nonlinearities, and unknown disturbances during free-decay data acquisition. The data processing strategies developed in these studies were applied to numerical simulations of the MIE, test data from a deployable truss, and launch vehicle flight data. Results of these studies indicate free-decay time-domain modal identification methods can provide accurate modal parameters necessary to characterize the structural dynamics of large space structures.
Analysis of large space structures assembly: Man/machine assembly analysis
NASA Technical Reports Server (NTRS)
1983-01-01
Procedures for analyzing large space structures assembly via three primary modes: manual, remote and automated are outlined. Data bases on each of the assembly modes and a general data base on the shuttle capabilities to support structures assembly are presented. Task element times and structure assembly component costs are given to provide a basis for determining the comparative economics of assembly alternatives. The lessons learned from simulations of space structures assembly are detailed.
Large-aperture space optical system testing based on the scanning Hartmann.
Wei, Haisong; Yan, Feng; Chen, Xindong; Zhang, Hao; Cheng, Qiang; Xue, Donglin; Zeng, Xuefeng; Zhang, Xuejun
2017-03-10
Based on the Hartmann testing principle, this paper proposes a novel image quality testing technology which applies to a large-aperture space optical system. Compared with the traditional testing method through a large-aperture collimator, the scanning Hartmann testing technology has great advantages due to its simple structure, low cost, and ability to perform wavefront measurement of an optical system. The basic testing principle of the scanning Hartmann testing technology, data processing method, and simulation process are presented in this paper. Certain simulation results are also given to verify the feasibility of this technology. Furthermore, a measuring system is developed to conduct a wavefront measurement experiment for a 200 mm aperture optical system. The small deviation (6.3%) of root mean square (RMS) between experimental results and interferometric results indicates that the testing system can measure low-order aberration correctly, which means that the scanning Hartmann testing technology has the ability to test the imaging quality of a large-aperture space optical system.
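A minimal sketch of Hartmann-style data processing, assuming a square subaperture grid, an effective focal distance f, and a tiny three-term polynomial basis; all of these are illustrative, and the paper's scanning geometry and reconstruction method are not reproduced here:

```python
# Hedged sketch: spot displacements give local wavefront slopes, and a
# least-squares fit to a low-order basis recovers the wavefront and its RMS.
import numpy as np

f = 2.0                                    # m, assumed effective focal distance
x, y = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
x, y = x.ravel(), y.ravel()

# Measured spot displacements (m) -> slopes; synthetic defocus for the demo.
dx, dy = 1e-6 * x, 1e-6 * y
sx, sy = dx / f, dy / f

# Basis W = a*x + b*y + c*(x^2 + y^2): its gradient is linear in (a, b, c).
Gx = np.column_stack([np.ones_like(x), np.zeros_like(x), 2 * x])
Gy = np.column_stack([np.zeros_like(y), np.ones_like(y), 2 * y])
A = np.vstack([Gx, Gy])
coef, *_ = np.linalg.lstsq(A, np.concatenate([sx, sy]), rcond=None)

W = coef[0] * x + coef[1] * y + coef[2] * (x**2 + y**2)
print(f"reconstructed wavefront RMS: {np.std(W):.2e} m")
```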
Space construction base control system
NASA Technical Reports Server (NTRS)
1978-01-01
Aspects of an attitude control system were studied and developed for a large space base that is structurally flexible and whose mass properties change rather dramatically during its orbital lifetime. Topics of discussion include the following: (1) space base orbital pointing and maneuvering; (2) angular momentum sizing of actuators; (3) momentum desaturation selection and sizing; (4) multilevel control technique applied to configuration one; (5) one-dimensional model simulation; (6) N-body discrete coordinate simulation; (7) structural analysis math model formulation; and (8) discussion of control problems and control methods.
Large transient fault current test of an electrical roll ring
NASA Technical Reports Server (NTRS)
Yenni, Edward J.; Birchenough, Arthur G.
1992-01-01
The space station uses precision rotary gimbals to provide for sun tracking of its photovoltaic arrays. Electrical power, command signals and data are transferred across the gimbals by roll rings. Roll rings have been shown to be capable of highly efficient electrical transmission and long life, through tests conducted at the NASA Lewis Research Center and Honeywell's Satellite and Space Systems Division in Phoenix, AZ. Large potential fault currents inherent to the power system's DC distribution architecture have brought about the need to evaluate the effects of large transient fault currents on roll rings. A test recently conducted at Lewis subjected a roll ring to a simulated worst-case space station electrical fault. The system model used to obtain the fault profile is described, along with details of the reduced-order circuit that was used to simulate the fault. Test results comparing roll ring performance before and after the fault are also presented.
A k-space method for large-scale models of wave propagation in tissue.
Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C
2001-03-01
Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) ≤ c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
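As a concrete illustration of the k-t space propagator described above, the following sketch advances the 2-D wave equation in a homogeneous medium, where the cos(c k Δt) multiplier makes the update exact; the grid and pulse parameters are illustrative assumptions:

```python
# Hedged sketch of the k-t space temporal propagator for the homogeneous
# case: in k-space, p(t+dt) = 2*cos(c*|k|*dt)*p(t) - p(t-dt) exactly.
import numpy as np

nx, dx, c0, dt = 256, 1e-3, 1500.0, 2e-7      # grid points, m, m/s, s
kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
propagator = 2.0 * np.cos(c0 * k * dt)        # exact for homogeneous media

x = (np.arange(nx) - nx // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
p_now = np.exp(-(X**2 + Y**2) / (2 * (5 * dx) ** 2))  # initial Gaussian pulse
p_old = p_now.copy()                                  # zero initial velocity

for _ in range(200):
    p_hat = np.fft.fft2(p_now)
    p_new = np.real(np.fft.ifft2(propagator * p_hat)) - p_old
    p_old, p_now = p_now, p_new
print(f"peak pressure after 200 steps: {p_now.max():.3f}")
```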
Thermal/vacuum measurements of the Herschel space telescope by close-range photogrammetry
NASA Astrophysics Data System (ADS)
Parian, J. Amiri; Cozzani, A.; Appolloni, M.; Casarosa, G.
2017-11-01
In the framework of the development of a videogrammetric system to be used in thermal vacuum chambers at the European Space Research and Technology Centre (ESTEC) and other sites across Europe, the design of a network using micro-cameras was specified by the European Space Agency (ESA)-ESTEC. The selected test set-up is the photogrammetric test of the Herschel Satellite Flight Model in the ESTEC Large Space Simulator. The photogrammetric system will be used to verify the Herschel Telescope alignment and Telescope positioning with respect to the Cryostat Vacuum Vessel (CVV) inside the Large Space Simulator during Thermal-Vacuum/Thermal-Balance test phases. We designed a close-range photogrammetric network by heuristic simulation and a videogrammetric system with an overall accuracy of 1:100,000. A semi-automated image acquisition system, able to work at low temperatures (-170°C) and to acquire images according to the designed network, has been constructed by ESA-ESTEC. In this paper we present the videogrammetric system and sub-systems and the results of real measurements with a representative setup, similar to the set-up of the Herschel spacecraft, realized in the ESTEC Test Centre.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunayama, Tomomi; Padmanabhan, Nikhil; Heitmann, Katrin
Precision measurements of the large scale structure of the Universe require large numbers of high fidelity mock catalogs to accurately assess, and account for, the presence of systematic effects. We introduce and test a scheme for generating mock catalogs rapidly using suitably derated N-body simulations. Our aim is to reproduce the large scale structure and the gross properties of dark matter halos with high accuracy, while sacrificing the details of the halo's internal structure. By adjusting global and local time-steps in an N-body code, we demonstrate that we recover halo masses to better than 0.5% and the power spectrum to better than 1% both in real and redshift space for k = 1 h Mpc⁻¹, while requiring a factor of 4 less CPU time. We also calibrate the redshift spacing of outputs required to generate simulated light cones. We find that outputs separated by Δz = 0.05 allow us to interpolate particle positions and velocities to reproduce the real and redshift space power spectra to better than 1% (out to k = 1 h Mpc⁻¹). We apply these ideas to generate a suite of simulations spanning a range of cosmologies, motivated by the Baryon Oscillation Spectroscopic Survey (BOSS) but broadly applicable to future large scale structure surveys including eBOSS and DESI. As an initial demonstration of the utility of such simulations, we calibrate the shift in the baryonic acoustic oscillation peak position as a function of galaxy bias with higher precision than has been possible so far. This paper also serves to document the simulations, which we make publicly available.
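The Δz = 0.05 snapshot interpolation mentioned above can be illustrated with a cubic Hermite scheme that uses positions and velocities at both outputs. This is a hedged sketch with toy data; the paper's actual interpolation scheme and units are not specified here:

```python
# Hedged sketch: interpolate particle phase-space data between two
# snapshot outputs to build a light cone, using cubic Hermite basis
# functions so both endpoint positions and velocities are honored.
import numpy as np

def hermite(x0, v0, x1, v1, t0, t1, t):
    """Interpolate position at time t in [t0, t1]."""
    h = t1 - t0
    s = (t - t0) / h
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * x0 + h10 * h * v0 + h01 * x1 + h11 * h * v1

rng = np.random.default_rng(1)
x0 = rng.uniform(0, 1000, (5, 3))     # Mpc/h, snapshot A (toy values)
v0 = rng.normal(0, 3.0, (5, 3))       # Mpc/h per time unit
x1 = x0 + 0.1 * v0                    # snapshot B (toy dynamics)
v1 = v0.copy()

print(hermite(x0, v0, x1, v1, t0=0.0, t1=0.1, t=0.04))
```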
Large-Scale NASA Science Applications on the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Brooks, Walter
2005-01-01
Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.
Tether Impact Rate Simulation and Prediction with Orbiting Satellites
NASA Technical Reports Server (NTRS)
Harrison, Jim
2002-01-01
Space elevators and other large space structures have been studied and proposed as worthwhile by futuristic space planners for at least a couple of decades. In June 1999 the Marshall Space Flight Center sponsored a Space Elevator workshop in Huntsville, Alabama, to bring together technical experts and advanced planners to discuss the current status and to define the magnitude of the technical and programmatic problems connected with the development of these massive space systems. One obvious problem that was identified, although not for the first time, was the collision probability between space elevators and orbital debris. Debate and uncertainty presently exist about the extent of the threat to these large structures, one of which in this study is as large as a space elevator. We have tentatively concluded that orbital debris, although a major concern, is not sufficient justification to curtail the study and development of futuristic new-millennium concepts like the space elevator.
NASA Astrophysics Data System (ADS)
Chooramun, N.; Lawrence, P. J.; Galea, E. R.
2017-08-01
In all evacuation simulation tools, the space through which agents navigate and interact is represented by one of the following methods, namely Coarse regions, Fine nodes and Continuous regions. Each of the spatial representation methods has its benefits and limitations. For instance, the Coarse approach allows simulations to be processed very rapidly, but is unable to represent the interactions of the agents from an individual perspective; the Continuous approach provides a detailed representation of agent movement and interaction but suffers from relatively poor computational performance. The Fine nodal approach presents a compromise between the Continuous and Coarse approaches such that it allows agent interaction to be modelled while providing good computational performance. Our approach for representing space in an evacuation simulation tool differs in that it allows evacuation simulations to be run using a combination of Coarse regions, Fine nodes and Continuous regions. This approach, which we call Hybrid Spatial Discretisation (HSD), is implemented within the buildingEXODUS evacuation simulation software. The HSD incorporates the benefits of each of the spatial representation methods whilst providing an optimal environment for representing agent movement and interaction. In this work, we demonstrate the effectiveness of the HSD through its application to a moderately large case comprising an underground rail tunnel station with a population of 2,000 agents.
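A hedged structural sketch of such a hybrid discretisation: a building is a graph of regions, each Coarse, Fine, or Continuous, and an agent advances under the movement rule of whichever region it occupies. The names, rules, and numbers are illustrative assumptions, not buildingEXODUS internals:

```python
# Hedged sketch: agents traverse a graph of mixed-representation regions,
# trading per-agent detail for speed in the coarser regions.
from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    COARSE = 1      # aggregate flow between zones
    FINE = 2        # nodal grid, per-agent occupancy
    CONTINUOUS = 3  # free 2-D movement

@dataclass
class Region:
    name: str
    kind: Kind
    neighbours: list = field(default_factory=list)

@dataclass
class Agent:
    region: Region
    progress: float = 0.0   # abstract progress through the region

def step(agent, dt):
    # Placeholder movement rules per representation (notional speeds).
    speed = {Kind.COARSE: 5.0, Kind.FINE: 1.2, Kind.CONTINUOUS: 1.0}
    agent.progress += speed[agent.region.kind] * dt
    if agent.progress >= 10.0 and agent.region.neighbours:  # notional length
        agent.region, agent.progress = agent.region.neighbours[0], 0.0

platform = Region("platform", Kind.CONTINUOUS)
stair = Region("stair", Kind.FINE, [platform])
tunnel = Region("tunnel", Kind.COARSE, [stair])
a = Agent(tunnel)
for _ in range(40):
    step(a, dt=0.5)
print(a.region.name)
```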
X-ray simulations method for the large field of view
NASA Astrophysics Data System (ADS)
Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.
2018-03-01
In the standard approach, X-ray simulation is limited by the spatial sampling step needed to calculate convolution integrals of the Fresnel type. The sampling step is set explicitly by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is determined by the precision of the convolution calculation and is not connected with the spatial resolution of the optical scheme. In the developed approach the convolution in normal space is replaced by computation of the shear transformation of the ambiguity function in phase space. The spatial sampling is then determined by the spatial resolution of the optical scheme. The sampling step can differ in various directions because of source anisotropy. The approach was used to simulate images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize postprocessing methods.
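The scale of the sampling problem in the standard approach can be estimated directly: the radius of the n-th Fresnel zone is r_n = sqrt(n λ z), so the width of the outermost zone inside an aperture of radius R is roughly λz/(2R). A small illustrative calculation (Python; the numbers are invented, not taken from the paper):

    def fresnel_sampling(wavelength, distance, aperture_radius):
        """Zone count and grid step needed to resolve the last Fresnel zone.

        Zone radii follow r_n = sqrt(n * wavelength * distance); the width of
        the zone at radius R is approximately wavelength * distance / (2 * R).
        """
        n_zones = aperture_radius ** 2 / (wavelength * distance)
        last_zone_width = wavelength * distance / (2.0 * aperture_radius)
        return n_zones, last_zone_width

    # Example: 0.1 nm X-rays, 1 m propagation, 0.5 mm aperture radius
    n, dx = fresnel_sampling(1e-10, 1.0, 5e-4)
    print(f"~{n:.0f} zones; sampling step must stay below {dx * 1e9:.0f} nm")

The required step (here about 100 nm over a millimetre-scale aperture) is set purely by the convolution accuracy, which is exactly the cost the phase-space formulation avoids.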
Software for Engineering Simulations of a Spacecraft
NASA Technical Reports Server (NTRS)
Shireman, Kirk; McSwain, Gene; McCormick, Bernell; Fardelos, Panayiotis
2005-01-01
Spacecraft Engineering Simulation II (SES II) is a C-language computer program for simulating diverse aspects of operation of a spacecraft characterized by either three or six degrees of freedom. A functional model in SES can include a trajectory flight plan; a submodel of a flight computer running navigational and flight-control software; and submodels of the environment, the dynamics of the spacecraft, and sensor inputs and outputs. SES II features a modular, object-oriented programming style. SES II supports event-based simulations, which, in turn, create an easily adaptable simulation environment in which many different types of trajectories can be simulated by use of the same software. The simulation output consists largely of flight data. SES II can be used to perform optimization and Monte Carlo dispersion simulations. It can also be used to perform simulations for multiple spacecraft. In addition to its generic simulation capabilities, SES offers special capabilities for space-shuttle simulations: for this purpose, it incorporates submodels of the space-shuttle dynamics and a C-language version of the guidance, navigation, and control components of the space-shuttle flight software.
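SES II itself is a C program and its source is not reproduced here; the fragment below is only a generic Python sketch of the event-based pattern the abstract describes, in which a time-ordered queue drives the simulation and handlers schedule follow-up events (the event names are hypothetical):

    import heapq

    class Simulator:
        def __init__(self):
            self.queue = []   # (time, sequence, handler) tuples
            self.seq = 0      # tie-breaker so equal-time events stay ordered
            self.time = 0.0

        def schedule(self, t, handler):
            heapq.heappush(self.queue, (t, self.seq, handler))
            self.seq += 1

        def run(self, t_end):
            while self.queue and self.queue[0][0] <= t_end:
                self.time, _, handler = heapq.heappop(self.queue)
                handler(self)   # a handler may schedule follow-up events

    def engine_cutoff(sim):
        print(f"t={sim.time:6.1f} s  main engine cutoff")
        sim.schedule(sim.time + 5.0, stage_separation)

    def stage_separation(sim):
        print(f"t={sim.time:6.1f} s  stage separation")

    sim = Simulator()
    sim.schedule(480.0, engine_cutoff)
    sim.run(t_end=600.0)

Because the engine only dispatches whatever events a flight plan schedules, the same loop can serve many different trajectory types, which is the adaptability the abstract credits to the event-based design.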
Large space system: Charged particle environment interaction technology
NASA Technical Reports Server (NTRS)
Stevens, N. J.; Roche, J. C.; Grier, N. T.
1979-01-01
Large, high-voltage space power systems are proposed for future space missions. These systems must operate in the charged-particle environment of space, and interactions between this environment and the high-voltage surfaces are possible. Ground simulation testing indicated that the dielectric surfaces that usually surround biased conductors can influence these interactions. For positive voltages greater than 100 volts, it has been found that the dielectrics contribute to the current collection area. For negative voltages greater than -500 volts, the data indicate that the dielectrics contribute to discharges. A large, high-voltage power system operating in geosynchronous orbit was analyzed. Results of this analysis indicate that very strong electric fields exist in these power systems.
The evolution of space simulation
NASA Technical Reports Server (NTRS)
Edwards, Arthur A.
1992-01-01
Thirty years have passed since the first large (more than 15 ft diameter) thermal vacuum space simulation chambers were built in this country. Many changes have been made since then, and the industry has learned a great deal as the designs have evolved in that time. I was fortunate to have been part of that beginning, and have participated in many of the changes that have occurred since. While talking with vacuum friends recently, I realized that many of the engineers working in the industry today may not be aware of the evolution of space simulation because they did not experience the changes that brought us today's technology. With that in mind, it seems to be appropriate to take a moment and review some of the events that were a big part of the past thirty years in the thermal vacuum business. Perhaps this review will help to understand a little of the 'why' as well as the 'how' of building and operating large thermal vacuum chambers.
Simulation study on dynamics model of two kinds of on-orbit soft-contact mechanism
NASA Astrophysics Data System (ADS)
Ye, X.; Dong, Z. H.; Yang, F.
2018-05-01
Aiming at the problems that the operating conditions of the space manipulator are harsh and that the manipulator cannot bear a large collision momentum, this paper presents a new concept and technical method, namely soft-contact technology. Based on the ADAMS dynamics software, this paper compares and simulates two models of an on-orbit soft-contact mechanism, one based on a bionic model and one on an integrated double-joint model. The main purpose is to verify the path-planning ability and the momentum-buffering ability of mechanisms based on the different design concepts. The simulation results show that both mechanism models provide the path-planning function before contact with the space target, as well as momentum buffering and controllability during the contact process.
ClustENM: ENM-Based Sampling of Essential Conformational Space at Full Atomic Resolution
Kurkcuoglu, Zeynep; Bahar, Ivet; Doruker, Pemra
2016-01-01
Accurate sampling of conformational space and, in particular, the transitions between functional substates has been a challenge in molecular dynamics (MD) simulations of large biomolecular systems. We developed an Elastic Network Model (ENM)-based computational method, ClustENM, for sampling large conformational changes of biomolecules with various sizes and oligomerization states. ClustENM is an iterative method that combines ENM with energy minimization and clustering steps. It is an unbiased technique, which requires only an initial structure as input, and no information about the target conformation. To test the performance of ClustENM, we applied it to six biomolecular systems: adenylate kinase (AK), calmodulin, p38 MAP kinase, HIV-1 reverse transcriptase (RT), triosephosphate isomerase (TIM), and the 70S ribosomal complex. The generated ensembles of conformers determined at atomic resolution show good agreement with experimental data (979 structures resolved by X-ray and/or NMR) and encompass the subspaces covered in independent MD simulations for TIM, p38, and RT. ClustENM emerges as a computationally efficient tool for characterizing the conformational space of large systems at atomic detail, in addition to generating a representative ensemble of conformers that can be advantageously used in simulating substrate/ligand-binding events. PMID:27494296
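The iterative deform-minimize-cluster loop can be sketched as follows (a runnable Python toy, not the published ClustENM implementation; the ENM mode calculation, energy minimization, and clustering are replaced by trivial stand-ins):

    import numpy as np

    rng = np.random.default_rng(0)

    def anm_modes(conf, n_modes):
        # Stand-in for an elastic-network normal-mode calculation:
        # random orthonormal directions in conformation space.
        q, _ = np.linalg.qr(rng.standard_normal((conf.size, n_modes)))
        return q.T

    def minimize(conf):
        return conf * 0.99  # stand-in for a force-field energy minimization

    def representatives(confs, k):
        # Stand-in for clustering: keep k evenly spaced members.
        idx = np.linspace(0, len(confs) - 1, k).astype(int)
        return [confs[i] for i in idx]

    def clustenm_like(start, generations=3, n_modes=3, step=1.0, k=5):
        generation, ensemble = [start], []
        for _ in range(generations):
            candidates = []
            for conf in generation:
                for mode in anm_modes(conf, n_modes):
                    for sign in (+step, -step):
                        candidates.append(minimize(conf + sign * mode))
            ensemble.extend(candidates)
            generation = representatives(candidates, k)  # seeds for the next round
        return ensemble

    print(len(clustenm_like(np.zeros(30))), "conformers generated")

The essential points the toy preserves are that only an initial structure is required and that the clustering step keeps each generation from collapsing onto redundant conformers.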
Large size space construction for space exploitation
NASA Astrophysics Data System (ADS)
Kondyurin, Alexey
2016-07-01
Space exploitation is impossible without large space structures. We need to make sufficiently large volumes of pressurized protecting frames for crew, passengers, space processing equipment, etc. We have to be unlimited in space. At present the size and mass of space constructions are limited by the capacity of the launch vehicle, which limits our future in the exploitation of space by humans and in the development of space industry. A large-size space construction can be made using the curing technology of fibre-filled composites with a reactionable matrix applied directly in free space. For curing, a fabric impregnated with a liquid matrix (prepreg) is prepared in terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflating. After the polymerization reaction, the durable construction can be fitted out with air, apparatus, and life support systems. Our experimental studies of the curing processes in a simulated free space environment showed that the curing of composites in free space is possible, and that large-size space constructions can be developed. Projects for a space station, Moon base, Mars base, mining station, interplanetary spaceship, telecommunication station, space observatory, space factory, antenna dish, radiation shield, and solar sail are proposed and overviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA program of stratospheric balloons, and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).
MSFC Three Point Docking Mechanism design review
NASA Technical Reports Server (NTRS)
Schaefer, Otto; Ambrosio, Anthony
1992-01-01
In the next few decades, we will be launching expensive satellites and space platforms that will require recovery for economic reasons, because of initial malfunction, for servicing and repairs, or out of concern for post-lifetime debris removal. The planned availability of a Three Point Docking Mechanism (TPDM) is a positive step towards an operational satellite retrieval infrastructure. This study effort supports NASA/MSFC engineering work in developing an automated docking capability. The work was performed by the Grumman Space & Electronics Group as a concept evaluation/test for the Tumbling Satellite Retrieval Kit. Simulation of a TPDM capture was performed in Grumman's Large Amplitude Space Simulator (LASS) using mockups of both parts (the mechanism and payload). Similar TPDM simulation activities and more extensive hardware testing were performed at NASA/MSFC in the Flight Robotics Laboratory and the Space Station/Space Operations Mechanism Test Bed (6-DOF Facility).
NASA Astrophysics Data System (ADS)
Lu, Wei; Sun, Jianfeng; Hou, Peipei; Xu, Qian; Xi, Yueli; Zhou, Yu; Zhu, Funan; Liu, Liren
2017-08-01
Performance of satellite laser communications between GEO and LEO satellites can be influenced by background light noise appearing in the field of view due to sunlight or planets and some comets. Such influences should be studied on a ground testing platform before the space application. In this paper, we introduce a simulator that can reproduce the real case of background light noise in the space environment during data exchange via laser beam between two distant satellites. This simulator can simulate not only the effect of a multi-wavelength spectrum, but also the effects of adjustable field-of-view angles, a large range of adjustable optical power, and adjustable deflection speeds of the light noise in the space environment. We integrate these functions into a small, compact device for easy mobile use. Software control via a personal computer allows these functions to be adjusted arbitrarily.
Simulation studies using multibody dynamics code DART
NASA Technical Reports Server (NTRS)
Keat, James E.
1989-01-01
DART is a multibody dynamics code developed by Photon Research Associates for the Air Force Astronautics Laboratory (AFAL). The code is intended primarily to simulate the dynamics of large space structures, particularly during the deployment phase of their missions. DART integrates nonlinear equations of motion numerically. The number of bodies in the system being simulated is arbitrary. The bodies' interconnection joints can have an arbitrary number of degrees of freedom between 0 and 6. Motions across the joints can be large. Provision is made for simulating on-board control systems. Conservation of energy and momentum, when applicable, are used to evaluate DART's performance. After a brief description of DART, studies made to test the program prior to its delivery to AFAL are described. The first is a large-angle reorientation of a flexible spacecraft consisting of a rigid central hub and four flexible booms. Reorientation was accomplished by a single-cycle sine-wave torque input. In the second study, an appendage mounted on a spacecraft was slewed through a large angle. Four closed-loop control systems provided control of this appendage and of the spacecraft's attitude. The third study simulated the deployment of the rim of a bicycle-wheel-configuration large space structure. This system contained 18 bodies. An interesting and unexpected feature of the dynamics was a pulsing phenomenon experienced by the stays whose payout was used to control the deployment. A short description of the current status of DART is given.
Propagative selection of tilted array patterns in directional solidification
NASA Astrophysics Data System (ADS)
Song, Younggil; Akamatsu, Silvère; Bottin-Rousseau, Sabine; Karma, Alain
2018-05-01
We investigate the dynamics of tilted cellular/dendritic array patterns that form during directional solidification of a binary alloy when a preferred-growth crystal axis is misoriented with respect to the temperature gradient. In situ experimental observations and phase-field simulations in thin samples reveal the existence of a propagative source-sink mechanism of array spacing selection that operates on larger space and time scales than the competitive growth at play during the initial solidification transient. For tilted arrays, tertiary branching at the diverging edge of the sample acts as a source of new cells with a spacing that can be significantly larger than the initial average spacing. A spatial domain of large spacing then invades the sample propagatively. It thus yields a uniform spacing everywhere, selected independently of the initial conditions, except in a small region near the converging edge of the sample, which acts as a sink of cells. We propose a discrete geometrical model that describes the large-scale evolution of the spatial spacing profile based on the local dependence of the cell drift velocity on the spacing. We also derive a nonlinear advection equation that predicts the invasion velocity of the large-spacing domain, and sheds light on the fundamental nature of this process. The models also account for more complex spacing modulations produced by an irregular dynamics at the source, in good quantitative agreement with both phase-field simulations and experiments. This basic knowledge provides a theoretical basis to improve the processing of single crystals or textured polycrystals for advanced materials.
Concurrent processing simulation of the space station
NASA Technical Reports Server (NTRS)
Gluck, R.; Hale, A. L.; Sunkel, John W.
1989-01-01
The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, each requiring significant advancement of the state of the art. These were: (1) the development of an explicit mathematical model via symbol manipulation of a flexible, multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent versus sequential digital computation will grow substantially as the computational load increases. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
This final report on computational methods and software systems for dynamics and control of large space structures covers progress to date, projected developments in the final months of the grant, and conclusions. Pertinent reports and papers that have not appeared in scientific journals (or have not yet appeared in final form) are enclosed. The grant has supported research in two key areas of crucial importance to the computer-based simulation of large space structures. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area, as reported here, involves massively parallel computers.
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-07-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
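As a reminder of the REMD machinery mentioned above, the exchange test at the heart of T-REMD is a one-line Metropolis criterion (a generic sketch, not GENESIS source code):

    import math, random

    def attempt_swap(E_i, T_i, E_j, T_j, kB=0.0019872):  # kcal/mol/K
        """Metropolis acceptance for exchanging replicas i and j in T-REMD.

        Delta = (1/(kB*T_i) - 1/(kB*T_j)) * (E_j - E_i);
        accept with probability min(1, exp(-Delta)).
        """
        delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_j - E_i)
        return delta <= 0.0 or random.random() < math.exp(-delta)

    # Example: neighbouring replicas at 300 K and 310 K
    print(attempt_swap(E_i=-1000.0, T_i=300.0, E_j=-995.0, T_j=310.0))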
Towards physics responsible for large-scale Lyman-α forest bias parameters
Agnieszka M. Cieplak; Slosar, Anze
2016-03-08
Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large-scale density (b_δ) and velocity gradient (b_η) biases of the Lyman-α forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real-space, and progressing through redshift-space with no thermal broadening, redshift-space with thermal broadening, and hydrodynamically simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his b_η formula is exact in the limit of no thermal broadening. Since introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of b_η and the small-scale flux PDF might be a sensitive probe of the thermal state of the IGM. Lastly, we find that large-scale biases derived from the smoothed total matter field are within 10-20% of those based on hydrodynamical quantities, in line with other measurements in the literature.
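For reference, the fluctuating Gunn-Peterson approximation used as the starting point above maps the density field directly to transmitted flux; a minimal numerical sketch in its common textbook form (the parameter values here are purely illustrative):

    import numpy as np

    def fgpa_flux(delta, A=0.3, gamma=1.6):
        """Fluctuating Gunn-Peterson approximation: F = exp(-tau).

        tau = A * (1 + delta)^beta with beta = 2 - 0.7*(gamma - 1), where gamma
        is the slope of the temperature-density relation and A absorbs the
        photoionization rate and cosmology.
        """
        beta = 2.0 - 0.7 * (gamma - 1.0)
        return np.exp(-A * np.clip(1.0 + delta, 1e-6, None) ** beta)

    # Transmitted flux for a toy lognormal overdensity field
    delta = np.exp(np.random.default_rng(1).normal(0.0, 0.5, 5)) - 1.0
    print(fgpa_flux(delta))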
Frank, Martin
2015-01-01
Complex carbohydrates usually have a large number of rotatable bonds and consequently a large number of theoretically possible conformations can be generated (combinatorial explosion). The application of systematic search methods for conformational analysis of carbohydrates is therefore limited to disaccharides and trisaccharides in a routine analysis. An alternative approach is to use Monte-Carlo methods or (high-temperature) molecular dynamics (MD) simulations to explore the conformational space of complex carbohydrates. This chapter describes how to use MD simulation data to perform a conformational analysis (conformational maps, hydrogen bonds) of oligosaccharides and how to build realistic 3D structures of large polysaccharides using Conformational Analysis Tools (CAT).
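A conformational map of the kind mentioned above is essentially a free-energy surface over the glycosidic dihedral angles. A minimal sketch with synthetic angle series (CAT itself is not shown; any trajectory-analysis tool could supply the phi/psi arrays):

    import numpy as np

    def conformational_map(phi, psi, bins=36, T=300.0):
        """Free-energy map G(phi, psi) = -kB*T*ln p(phi, psi) from dihedral series."""
        kB = 0.0019872  # kcal/mol/K
        with np.errstate(divide="ignore"):
            H, xe, ye = np.histogram2d(phi, psi, bins=bins,
                                       range=[[-180, 180], [-180, 180]],
                                       density=True)
            G = -kB * T * np.log(H)   # empty bins become +inf
        return G - np.min(G[np.isfinite(G)]), xe, ye

    # Synthetic stand-in for glycosidic phi/psi angles from an MD trajectory
    rng = np.random.default_rng(2)
    phi = rng.normal(-70.0, 15.0, 10000)
    psi = rng.normal(120.0, 20.0, 10000)
    G, xedges, yedges = conformational_map(phi, psi)
    print("lowest-energy bin:", np.unravel_index(np.argmin(G), G.shape))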
A hypervelocity launcher for simulated large fragment space debris impacts at 10 km/s
NASA Technical Reports Server (NTRS)
Tullos, R. J.; Gray, W. M.; Mullin, S. A.; Cour-Palais, B. G.
1989-01-01
The background, design, and testing of two explosive launchers for simulating large fragment space debris impacts are presented. The objective was to develop a launcher capable of launching one gram aluminum fragments at velocities of 10 km/s. The two launchers developed are based on modified versions of an explosive shaped charge, common in many military weapons. One launcher design has yielded a stable fragment launch of approximately one gram of aluminum at 8.93 km/s velocity. The other design yielded velocities in excess of 10 km/s, but failed to produce a cohesive fragment launch. This work is ongoing, and future plans are given.
Biofidelic Human Activity Modeling and Simulation with Large Variability
2014-11-25
A systematic approach was developed for biofidelic human activity modeling and simulation by using body scan data and motion capture data to...replicate a human activity in 3D space. Since technologies for simultaneously capturing human motion and dynamic shapes are not yet ready for practical use, a...that can replicate a human activity in 3D space with the true shape and true motion of a human. Using this approach, a model library was built to
Variations of cosmic large-scale structure covariance matrices across parameter space
NASA Astrophysics Data System (ADS)
Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte
2017-03-01
The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles up to ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.
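The underlying idea of treating the covariance as a smooth function of the cosmological parameters can be caricatured in a few lines (a first-order expansion only; the paper's basis built from third-order Eulerian perturbation theory is far richer):

    import numpy as np

    def covariance_model(theta, theta0, C0, dC):
        """First-order model C(theta) ~ C0 + sum_i (theta_i - theta0_i) * dC[i].

        C0 is the covariance at the fiducial point theta0; dC[i] are derivative
        matrices w.r.t. each parameter, precomputed once instead of running
        simulations at every point in parameter space.
        """
        C = C0.copy()
        for dt_i, dC_i in zip(np.asarray(theta) - np.asarray(theta0), dC):
            C += dt_i * dC_i
        return C

    # Two-parameter toy example with 3x3 matrices
    C0 = np.eye(3)
    dC = [0.1 * np.ones((3, 3)), np.diag([0.2, 0.1, 0.05])]
    print(covariance_model([0.32, 0.81], [0.30, 0.80], C0, dC))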
GCR Simulator Development Status at the NASA Space Radiation Laboratory
NASA Technical Reports Server (NTRS)
Slaba, T. C.; Norbury, J. W.; Blattnig, S. R.
2015-01-01
There are large uncertainties connected to the biological response for exposure to galactic cosmic rays (GCR) on long duration deep space missions. In order to reduce the uncertainties and gain understanding about the basic mechanisms through which space radiation initiates cancer and other endpoints, radiobiology experiments are performed with mono-energetic ions beams. Some of the accelerator facilities supporting such experiments have matured to a point where simulating the broad range of particles and energies characteristic of the GCR environment in a single experiment is feasible from a technology, usage, and cost perspective. In this work, several aspects of simulating the GCR environment at the NASA Space Radiation Laboratory (NSRL) are discussed. First, comparisons are made between direct simulation of the external, free space GCR field, and simulation of the induced tissue field behind shielding. It is found that upper energy constraints at NSRL limit the ability to simulate the external, free space field directly (i.e. shielding placed in the beam line in front of a biological target and exposed to a free space spectrum). Second, a reference environment for the GCR simulator and suitable for deep space missions is identified and described in terms of fluence and integrated dosimetric quantities. Analysis results are given to justify the use of a single reference field over a range of shielding conditions and solar activities. Third, an approach for simulating the reference field at NSRL is presented. The approach directly considers the hydrogen and helium energy spectra, and the heavier ions are collectively represented by considering the linear energy transfer (LET) spectrum. While many more aspects of the experimental setup need to be considered before final implementation of the GCR simulator, this preliminary study provides useful information that should aid the final design. Possible drawbacks of the proposed methodology are discussed and weighed against alternative simulation strategies.
A molecular simulation protocol to avoid sampling redundancy and discover new states.
Bacci, Marco; Vitalis, Andreas; Caflisch, Amedeo
2015-05-01
For biomacromolecules or their assemblies, experimental knowledge is often restricted to specific states. Ambiguity pervades simulations of these complex systems because there is no prior knowledge of relevant phase space domains, and sampling recurrence is difficult to achieve. In molecular dynamics methods, ruggedness of the free energy surface exacerbates this problem by slowing down the unbiased exploration of phase space. Sampling is inefficient if dwell times in metastable states are large. We suggest a heuristic algorithm to terminate and reseed trajectories run in multiple copies in parallel. It uses a recent method to order snapshots, which provides notions of "interesting" and "unique" for individual simulations. We define criteria to guide the reseeding of runs from more "interesting" points if they sample overlapping regions of phase space. Using a pedagogical example and an α-helical peptide, the approach is demonstrated to amplify the rate of exploration of phase space and to discover metastable states not found by conventional sampling schemes. Evidence is provided that accurate kinetics and pathways can be extracted from the simulations. The method, termed PIGS for Progress Index Guided Sampling, proceeds in unsupervised fashion, is scalable, and benefits synergistically from larger numbers of replicas. Results confirm that the underlying ideas are appropriate and sufficient to enhance sampling. In molecular simulations, errors caused by not exploring relevant domains in phase space are always unquantifiable and can be arbitrarily large. Our protocol adds to the toolkit available to researchers in reducing these types of errors. This article is part of a Special Issue entitled "Recent developments of molecular dynamics". Copyright © 2014 Elsevier B.V. All rights reserved.
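In outline, the terminate-and-reseed heuristic reads as below (a runnable Python toy in which 1D random walkers stand in for MD replicas; actual PIGS orders snapshots with a progress index rather than the simple distance criterion used here):

    import numpy as np

    rng = np.random.default_rng(3)

    def propagate(x, n_steps=100):
        """Stand-in for a short MD stint: a noisy walk on a rough landscape."""
        for _ in range(n_steps):
            x += rng.normal(0.0, 0.05) - 0.1 * np.sin(4.0 * x)
        return x

    def pigs_like(n_replicas=8, n_cycles=20, overlap=0.2):
        xs = np.zeros(n_replicas)          # all copies start from one state
        for _ in range(n_cycles):
            xs = np.array([propagate(x) for x in xs])
            # "Interesting" here = far from the already-visited origin.
            order = np.argsort(-np.abs(xs))
            kept = []
            for i in order:
                if all(abs(xs[i] - xs[j]) > overlap for j in kept):
                    kept.append(i)         # unique enough: keep running
                else:
                    xs[i] = xs[kept[0]]    # redundant: reseed from a leader
        return xs

    print(np.sort(pigs_like()))

The reseeding step is what amplifies exploration: computing time is continually withdrawn from replicas that overlap in phase space and reinvested at the most promising frontier.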
A general-purpose development environment for intelligent computer-aided training systems
NASA Technical Reports Server (NTRS)
Savely, Robert T.
1990-01-01
Space station training will be a major task, requiring the creation of large numbers of simulation-based training systems for crew, flight controllers, and ground-based support personnel. Given the long duration of space station missions and the large number of activities supported by the space station, the extension of space shuttle training methods to space station training may prove to be impractical. The application of artificial intelligence technology to simulation training can provide the ability to deliver individualized training to large numbers of personnel in a distributed workstation environment. The principal objective of this project is the creation of a software development environment which can be used to build intelligent training systems for procedural tasks associated with the operation of the space station. Current NASA Johnson Space Center projects and joint projects with other NASA operational centers will result in specific training systems for existing space shuttle crew, ground support personnel, and flight controller tasks. Concurrently with the creation of these systems, a general-purpose development environment for intelligent computer-aided training systems will be built. Such an environment would permit the rapid production, delivery, and evolution of training systems for space station crew, flight controllers, and other support personnel. The widespread use of such systems will serve to preserve task and training expertise, support the training of many personnel in a distributed manner, and ensure the uniformity and verifiability of training experiences. As a result, significant reductions in training costs can be realized while safety and the probability of mission success can be enhanced.
NASA Astrophysics Data System (ADS)
Owolabi, Kolade M.
2018-03-01
In this work, we are concerned with the solution of non-integer space-fractional reaction-diffusion equations with the Riemann-Liouville space-fractional derivative in high dimensions. We approximate the Riemann-Liouville derivative with the Fourier transform method and advance the resulting system in time with any time-stepping solver. In the numerical experiments, we expect the travelling wave to arise from the given initial condition on the computational domain (-∞, ∞), which we truncate at a large but finite value L. It is necessary to choose L large enough to give the waves sufficient room to spread. Experimental results in high dimensions on space-fractional reaction-diffusion models with applications to biological models (the Fisher and Allen-Cahn equations) are considered. Simulation results reveal that fractional reaction-diffusion equations can give rise to a range of physical phenomena not seen in the integer-order case. As a result, most meaningful and practical situations are found to be modelled with the concept of fractional calculus.
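A minimal one-dimensional sketch of this Fourier treatment follows, applied to a space-fractional Fisher equation u_t = -K (-Δ)^(α/2) u + u(1 - u). Note that the sketch uses the Riesz symbol |k|^α as the Fourier-space stand-in for the fractional operator, whereas the paper works with the Riemann-Liouville form and in higher dimensions:

    import numpy as np

    def fractional_fisher_1d(alpha=1.6, K=0.1, L=200.0, N=2048,
                             dt=0.05, n_steps=400):
        """Semi-implicit Fourier spectral solver: linear term implicit, reaction explicit."""
        x = np.linspace(-L / 2, L / 2, N, endpoint=False)
        k = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
        symbol = K * np.abs(k) ** alpha           # Fourier symbol of the operator
        u = 0.5 * (1.0 - np.tanh(0.5 * x))        # smooth front as initial condition
        for _ in range(n_steps):
            reaction = u * (1.0 - u)
            u_hat = (np.fft.fft(u) + dt * np.fft.fft(reaction)) / (1.0 + dt * symbol)
            u = np.real(np.fft.ifft(u_hat))
        return x, u

    x, u = fractional_fisher_1d()
    print("front position ~", x[np.argmin(np.abs(u - 0.5))])

Lowering alpha below 2 fattens the tails of the underlying jump kernel and accelerates the front relative to the classical alpha = 2 case, the kind of behaviour change the abstract refers to.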
NASA Technical Reports Server (NTRS)
Fletcher, Lauren E.; Aldridge, Ann M.; Wheelwright, Charles; Maida, James
1997-01-01
Task illumination has a major impact on human performance: What a person can perceive in his environment significantly affects his ability to perform tasks, especially in space's harsh environment. Training for lighting conditions in space has long depended on physical models and simulations to emulate the effect of lighting, but such tests are expensive and time-consuming. To evaluate lighting conditions not easily simulated on Earth, personnel at NASA Johnson Space Center's (JSC) Graphics Research and Analysis Facility (GRAF) have been developing computerized simulations of various illumination conditions using the ray-tracing program, Radiance, developed by Greg Ward at Lawrence Berkeley Laboratory. Because these computer simulations are only as accurate as the data used, accurate information about the reflectance properties of materials and light distributions is needed. JSC's Lighting Environment Test Facility (LETF) personnel gathered material reflectance properties for a large number of paints, metals, and cloths used in the Space Shuttle and Space Station programs, and processed these data into reflectance parameters needed for the computer simulations. They also gathered lamp distribution data for most of the light sources used, and validated the ability to accurately simulate lighting levels by comparing predictions with measurements for several ground-based tests. The result of this study is a database of material reflectance properties for a wide variety of materials, and lighting information for most of the standard light sources used in the Shuttle/Station programs. The combination of the Radiance program and GRAF's graphics capability form a validated computerized lighting simulation capability for NASA.
Polymerisation processes in epoxy resins under influence of free space environment
NASA Astrophysics Data System (ADS)
Kondyurin, A.; Lauke, B.; Kondyurina, I.
The creation of large-size constructions in space or on celestial bodies is possible by way of chemical reactions of viscous liquid components under space environment conditions [1-2]. In particular, a new technology for a large-size space module for electronics, energy, and materials production is being developed on the basis of a polymerisation technique. The factors of the free space environment have a significant influence on the polymerisation processes. The polymerisation processes in active liquid components are sensitive to microgravitation, temperature variations (-150…+150 °C), high vacuum (10⁻³…10⁻⁷ Pa), atomic oxygen flux (in LEO), UV and VUV irradiation, X-ray and γ-irradiation, and high-energy electron and ion fluxes. Experiments on polymerisation processes under simulated free space conditions were conducted. The influences of high vacuum, a high-energy ion beam, and rf- and mw-plasma on the polymerisation of epoxy resins were observed, as were the effects of low-molecular-component evaporation, free radical formation, additional chemical reactions, and mixing processes during polymerisation. Our results showed that the space factors can initiate the polymerisation reaction in the epoxy matrix of glass and carbon fibre composites. The result can be used in a technology for large-size constructions in Earth orbit, in deep space, and on celestial bodies: deployed antennas, solar sail stringers, solar shield stringers, frames for large-size space stations, frames for Moon, Mars, and asteroid bases, and frames for space plants in Earth orbit and on other celestial bodies. The study was partially supported by the Alexander von Humboldt Foundation (A. Kondyurin) and the European Space Agency, ESTEC (contract 17083/03/NL/SFe, "Space Environmental Effects on the Polymerisation of Composite Structures"). 1. A. Kondyurin, B. Lauke, Polymerisation processes in simulated free space conditions, Proceedings of the 9th International Symposium on Materials in a Space Environment, Noordwijk, The Netherlands, 16-20 June 2003, ESA SP-540, September 2003, pp. 75-80. 2. V. A. Briskman, T. M. Yudina, K. G. Kostarev, A. V. Kondyurin, V. B. Leontyev, M. G. Levkovich, A. L. Mashinsky, G. S. Nechitailo, Polymerization in microgravity as a new process in space technology, Acta Astronautica, vol. 48, no. 2-3, 2001, pp. 169-180.
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, V.
2012-09-01
As a result of continual space activity since the 1950s, there are now a large number of man-made Resident Space Objects (RSOs) orbiting the Earth. Because of the large number of items and their relative speeds, the possibility of destructive collisions involving important space assets is now of significant concern to users and operators of space-borne technologies. As a result, a growing number of international agencies are researching methods for improving techniques to maintain Space Situational Awareness (SSA). Computer simulation is a method commonly used by many countries to validate competing methodologies prior to full scale adoption. The use of supercomputing and/or reduced scale testing is often necessary to effectively simulate such a complex problem on today's computers. Recently the authors presented a simulation aimed at reducing the computational burden by selecting the minimum level of fidelity necessary for contrasting methodologies and by utilising multi-core CPU parallelism for increased computational efficiency. The resulting simulation runs on a single PC while maintaining the ability to effectively evaluate competing methodologies. Nonetheless, the ability to control the scale and expand upon the computational demands of the sensor management system is limited. In this paper, we examine the advantages of increasing the parallelism of the simulation by means of General Purpose computing on Graphics Processing Units (GPGPU). As many sub-processes pertaining to SSA management are independent, we demonstrate how parallelisation via GPGPU has the potential to significantly enhance not only research into techniques for maintaining SSA, but also to enhance the level of sophistication of existing space surveillance sensors and sensor management systems. Nonetheless, the use of GPGPU imposes certain limitations and adds to the implementation complexity, both of which require consideration to achieve an effective system. We discuss these challenges and how they can be overcome. We further describe an application of the parallelised system where visibility prediction is used to enhance sensor management. This facilitates significant improvement in maximum catalogue error when RSOs become temporarily unobservable. The objective is to demonstrate the enhanced scalability and increased computational capability of the system.
Mixed-field GCR Simulations for Radiobiological Research using Ground Based Accelerators
NASA Astrophysics Data System (ADS)
Kim, Myung-Hee Y.; Rusek, Adam; Cucinotta, Francis
Space radiation is comprised of a large number of particle types and energies, which have differential ionization power from high energy protons to high charge and energy (HZE) particles and secondary neutrons produced by galactic cosmic rays (GCR). Ground based accelerators such as the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL) are used to simulate space radiation for radiobiology research and dosimetry, electronics parts, and shielding testing using mono-energetic beams for single ion species. As a tool to support research on new risk assessment models, we have developed a stochastic model of heavy ion beams and space radiation effects, the GCR Event-based Risk Model computer code (GERMcode). For radiobiological research on mixed-field space radiation, a new GCR simulator at NSRL is proposed. The NSRL-GCR simulator, which implements the rapid switching mode and the higher energy beam extraction to 1.5 GeV/u, can integrate multiple ions into a single simulation to create GCR Z-spectrum in major energy bins. After considering the GCR environment and energy limitations of NSRL, a GCR reference field is proposed after extensive simulation studies using the GERMcode. The GCR reference field is shown to reproduce the Z and LET spectra of GCR behind shielding within 20 percent accuracy compared to simulated full GCR environments behind shielding. A major challenge for space radiobiology research is to consider chronic GCR exposure of up to 3-years in relation to simulations with cell and animal models of human risks. We discuss possible approaches to map important biological time scales in experimental models using ground-based simulation with extended exposure of up to a few weeks and fractionation approaches at a GCR simulator.
Mixed-field GCR Simulations for Radiobiological Research Using Ground Based Accelerators
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Rusek, Adam; Cucinotta, Francis A.
2014-01-01
Space radiation is comprised of a large number of particle types and energies, which have differential ionization power from high energy protons to high charge and energy (HZE) particles and secondary neutrons produced by galactic cosmic rays (GCR). Ground based accelerators such as the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL) are used to simulate space radiation for radiobiology research and dosimetry, electronics parts, and shielding testing using mono-energetic beams for single ion species. As a tool to support research on new risk assessment models, we have developed a stochastic model of heavy ion beams and space radiation effects, the GCR Event-based Risk Model computer code (GERMcode). For radiobiological research on mixed-field space radiation, a new GCR simulator at NSRL is proposed. The NSRL-GCR simulator, which implements the rapid switching mode and the higher energy beam extraction to 1.5 GeV/u, can integrate multiple ions into a single simulation to create GCR Z-spectrum in major energy bins. After considering the GCR environment and energy limitations of NSRL, a GCR reference field is proposed after extensive simulation studies using the GERMcode. The GCR reference field is shown to reproduce the Z and LET spectra of GCR behind shielding within 20% accuracy compared to simulated full GCR environments behind shielding. A major challenge for space radiobiology research is to consider chronic GCR exposure of up to 3-years in relation to simulations with cell and animal models of human risks. We discuss possible approaches to map important biological time scales in experimental models using ground-based simulation with extended exposure of up to a few weeks and fractionation approaches at a GCR simulator.
Revealing the global map of protein folding space by large-scale simulations
NASA Astrophysics Data System (ADS)
Sinner, Claude; Lutz, Benjamin; Verma, Abhinav; Schug, Alexander
2015-12-01
The full characterization of protein folding is a remarkable long-standing challenge both for experiment and simulation. Working towards a complete understanding of this process, one needs to cover the full diversity of existing folds and identify the general principles driving the process. Here, we want to understand and quantify the diversity in folding routes for a large and representative set of protein topologies, covering the full range from all-alpha-helical topologies to beta barrels, guided by the key question: do the majority of the observed routes contribute to the folding process, or only a particular route? We identified a set of two-state folders among non-homologous proteins with a sequence length of 40-120 residues. For each of these proteins, we ran native-structure-based simulations both with homogeneous and heterogeneous contact potentials. For each protein, we simulated dozens of folding transitions in continuous uninterrupted simulations and constructed a large database of kinetic parameters. We investigate folding routes by tracking the formation of tertiary structure interfaces and discuss whether a single specific route exists for a topology or whether all routes are equiprobable. These results permit us to characterize the complete folding space for small proteins in terms of folding barrier ΔG‡, number of routes, and route specificity RT.
NASA Astrophysics Data System (ADS)
Matsuoka, Seikichi; Idomura, Yasuhiro; Satake, Shinsuke
2017-10-01
The neoclassical toroidal viscosity (NTV) caused by a non-axisymmetric magnetic field perturbation is numerically studied using two global kinetic simulations with different numerical approaches. Both simulations reproduce similar collisionality (νb*) dependencies over wide νb* ranges. It is demonstrated that resonant structures in the velocity space predicted by the conventional superbanana-plateau theory exist in the small banana width limit, while the resonances diminish when the banana width becomes large. It is also found that fine-scale structures are generated in the velocity space as νb* decreases in the large banana width simulations, leading to the νb*-dependency of the NTV. From analyses of the particle orbit, it is found that a finite k∥ mode structure along the bounce motion appears owing to the finite orbit width, and it suffers from bounce phase mixing, suggesting the generation of the fine-scale structures by a mechanism similar to the parallel phase mixing of passing particles.
NASA Astrophysics Data System (ADS)
Egron, Sylvain; Lajoie, Charles-Philippe; Michau, Vincent; Bonnefois, Aurélie; Escolle, Clément; Leboulleux, Lucie; N'Diaye, Mamadou; Pueyo, Laurent; Choquet, Elodie; Perrin, Marshall D.; Ygouf, Marie; Fusco, Thierry; Ferrari, Marc; Hugot, Emmanuel; Soummer, Rémi
2017-09-01
The current generation of terrestrial telescopes has large enough primary mirror diameters that active optical control based on wavefront sensing is necessary. Similarly, in space, while the Hubble Space Telescope (HST) has a mostly passive optical design, apart from focus control, its successor the James Webb Space Telescope (JWST) has active control of many degrees of freedom in its primary and secondary mirrors.
Assembly considerations for large reflectors
NASA Technical Reports Server (NTRS)
Bush, H.
1988-01-01
The technologies developed at LaRC in the area of erectable structures are discussed. The information is of direct value to the Large Deployable Reflector (LDR) because an option for the LDR backup structure is to assemble it in space. The efforts in this area, which include development of joints, underwater assembly simulation tests, flight assembly/disassembly tests, and fabrication of 5-meter trusses, led to the use of the LaRC concept as the baseline configuration for the Space Station structure. The Space Station joint is linear in the load and displacement range of interest to the Space Station; the ability to manually assemble and disassemble a 45-foot truss structure was demonstrated by astronauts in space as part of the ACCESS Shuttle flight experiment. The structure was built in 26 minutes 46 seconds, and involved a total of 500 manipulations of untethered hardware. Also, the correlation of the space experience with the neutral buoyancy simulation was very good. Sections of the proposed 5-meter-bay Space Station truss have been built on the ground. Activities at LaRC have included the development of mobile remote manipulator systems (which can traverse the Space Station 5-meter structure), preliminary LDR sun shield concepts, LDR construction scenarios, and activities in robotic assembly of truss-type structures.
A Reference Field for GCR Simulation and an LET-Based Implementation at NSRL
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Walker, Steven A.; Norbury, John W.
2015-01-01
Exposure to galactic cosmic rays (GCR) on long duration deep space missions presents a serious health risk to astronauts, with large uncertainties connected to the biological response. In order to reduce the uncertainties and gain understanding about the basic mechanisms through which space radiation initiates cancer and other endpoints, radiobiology experiments are performed. Some of the accelerator facilities supporting such experiments have matured to a point where simulating the broad range of particles and energies characteristic of the GCR environment in a single experiment is feasible from a technology, usage, and cost perspective. In this work, several aspects of simulating the GCR environment in the laboratory are discussed. First, comparisons are made between direct simulation of the external, free space GCR field and simulation of the induced tissue field behind shielding. It is found that upper energy constraints at the NASA Space Radiation Laboratory (NSRL) limit the ability to simulate the external, free space field directly (i.e. shielding placed in the beam line in front of a biological target and exposed to a free space spectrum). Second, variation in the induced tissue field associated with shielding configuration and solar activity is addressed. It is found that the observed variation is within physical uncertainties, allowing a single reference field for deep space missions to be defined. Third, an approach for simulating the reference field at NSRL is presented. The approach allows for the linear energy transfer (LET) spectrum of the reference field to be approximately represented with discrete ion and energy beams and implicitly maintains a reasonably accurate charge spectrum (or, average quality factor). Drawbacks of the proposed methodology are discussed and weighed against alternative simulation strategies. The neutron component and track structure characteristics of the proposed strategy are discussed in this context.
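Schematically, the LET-based step amounts to partitioning the tissue LET spectrum into bins and assigning one beam per bin so that each bin's weight is reproduced. The bookkeeping looks like this (Python, with made-up numbers; not the NSRL design values):

    import numpy as np

    def beams_for_let_spectrum(let, fluence, edges):
        """Collapse a fluence-vs-LET spectrum onto one beam per LET bin.

        let, fluence : tabulated differential spectrum (LET in keV/um)
        edges        : LET bin edges; each bin is represented by one ion/energy
                       choice whose LET is the fluence-weighted bin mean.
        """
        beams = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (let >= lo) & (let < hi)
            if fluence[sel].sum() > 0.0:
                beams.append({"LET": np.average(let[sel], weights=fluence[sel]),
                              "weight": fluence[sel].sum()})
        return beams

    # Made-up falling spectrum between 0.2 and 200 keV/um, six bins
    let = np.logspace(np.log10(0.2), np.log10(200.0), 400)
    fluence = let ** -1.5
    for b in beams_for_let_spectrum(let, fluence, np.logspace(-0.7, 2.3, 7)):
        print(f"beam LET {b['LET']:8.2f} keV/um, weight {b['weight']:8.2f}")

Matching bin weights preserves the LET spectrum, and hence an approximately correct average quality factor, while the hydrogen and helium components are still handled through their energy spectra as described above.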
Architectural Large Constructed Environment. Modeling and Interaction Using Dynamic Simulations
NASA Astrophysics Data System (ADS)
Fiamma, P.
2011-09-01
How can the simulation derived from a large-size data model be used in architectural design? The topic concerns the phase that usually follows data acquisition: the construction of the model and, especially, the designers' subsequent interaction with the simulation as they develop and verify their ideas. In this case study, the concept of interaction includes the concept of real-time "flows". The work develops content and results that can be part of the broad current debate about the connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which the different specialist actors, the client, and the final users can share knowledge, targets, and constraints to better reach the intended result. The goal was to use a dynamic micro-simulation digital resource that allows all actors to explore the model in a powerful and realistic way and to interact in a new manner with a complex architectural scenario. On the one hand, the work represents a base of knowledge that can be extended further; on the other hand, it represents an attempt to understand the simulation of large constructed architecture as a way of life, a way of being in time and space. The architectural design first, and the architectural fact afterwards, both happen in a sort of "Spatial Analysis System". The way is open to offer this "system" knowledge and theories that can support architectural design work at every application and scale. Architecture is a spatial configuration, and one that can be reconfigured through design.
A method for data handling numerical results in parallel OpenFOAM simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Alin; Muntean, Sebastian
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
Shape determination and control for large space structures
NASA Technical Reports Server (NTRS)
Weeks, C. J.
1981-01-01
An integral operator approach is used to derive solutions to static shape determination and control problems associated with large space structures. Problem assumptions include a linear self-adjoint system model, observations and control forces at discrete points, and performance criteria for the comparison of estimates or control forms. Results are illustrated by simulations in the one-dimensional case with a flexible beam model, and in the multidimensional case with a finite element model of a large space antenna. Modal expansions for terms in the solution algorithms are presented, using modes from the static or the associated dynamic model. These expansions provide approximate solutions in the event that a closed-form analytical solution to the system boundary value problem is not available.
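In the simplest static case the modal expansion reduces to a least-squares fit of mode shapes to discrete sensor readings; a generic sketch follows (Python, with a toy beam; this is not the paper's integral-operator formalism):

    import numpy as np

    def estimate_shape(Phi, y, Phi_query):
        """Least-squares modal shape estimate from discrete measurements.

        Phi       : (n_sensors x n_modes) mode shapes sampled at the sensors
        y         : (n_sensors,) measured deflections
        Phi_query : (n_points x n_modes) mode shapes where the estimate is wanted
        """
        q, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # modal amplitudes
        return Phi_query @ q

    # Toy beam: three sinusoidal modes, six sensors, estimate on a fine grid
    modes = lambda x: np.column_stack([np.sin(n * np.pi * x) for n in (1, 2, 3)])
    xs = np.linspace(0.0, 1.0, 6)                      # sensor locations
    y = modes(xs) @ np.array([1.0, 0.3, -0.1])         # synthetic measurements
    grid = np.linspace(0.0, 1.0, 101)
    print(estimate_shape(modes(xs), y, modes(grid))[:5])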
Fractional Transport in Strongly Turbulent Plasmas.
Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana
2017-07-28
We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Lévy flights, and the energy distributions show extended power-law tails. We then motivate the use of a fractional transport equation (FTE) and derive its specific form, determine its parameters and the order of the fractional derivatives from the simulation data, and show that the FTE is able to reproduce the high-energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or a FTE is appropriate.
Fractional Transport in Strongly Turbulent Plasmas
NASA Astrophysics Data System (ADS)
Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana
2017-07-01
We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Lévy flights, and the energy distributions show extended power-law tails. We then motivate the use of a fractional transport equation (FTE) and derive its specific form, determine its parameters and the order of the fractional derivatives from the simulation data, and show that the FTE is able to reproduce the high-energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or a FTE is appropriate.
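One way to see why a classical FP description fails, in the spirit of the analysis above, is to estimate the scaling exponent of the energy-space spread directly from particle histories; an exponent well above 1 signals superdiffusion (the data below are Lévy-like toy kicks, not the MHD test-particle set):

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy energization: heavy-tailed kicks in energy for 5000 particles
    steps = rng.pareto(1.5, size=(5000, 400))          # power-law kick sizes
    signs = rng.choice([-1.0, 1.0], size=steps.shape)
    E = np.cumsum(steps * signs, axis=1)               # energy displacement history

    t = np.arange(1, E.shape[1] + 1)
    # Use the median absolute spread; the variance of Levy flights diverges.
    spread = np.median(np.abs(E), axis=0)
    gamma = np.polyfit(np.log(t[10:]), np.log(spread[10:] ** 2), 1)[0]
    print(f"effective <dE^2> ~ t^{gamma:.2f}  (1 = classical diffusion)")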
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-01-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310–323. doi: 10.1002/wcms.1220 PMID:26753008
Simulation of Deep Convective Clouds with the Dynamic Reconstruction Turbulence Closure
NASA Astrophysics Data System (ADS)
Shi, X.; Chow, F. K.; Street, R. L.; Bryan, G. H.
2017-12-01
The terra incognita (TI), or gray zone, in simulations is a range of grid spacing comparable to the most energetic eddy diameter. Grid spacing in mesoscale simulations is much larger than the eddies, and turbulence is parameterized with one-dimensional vertical-mixing schemes. Large eddy simulations (LES) have grid spacing much smaller than the energetic eddies and use three-dimensional models of turbulence. Studies of convective weather use convection-permitting resolutions, which are in the TI. Neither mesoscale-turbulence nor LES models are designed for the TI, so TI turbulence parameterization needs to be discussed. Here, the effects of sub-filter scale (SFS) closure schemes on the simulation of deep tropical convection are evaluated by comparing three closures, i.e., the Smagorinsky model, the Deardorff-type TKE model, and the dynamic reconstruction model (DRM), which partitions SFS turbulence into resolvable sub-filter scales (RSFS) and unresolved sub-grid scales (SGS). The RSFS are reconstructed, and the SGS are modeled with a dynamic eddy viscosity/diffusivity model. The RSFS stresses/fluxes allow backscatter of energy/variance via counter-gradient stresses/fluxes. In high-resolution (100 m) simulations of tropical convection, use of these turbulence models did not lead to significant differences in cloud water/ice distribution, precipitation flux, or vertical fluxes of momentum and heat. When model resolutions are coarsened, the Smagorinsky and TKE models overestimate cloud ice and produce large-amplitude downward heat flux in the middle troposphere (not found in the high-resolution simulations). This error is a result of unrealistically large eddy diffusivities: the eddy diffusivity of the DRM is on the order of 1 in the coarse-resolution simulations, while that of the Smagorinsky and TKE models is on the order of 100. Splitting the eddy viscosity/diffusivity scalars into vertical and horizontal components by using different length scales and strain rate components helps to reduce the errors, but does not completely remedy the problem. In contrast, the coarse-resolution simulations using the DRM produce results that are more consistent with the high-resolution results, suggesting that the DRM is a more appropriate turbulence model for simulating convection in the TI.
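For orientation, the first of the closures compared above has a compact algebraic form; the following is a generic sketch of the Smagorinsky eddy viscosity, nu_t = (Cs*delta)^2 |S|, where the constant Cs and filter width delta are illustrative assumptions, not values from this study.

    # Smagorinsky eddy viscosity at one grid point (generic sketch).
    import numpy as np

    def smagorinsky_nu_t(grad_u, delta, cs=0.17):
        s = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor S_ij
        s_mag = np.sqrt(2.0 * np.sum(s * s))   # |S| = sqrt(2 S_ij S_ij)
        return (cs * delta) ** 2 * s_mag       # nu_t = (Cs * delta)^2 |S|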
Subgrid-scale models for large-eddy simulation of rotating turbulent flows
NASA Astrophysics Data System (ADS)
Silvis, Maurits; Trias, Xavier; Abkar, Mahdi; Bae, Hyunji Jane; Lozano-Duran, Adrian; Verstappen, Roel
2016-11-01
This paper discusses subgrid models for large-eddy simulation of anisotropic flows using anisotropic grids. In particular, we are looking into ways to model not only the subgrid dissipation, but also transport processes, since these are expected to play an important role in rotating turbulent flows. We therefore consider subgrid-scale models of the form τ = -2ν_t S + μ_t (SΩ - ΩS), where the eddy viscosity ν_t is given by the minimum-dissipation model, μ_t represents a transport coefficient, S is the symmetric part of the velocity gradient, and Ω the skew-symmetric part. To incorporate the effect of mesh anisotropy, the filter length is taken in such a way that it minimizes the difference between the turbulent stress in physical and computational space, where the physical space is covered by an anisotropic mesh and the computational space is isotropic. The resulting model is successfully tested for rotating homogeneous isotropic turbulence and rotating plane-channel flows. The research was largely carried out during the CTR SP 2016. M.S. and R.V. acknowledge the financial support to attend this Summer Program.
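The stated model form can be evaluated pointwise from the velocity gradient; here is a minimal sketch, assuming ν_t and μ_t are supplied by the closures named above (constant placeholders stand in for the minimum-dissipation model).

    # Evaluate tau = -2*nu_t*S + mu_t*(S@Omega - Omega@S) from a 3x3 velocity
    # gradient tensor grad_u; nu_t and mu_t are placeholders for the model
    # coefficients described in the abstract.
    import numpy as np

    def sgs_stress(grad_u, nu_t, mu_t):
        s = 0.5 * (grad_u + grad_u.T)      # symmetric part S
        omega = 0.5 * (grad_u - grad_u.T)  # skew-symmetric part Omega
        return -2.0 * nu_t * s + mu_t * (s @ omega - omega @ s)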
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1975-01-01
A general simulation program (GSP) is presented involving nonlinear state estimation for space vehicle flight navigation systems. A complete explanation of the iterative guidance mode guidance law, derivation of the dynamics, coordinate frames, and state estimation routines is given so as to fully clarify the assumptions and approximations involved, so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions, as well as explanations of the subroutines used in the GSP simulator, are included. To facilitate input/output, a complete set of compatible numbers, with units, is included to aid in data development. Format specifications, output data phrase meanings and purposes, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to determine the validity of the simulator itself as well as various data runs.
NASA Astrophysics Data System (ADS)
Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson
2017-03-01
Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising avenues for improving climate simulations of water cycle processes.
Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing
Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang
2016-01-01
Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) equipped on near-space platform is more suitable for sustained large-scene imaging compared with the spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode and allows the beam of SAR to scan along the azimuth, can reduce the time of echo acquisition for large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP) is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging application. PMID:27472341
Experimental simulation of space plasma interactions with high voltage solar arrays
NASA Technical Reports Server (NTRS)
Stillwell, R. P.; Kaufman, H. R.; Robinson, R. S.
1981-01-01
Operating high voltage solar arrays in the space environment can result in anomalously large currents being collected through small insulation defects. Tests of simulated defects have been conducted in a 45-cm vacuum chamber with plasma densities of 100,000 to 1,000,000 per cubic centimeter. Plasmas were generated using an argon hollow cathode. The solar array elements were simulated by placing a thin sheet of polyimide (Kapton) insulation with a small hole in it over a conductor. The parameters tested were hole size, adhesive, surface roughening, sample temperature, insulator thickness, and insulator area. These results are discussed along with some preliminary empirical correlations.
A large high vacuum, high pumping speed space simulation chamber for electric propulsion
NASA Technical Reports Server (NTRS)
Grisnik, Stanley P.; Parkes, James E.
1994-01-01
Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower needed for facility operations.
NASA Technical Reports Server (NTRS)
Menon, R. G.; Kurdila, A. J.
1992-01-01
This paper presents a concurrent methodology to simulate the dynamics of flexible multibody systems with a large number of degrees of freedom. A general class of open-loop structures is treated and a redundant coordinate formulation is adopted. A range space method is used in which the constraint forces are calculated using a preconditioned conjugate gradient method. By using a preconditioner motivated by the regular ordering of the directed graph of the structures, it is shown that the method is of order N in the total number of coordinates of the system. The overall formulation has the advantage that it permits fine-grained parallelization and does not rely on system topology to induce concurrency. It can be efficiently implemented on the present generation of parallel computers with a large number of processors. Validation of the method is presented via numerical simulations of space structures incorporating a large number of flexible degrees of freedom.
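For reference, a minimal preconditioned conjugate gradient loop of the kind applied to the constraint-force system; the paper's graph-motivated preconditioner is abstracted as a generic apply_precond callback, which is an assumption for illustration.

    # Solve A x = b with preconditioned conjugate gradients (standard PCG).
    import numpy as np

    def pcg(A, b, apply_precond, tol=1e-10, max_iter=200):
        x = np.zeros_like(b)
        r = b - A @ x                      # initial residual
        z = apply_precond(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = apply_precond(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x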
Qin, Zong; Wang, Kai; Chen, Fei; Luo, Xiaobing; Liu, Sheng
2010-08-02
In this research, the condition for uniform lighting generated by an array of LEDs with a large view angle was studied. The luminous intensity distribution of such an LED is not monotonically decreasing with view angle. An LED with a freeform lens was designed as an example for analysis. In a system based on LEDs designed in house, with a thickness of 20 mm and a rectangular arrangement, the condition for uniform lighting was derived, and the analytical results demonstrated that the uniformity does not decrease monotonically with increasing LED-to-LED spacing. The illuminance uniformities were calculated with Monte Carlo ray tracing simulations, and the uniformity was found, anomalously, to increase with the increase of certain LED-to-LED spacings. Another type of large view angle LED and different arrangements were discussed in addition. Both analysis and simulation results showed that the method is suitable for the design of LED array lighting systems based on large view angle LEDs.
RF Systems in Space. Volume I. Space Antennas Frequency (SARF) Simulation.
1983-04-01
Lens SBR designs were investigated. The survivability of an SBR system was analyzed. The design of ground-based validation experiments for large-aperture SBR concepts was investigated. SBR designs were investigated for ground target detection.
Simulation of MEMS for the Next Generation Space Telescope
NASA Technical Reports Server (NTRS)
Mott, Brent; Kuhn, Jonathan; Broduer, Steve (Technical Monitor)
2001-01-01
The NASA Goddard Space Flight Center (GSFC) is developing optical micro-electromechanical system (MEMS) components for potential application in Next Generation Space Telescope (NGST) science instruments. In this work, we present an overview of the electro-mechanical simulation of three MEMS components for NGST, which include a reflective micro-mirror array and transmissive microshutter array for aperture control for a near infrared (NIR) multi-object spectrometer and a large aperture MEMS Fabry-Perot tunable filter for a NIR wide field camera. In all cases the device must operate at cryogenic temperatures with low power consumption and low, complementary metal oxide semiconductor (CMOS) compatible, voltages. The goal of our simulation efforts is to adequately predict both the performance and the reliability of the devices during ground handling, launch, and operation to prevent failures late in the development process and during flight. This goal requires detailed modeling and validation of complex electro-thermal-mechanical interactions and very large non-linear deformations, often involving surface contact. Various parameters such as spatial dimensions and device response are often difficult to measure reliably at these small scales. In addition, these devices are fabricated from a wide variety of materials including surface micro-machined aluminum, reactive ion etched (RIE) silicon nitride, and deep reactive ion etched (DRIE) bulk single crystal silicon. The above broad set of conditions combine to be a formidable challenge for space flight qualification analysis. These simulations represent NASA/GSFC's first attempts at implementing a comprehensive strategy to address complex MEMS structures.
Simulating Vibrations in a Complex Loaded Structure
NASA Technical Reports Server (NTRS)
Cao, Tim T.
2005-01-01
The Dynamic Response Computation (DIRECT) computer program simulates vibrations induced in a complex structure by applied dynamic loads. Developed to enable rapid analysis of launch- and landing-induced vibrations and stresses in a space shuttle, DIRECT also can be used to analyze dynamic responses of other structures - for example, the response of a building to an earthquake, or the response of an oil-drilling platform and attached tanks to large ocean waves. For a space-shuttle simulation, the required input to DIRECT includes mathematical models of the space shuttle and its payloads, and a set of forcing functions that simulates launch and landing loads. DIRECT can accommodate multiple levels of payload attachment and substructure as well as nonlinear dynamic responses of structural interfaces. DIRECT combines the shuttle and payload models into a single structural model, to which the forcing functions are then applied. The resulting equations of motion are reduced to an optimum set and decoupled into a unique format for simulating dynamics. During the simulation, maximum vibrations, loads, and stresses are monitored and recorded for subsequent analysis to identify structural deficiencies in the shuttle and/or payloads.
Accurate low-cost methods for performance evaluation of cache memory systems
NASA Technical Reports Server (NTRS)
Laha, Subhasis; Patel, Janak H.; Iyer, Ravishankar K.
1988-01-01
Methods of simulation based on statistical techniques are proposed to decrease the need for large trace measurements and for predicting true program behavior. Sampling techniques are applied while the address trace is collected from a workload. This drastically reduces the space and time needed to collect the trace. Simulation techniques are developed to use the sampled data not only to predict the mean miss rate of the cache, but also to provide an empirical estimate of its actual distribution. Finally, a concept of primed cache is introduced to simulate large caches by the sampling-based method.
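A hedged sketch of the sampling idea: simulate a small cache only on sampled intervals of the trace and estimate the miss rate from them. The direct-mapped cache, parameters, and cold-start handling below are illustrative simplifications, not the paper's exact method.

    # Estimate miss rate of a direct-mapped cache from sampled trace intervals.
    def miss_rate_from_samples(trace_samples, n_sets, block_bits=6):
        misses = accesses = 0
        for sample in trace_samples:       # each sample is a contiguous chunk of addresses
            tags = {}                      # cache state is cold at the start of each sample
            for addr in sample:
                block = addr >> block_bits
                idx, tag = block % n_sets, block // n_sets
                accesses += 1
                if tags.get(idx) != tag:   # miss: tag absent or different
                    misses += 1
                    tags[idx] = tag
            # (a real implementation would correct for cold-start bias)
        return misses / accesses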
Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merzari, Elia; Obabko, Aleks; Fischer, Paul
Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.
A space systems perspective of graphics simulation integration
NASA Technical Reports Server (NTRS)
Brown, R.; Gott, C.; Sabionski, G.; Bochsler, D.
1987-01-01
Creation of an interactive display environment can expose issues in system design and operation not apparent from nongraphics development approaches. Large amounts of information can be presented in a short period of time. Processes can be simulated and observed before committing resources. In addition, changes in the economics of computing have enabled broader graphics usage beyond traditional engineering and design into integrated telerobotics and Artificial Intelligence (AI) applications. The highly integrated nature of space operations often leads to reliance upon visually intensive man-machine communication to ensure success. Graphics simulation activities at the Mission Planning and Analysis Division (MPAD) of NASA's Johnson Space Center are focusing on the evaluation of a wide variety of graphical analyses within the context of present and future space operations. Several telerobotics and AI application studies utilizing graphical simulation are described. The presentation includes portions of videotape illustrating technology developments involving: (1) coordinated manned maneuvering unit and remote manipulator system operations, (2) a helmet mounted display system, and (3) an automated rendezvous application utilizing expert system and voice input/output technology.
NASA Technical Reports Server (NTRS)
Scholl, R. E. (Editor)
1979-01-01
Earthquake engineering research capabilities of the National Aeronautics and Space Administration (NASA) facilities at George C. Marshall Space Flight Center (MSFC), Alabama, were evaluated. The results indicate that the NASA/MSFC facilities and supporting capabilities offer unique opportunities for conducting earthquake engineering research. Specific features that are particularly attractive for large scale static and dynamic testing of natural and man-made structures include the following: large physical dimensions of buildings and test bays; high loading capacity; wide range and large number of test equipment and instrumentation devices; multichannel data acquisition and processing systems; technical expertise for conducting large-scale static and dynamic testing; sophisticated techniques for systems dynamics analysis, simulation, and control; and capability for managing large-size and technologically complex programs. Potential uses of the facilities for near and long term test programs to supplement current earthquake research activities are suggested.
Heating of large format filters in sub-mm and FIR space optics
NASA Astrophysics Data System (ADS)
Baccichet, N.; Savini, G.
2017-11-01
Most FIR and sub-mm spaceborne observatories use polymer-based quasi-optical elements like filters and lenses, due to their high transparency and low absorption in such wavelength ranges. Nevertheless, data from those missions have proven that thermal imbalances in the instrument (not caused by filters) can complicate the data analysis. Consequently, for future, higher precision instrumentation, further investigation is required of any thermal imbalances embedded in such polymer-based filters. In particular, this paper studies the heating of polymers when operating at cryogenic temperature in space. Such heating is an important aspect of their functioning, since the transient emission of unwanted thermal radiation may affect the scientific measurements. To assess this effect, a computer model was developed for polypropylene-based filters and PTFE-based coatings. Specifically, a theoretical model of their thermal properties was created and used in a multi-physics simulation that accounts for conductive and radiative heating effects of large optical elements, the geometry of which was suggested by the large format array instruments designed for future space missions. It was found that in the simulated conditions the filter temperature exhibited a time-dependent behaviour, modulated by small-scale fluctuations. Moreover, it was noticed that thermalization was reached only when a low power input was present.
A mobile work station concept for mechanically aided astronaut assembly of large space trusses
NASA Technical Reports Server (NTRS)
Heard, W. L., Jr.; Bush, H. G.; Wallson, R. E.; Jensen, J. K.
1983-01-01
This report presents results of a series of truss assembly tests conducted to evaluate a mobile work station concept intended to mechanically assist astronaut manual assembly of erectable space trusses. The tests involved assembly of a tetrahedral truss beam by a pair of test subjects with and without pressure (space) suits, both in Earth gravity and in simulated zero gravity (neutral buoyancy in water). The beam was assembled from 38 identical graphite-epoxy nestable struts, 5.4 m in length, with aluminum quick-attachment structural joints. Struts and joints were designed to closely simulate flight hardware. The assembled beam was approximately 16.5 m long and 4.5 m on each of the four sides of its diamond-shaped cross section. The results show that average in-space assembly rates of approximately 38 seconds per strut can be expected for struts of comparable size. This result is virtually independent of the overall size of the structure being assembled. The mobile work station concept would improve astronaut efficiency for on-orbit manual assembly of truss structures, and this assembly-line method is highly competitive with other construction methods being considered for large space structures.
Time-Domain Filtering for Spatial Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. David
1997-01-01
An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
Laboratory simulation of space plasma phenomena*
NASA Astrophysics Data System (ADS)
Amatucci, B.; Tejero, E. M.; Ganguli, G.; Blackwell, D.; Enloe, C. L.; Gillman, E.; Walker, D.; Gatling, G.
2017-12-01
Laboratory devices, such as the Naval Research Laboratory's Space Physics Simulation Chamber, are large-scale experiments dedicated to the creation of large-volume plasmas with parameters realistically scaled to those found in various regions of the near-Earth space plasma environment. Such devices make valuable contributions to the understanding of space plasmas by investigating phenomena under carefully controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. By working in collaboration with in situ experimentalists to create realistic conditions scaled to those found during the observations of interest, the microphysics responsible for the observed events can be investigated in detail not possible in space. To date, numerous investigations of phenomena such as plasma waves, wave-particle interactions, and particle energization have been successfully performed in the laboratory. In addition to investigations such as plasma wave and instability studies, the laboratory devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this presentation, we will describe several examples of the laboratory investigation of space plasma waves and instabilities and diagnostic development. *This work supported by the NRL Base Program.
Brownian dynamics simulations on a hypersphere in 4-space
NASA Astrophysics Data System (ADS)
Nissfolk, Jarl; Ekholm, Tobias; Elvingson, Christer
2003-10-01
We describe an algorithm for performing Brownian dynamics simulations of particles diffusing on S3, a hypersphere in four dimensions. The system is chosen due to recent interest in doing computer simulations in a closed space where periodic boundary conditions can be avoided. We specifically address the question of how to generate a random walk on the 3-sphere, starting from the solution of the corresponding diffusion equation, and we also discuss an efficient implementation based on controlled approximations. Since S3 is a closed manifold (space), the average square displacement during a random walk is no longer proportional to the elapsed time, as in R3. Instead, its time rate of change is continuously decreasing, and approaches zero as time becomes large. We show, however, that the effective diffusion coefficient can still be obtained from the time dependence of the square displacement.
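One simple way to realize such a random walk, sketched below under flat-space assumptions: draw a Gaussian step in the tangent space at the current point and move along the geodesic. The step variance 2*D*dt per tangent direction is the naive choice; the paper derives controlled corrections from the diffusion equation itself.

    import numpy as np

    def brownian_step_s3(x, D, dt, rng=np.random.default_rng()):
        # x: unit vector in R^4, i.e., a point on the 3-sphere
        v = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=4)
        v -= (v @ x) * x                   # project the step onto the tangent space at x
        theta = np.linalg.norm(v)          # geodesic step length
        if theta == 0.0:
            return x
        return np.cos(theta) * x + np.sin(theta) * (v / theta)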
Dshell++: A Component Based, Reusable Space System Simulation Framework
NASA Technical Reports Server (NTRS)
Lim, Christopher S.; Jain, Abhinandan
2009-01-01
This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
Tool Support for Parametric Analysis of Large Software Simulation Systems
NASA Technical Reports Server (NTRS)
Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony
2008-01-01
The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
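To make the n-factor idea concrete, the sketch below enumerates, for n = 2, every parameter-value pair that a covering design must hit; a covering-array generator would then pack these pairs into far fewer concrete test cases. The parameter names and values are invented for illustration.

    from itertools import combinations, product

    params = {"mass": [0.8, 1.0, 1.2], "gain": [0.5, 1.0], "delay": [0, 10]}  # illustrative

    pairs_to_cover = [
        ((p1, v1), (p2, v2))
        for p1, p2 in combinations(params, 2)
        for v1, v2 in product(params[p1], params[p2])
    ]
    print(len(pairs_to_cover))  # number of pairwise interactions to cover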
A Novel Multi-scale Simulation Strategy for Turbulent Reacting Flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Sutherland C.
In this project, a new methodology was proposed to bridge the gap between Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES). This novel methodology, titled Lattice-Based Multiscale Simulation (LBMS), creates a lattice structure of One-Dimensional Turbulence (ODT) models. This model has been shown to capture turbulent combustion with high fidelity by fully resolving interactions between turbulence and diffusion. By creating a lattice of ODT models, which are then coupled, LBMS overcomes the shortcoming of ODT, which is its inability to capture large-scale three-dimensional flow structures. By spacing these lattices significantly apart, however, LBMS avoids the curse of dimensionality that creates the untenable computational costs associated with DNS. This project has shown that LBMS is capable of reproducing statistics of isotropic turbulent flows while coarsening the spacing between lines significantly. It also investigates and resolves issues that arise when coupling ODT lines, such as flux reconstruction perpendicular to a given ODT line, preservation of conserved quantities when eddies cross a coarse cell volume, and boundary condition application. Robust parallelization is also investigated.
Large Terrain Modeling and Visualization for Planets
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher
2011-01-01
Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulations is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail approach that delivers multiple frames per second even for planetary-scale terrain models.
A Case Study: Using Delmia at Kennedy Space Center to Support NASA's Constellation Program
NASA Technical Reports Server (NTRS)
Kickbusch, Tracey; Humeniuk, Bob
2010-01-01
The presentation examines the use of Delmia (Digital Enterprise Lean Manufacturing Interactive Application) for digital simulation in NASA's Constellation Program. Topics include an overview of the Kennedy Space Center (KSC) Design Visualization Group tasks, NASA's Constellation Program, the Ares 1 ground processing preliminary design review, and the challenges of how Delmia is used at KSC. Challenges include dealing with large data sets, creating and maintaining KSC's infrastructure, gathering customer requirements and meeting objectives, creating life-like simulations, and providing quick turn-around on varied products.
Space construction base control system
NASA Technical Reports Server (NTRS)
Kaczynski, R. F.
1979-01-01
Several approaches for an attitude control system are studied and developed for a large space construction base that is structurally flexible. Digital simulations were obtained using the following techniques: (1) the multivariable Nyquist array method combined with closed-loop pole allocation, and (2) the linear quadratic regulator method. Equations for the three-axis simulation using the multilevel control method were generated and are presented. Several alternate control approaches are also described. A technique is demonstrated for obtaining the dynamic structural properties of a vehicle which is constructed of two or more submodules of known dynamic characteristics.
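A minimal sketch of the second technique listed above, computing a linear-quadratic-regulator gain for a toy one-mode flexible model; the matrices and weights are illustrative, not the study's vehicle model.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-1.0, -0.02]])  # one lightly damped flexible mode
    B = np.array([[0.0], [1.0]])               # single actuator
    Q = np.diag([10.0, 1.0])                   # state weighting
    R = np.array([[0.1]])                      # control weighting

    P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)            # optimal feedback gain, u = -K x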
NASA Technical Reports Server (NTRS)
D'Souza, Christopher; Milenkovich, Zoran; Wilson, Zachary; Huich, David; Bendle, John; Kibler, Angela
2011-01-01
The Space Operations Simulation Center (SOSC) at the Lockheed Martin (LM) Waterton Campus in Littleton, Colorado is a dynamic test environment focused on Autonomous Rendezvous and Docking (AR&D) development testing and risk reduction activities. The SOSC supports multiple program pursuits and accommodates testing Guidance, Navigation, and Control (GN&C) algorithms for relative navigation, hardware testing and characterization, as well as software and test process development. The SOSC consists of a high bay (60 meters long by 15.2 meters wide by 15.2 meters tall) with dual six degree-of-freedom (6DOF) motion simulators and a single fixed-base 6DOF robot. The large testing area (maximum sensor-to-target effective range of 60 meters) allows for large-scale, flight-like simulations of proximity maneuvers and docking events. The facility also has two apertures for access to external extended-range outdoor target test operations. In addition, the facility contains four Mission Operations Centers (MOCs) with connectivity to dual high bay control rooms and a data/video interface room. The high bay is rated at Class 300,000 cleanliness (maximum particle counts for particles ≥ 0.5 µm per m³) and includes orbital lighting simulation capabilities.
Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Averaged Navier-Stokes Formulations
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Giriamaji, Sharath S.
2008-01-01
Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-ε model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.
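As a rough illustration of a resolution-dependent f_k, the PANS literature suggests a scaling like (grid spacing / turbulence length scale)^(2/3); the prefactor, clipping, and function below are assumptions for illustration, not the paper's two-stage estimation procedure.

    import numpy as np

    def f_k_estimate(delta, k, eps, f_min=0.1):
        length_scale = k ** 1.5 / eps           # turbulence length scale k^(3/2)/epsilon
        fk = (delta / length_scale) ** (2.0 / 3.0)
        return float(np.clip(fk, f_min, 1.0))   # f_k varies between f_min and one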
Numerical simulation of the geodynamo reaches Earth's core dynamical regime
NASA Astrophysics Data System (ADS)
Aubert, J.; Gastine, T.; Fournier, A.
2016-12-01
Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E = 10^-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.
Large-Eddy Simulation of the Base Flow of a Cylindrical Space Vehicle Configuration
NASA Astrophysics Data System (ADS)
Meiß, J.-H.; Schröder, W.
2009-01-01
A Large-Eddy Simulation (LES) is performed to investigate the high Reynolds number base flow of an axisymmetric rocket-like configuration having an underexpanded nozzle flow. The subsonic base region of low pressure levels is characterized and bounded by the interaction of the freestream of Mach 5.3 and the wide plume of the hot exhaust jet of Mach 3.8. An analysis of the base flow shows that the system of base area vortices determines the highly time-dependent pressure distribution and causes an upstream convection of hot exhaust gas. A comparison of the results with experiments conducted at the German Aerospace Center (DLR) Cologne shows good agreement. The investigation is part of the German RESPACE Program, which focuses on Key Technologies for Reusable Space Systems.
Spatial adaptive sampling in multiscale simulation
NASA Astrophysics Data System (ADS)
Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.
2014-07-01
In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
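A minimal sketch of the reuse-or-recompute logic at the heart of adaptive sampling: answer a constitutive query from nearby stored fine-scale results when possible, otherwise run the fine-scale model and store the new point. The distance threshold, nearest-point reuse, and linear scan are illustrative simplifications of the interpolation actually used.

    import numpy as np

    class AdaptiveSampler:
        def __init__(self, fine_model, tol):
            self.fine_model, self.tol = fine_model, tol
            self.inputs, self.outputs = [], []

        def evaluate(self, x):
            x = np.asarray(x, dtype=float)
            for xi, yi in zip(self.inputs, self.outputs):
                if np.linalg.norm(x - xi) < self.tol:
                    return yi                  # reuse a nearby fine-scale result
            y = self.fine_model(x)             # fall back to fine-scale simulation
            self.inputs.append(x)
            self.outputs.append(y)
            return y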
Large-Eddy Simulation of Wind-Plant Aerodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Churchfield, M. J.; Lee, S.; Moriarty, P. J.
In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except for the sixth turbine and beyond in each wind-aligned column of turbines. The power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, but the field observations were taken over a year of varying conditions. The simulation shows the significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.
2-kW Solar Dynamic Space Power System Tested in Lewis' Thermal Vacuum Facility
NASA Technical Reports Server (NTRS)
1995-01-01
Working together, a NASA/industry team successfully operated and tested a complete solar dynamic space power system in a large thermal vacuum facility with a simulated sun. This NASA Lewis Research Center facility, known as Tank 6 in building 301, accurately simulates the temperatures, high vacuum, and solar flux encountered in low-Earth orbit. The solar dynamic space power system, shown in the photo in the Lewis facility, includes the solar concentrator and the solar receiver with thermal energy storage integrated with the power conversion unit. Initial testing in December 1994 resulted in the world's first operation of an integrated solar dynamic system in a relevant environment.
Turbulence and entrainment length scales in large wind farms.
Andersen, Søren J; Sørensen, Jens N; Mikkelsen, Robert F
2017-04-13
A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motion or large coherent structures also yield high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
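Snapshot proper orthogonal decomposition of the kind used above reduces to a singular value decomposition of mean-subtracted flow snapshots; the sketch below uses synthetic data purely to show the mechanics.

    import numpy as np

    snapshots = np.random.default_rng(0).normal(size=(2000, 100))  # (grid points, times)
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)      # remove the temporal mean
    modes, sing_vals, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = sing_vals**2 / np.sum(sing_vals**2)  # energy fraction of each POD mode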
NASA Astrophysics Data System (ADS)
Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.
2009-06-01
Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kähler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kählerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.
Experimental methods for studying microbial survival in extraterrestrial environments.
Olsson-Francis, Karen; Cockell, Charles S
2010-01-01
Microorganisms can be used as model systems for studying biological responses to extraterrestrial conditions; however, the methods for studying their response are extremely challenging. Since the first high altitude microbiological experiment in 1935 a large number of facilities have been developed for short- and long-term microbial exposure experiments. Examples are the BIOPAN facility, used for short-term exposure, and the EXPOSE facility aboard the International Space Station, used for long-term exposure. Furthermore, simulation facilities have been developed to conduct microbiological experiments in the laboratory environment. A large number of microorganisms have been used for exposure experiments; these include pure cultures and microbial communities. Analyses of these experiments have involved both culture-dependent and independent methods. This review highlights and discusses the facilities available for microbiology experiments, both in space and in simulation environments. A description of the microorganisms and the techniques used to analyse survival is included. Finally we discuss the implications of microbiological studies for future missions and for space applications. Copyright 2009 Elsevier B.V. All rights reserved.
Modeling extreme (Carrington-type) space weather events using three-dimensional MHD code simulations
NASA Astrophysics Data System (ADS)
Ngwira, C. M.; Pulkkinen, A. A.; Kuznetsova, M. M.; Glocer, A.
2013-12-01
There is growing concern over possible severe societal consequences related to adverse space weather impacts on man-made technological infrastructure and systems. In the last two decades, significant progress has been made towards the modeling of space weather events. Three-dimensional (3-D) global magnetohydrodynamics (MHD) models have been at the forefront of this transition, and have played a critical role in advancing our understanding of space weather. However, the modeling of extreme space weather events is still a major challenge even for existing global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events that have a ground footprint comparable to (or larger than) that of the Carrington superstorm. Results are presented for an initial simulation run with "very extreme" constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated ground induced geoelectric field to such extreme driving conditions. We also discuss the results and what they might mean for the accuracy of the simulations. The model is further tested using input data for an observed space weather event to verify the MHD model consistency and to draw guidance for future work. This extreme space weather MHD model is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in earth conductors such as power transmission grids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco
Here, we further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc^-1 or k ~ 0.18 h Mpc^-1, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; ...
2018-03-15
Here, we further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc^-1 or k ~ 0.18 h Mpc^-1, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
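For orientation, the counterterms referred to above enter the one-loop redshift-space power spectrum schematically as k^2-suppressed corrections carrying growing powers of mu, the cosine of the angle to the line of sight. A sketch in common EFT-of-LSS notation, not the paper's exact expressions:

    P_{\rm ct}(k,\mu) \sim \left( c_0 + c_2\,\mu^2 + c_4\,\mu^4 \right) \frac{k^2}{k_{\rm NL}^2}\, P_{11}(k)

The free coefficients c_i are matched to simulations; the minimal and nonminimal sets mentioned above correspond to keeping fewer or more of these terms.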
Zerze, Gül H; Miller, Cayla M; Granata, Daniele; Mittal, Jeetain
2015-06-09
Intrinsically disordered proteins (IDPs), which are expected to be largely unstructured under physiological conditions, make up a large fraction of eukaryotic proteins. Molecular dynamics simulations have been utilized to probe structural characteristics of these proteins, which are not always easily accessible to experiments. However, exploration of the conformational space by brute force molecular dynamics simulations is often limited by short time scales. Present literature provides a number of enhanced sampling methods to explore protein conformational space in molecular simulations more efficiently. In this work, we present a comparison of two enhanced sampling methods: temperature replica exchange molecular dynamics and bias exchange metadynamics. By investigating both the free energy landscape as a function of pertinent order parameters and the per-residue secondary structures of an IDP, namely, human islet amyloid polypeptide, we found that the two methods yield similar results as expected. We also highlight the practical difference between the two methods by describing the path that we followed to obtain both sets of data.
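As a concrete illustration of the first method, the swap move in temperature replica exchange reduces to a Metropolis test on the replica energies. A minimal Python sketch with toy numbers, not the peptide system studied above:

    # Metropolis criterion for swapping configurations between two replicas
    # at inverse temperatures beta_i and beta_j with energies E_i and E_j.
    import math
    import random

    def swap_accepted(E_i, E_j, beta_i, beta_j):
        delta = (beta_i - beta_j) * (E_i - E_j)
        return delta >= 0 or random.random() < math.exp(delta)

    # A cold replica stuck at high energy swaps readily with a hot replica
    # that happens to sit at low energy:
    print(swap_accepted(E_i=10.0, E_j=2.0, beta_i=1.0, beta_j=0.5))  # True

Bias-exchange metadynamics replaces the temperature ladder with exchanges between replicas biased along different collective variables, but the acceptance test has the same Metropolis structure.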
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Angulo, Raul E.
2016-01-01
N-body simulations are essential for understanding the formation and evolution of structure in the Universe. However, the discrete nature of these simulations affects their accuracy when modelling collisionless systems. We introduce a new approach to simulate the gravitational evolution of cold collisionless fluids by solving the Vlasov-Poisson equations in terms of adaptively refinable 'Lagrangian phase-space elements'. These geometrical elements are piecewise smooth maps between Lagrangian space and Eulerian phase-space and approximate the continuum structure of the distribution function. They allow for dynamical adaptive splitting to accurately follow the evolution even in regions of very strong mixing. We discuss in detail various one-, two- and three-dimensional test problems to demonstrate the performance of our method. Its advantages compared to N-body algorithms are: (I) explicit tracking of the fine-grained distribution function, (II) natural representation of caustics, (III) intrinsically smooth gravitational potential fields, thus (IV) eliminating the need for any type of ad hoc force softening. We show the potential of our method by simulating structure formation in a warm dark matter scenario. We discuss how spurious collisionality and large-scale discreteness noise of N-body methods are both strongly suppressed, which eliminates the artificial fragmentation of filaments. Therefore, we argue that our new approach improves on the N-body method when simulating self-gravitating cold and collisionless fluids, and is the first method that allows us to explicitly follow the fine-grained evolution in six-dimensional phase-space.
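For reference, the Vlasov-Poisson system that these phase-space elements discretize reads, in standard notation (a sketch; the cosmological version adds comoving coordinates and subtracts the mean density, omitted here):

    \frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f - \nabla_{\mathbf{x}}\phi\cdot\nabla_{\mathbf{v}} f = 0, \qquad \nabla^{2}\phi = 4\pi G \int f\,\mathrm{d}^{3}v

where f(x, v, t) is the fine-grained distribution function whose explicit tracking is advantage (I) above.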
NASA Technical Reports Server (NTRS)
1979-01-01
At Valley Forge, Pennsylvania, General Electric Company's Space Division has a large environmental chamber for simulating the conditions under which an orbiting spacecraft operates. Normally it is used to test company-built space systems, such as NASA's Landsat and Nimbus satellites. It is also being used in a novel spinoff application-restoring water-damaged books and other paper products and textiles.
IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.
Ha, Vi Q; Lykotrafitis, George
2016-12-08
We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object-oriented, easy-to-use, high-performance C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially in cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space, while a network facilitates long-range particle interactions. Message Passing Interface is used for inter-processor communication for all simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kostal, Hubert; Kreysar, Douglas; Rykowski, Ronald
2009-08-01
The color and luminance distributions of large light sources are difficult to measure because of the size of the source and the physical space required for the measurement. We describe a method for the measurement of large light sources in a limited space that efficiently overcomes the physical limitations of traditional far-field measurement techniques. This method uses a calibrated, high dynamic range imaging colorimeter and a goniometric system to move the light source through an automated measurement sequence in the imaging colorimeter's field-of-view. The measurement is performed from within the near-field of the light source, enabling a compact measurement set-up. This method generates a detailed near-field color and luminance distribution model that can be directly converted to ray sets for optical design and that can be extrapolated to far-field distributions for illumination design. The measurements obtained show excellent correlation to traditional imaging colorimeter and photogoniometer measurement methods. The near-field goniometer approach that we describe is broadly applicable to general lighting systems, can be deployed in a compact laboratory space, and provides full near-field data for optical design and simulation.
On the apparent insignificance of the randomness of flexible joints on large space truss dynamics
NASA Technical Reports Server (NTRS)
Koch, R. M.; Klosner, J. M.
1993-01-01
Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors, structural imperfections, as well as nonlinear and/or nonconservative joint behavior. In addition, the necessity of precise pointing and position capability can require the consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes the use of a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply-supported periodic large space trusses is considered. The formulation is a hybrid approach in terms of an extended Timoshenko beam continuum model, Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.
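The Monte Carlo ingredient can be seen in a toy single-mode version: sample a random joint stiffness, push each sample through the modal frequency, and collect the response statistics. A Python sketch with assumed numbers (the mass, nominal stiffness, and scatter are illustrative, not the paper's truss model):

    # Toy Monte Carlo for a randomly stiff joint feeding a modal frequency.
    import numpy as np

    rng = np.random.default_rng(0)
    m = 10.0          # modal mass, kg (assumed)
    k_nom = 4.0e4     # nominal joint stiffness, N/m (assumed)
    sigma = 0.10      # ~10% stiffness scatter (assumed)

    k = rng.lognormal(mean=np.log(k_nom), sigma=sigma, size=100_000)
    omega = np.sqrt(k / m)                    # modal frequency, rad/s

    print("mean frequency:", omega.mean())
    print("mean-square:", (omega ** 2).mean())
    print("relative scatter:", omega.std() / omega.mean())   # roughly sigma/2

Because omega scales as the square root of k, a 10% stiffness scatter yields only about 5% frequency scatter in this toy, which is consistent in spirit with the insignificance result quoted above.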
3DView: Space physics data visualizer
NASA Astrophysics Data System (ADS)
Génot, V.; Beigbeder, L.; Popescu, D.; Dufourg, N.; Gangloff, M.; Bouchemit, M.; Caussarieu, S.; Toniutti, J.-P.; Durand, J.; Modolo, R.; André, N.; Cecconi, B.; Jacquey, C.; Pitout, F.; Rouillard, A.; Pinto, R.; Erard, S.; Jourdane, N.; Leclercq, L.; Hess, S.; Khodachenko, M.; Al-Ubaidi, T.; Scherf, M.; Budnik, E.
2018-04-01
3DView creates visualizations of space physics data in their original 3D context. Time series, vectors, dynamic spectra, celestial body maps, magnetic field or flow lines, and 2D cuts in simulation cubes are among the variety of data representation enabled by 3DView. It offers direct connections to several large databases and uses VO standards; it also allows the user to upload data. 3DView's versatility covers a wide range of space physics contexts.
NASA Astrophysics Data System (ADS)
Straus, D. M.
2006-12-01
The transitions between portions of the state space of the large-scale flow are studied from daily wintertime data over the Pacific-North America region using the NCEP reanalysis data set (54 winters) and very large suites of hindcasts made with the COLA atmospheric GCM with observed SST (55 members for each of 18 winters). The partition of the large-scale state space is guided by cluster analysis, whose statistical significance and relationship to SST is reviewed (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). The global nature of the flow through state space is determined using Markov chains (Crommelin, 2004). In particular, the non-diffusive part of the flow is contrasted between nature (small data sample) and the AGCM (large data sample). The intrinsic error growth associated with different portions of the state space is studied through sets of identical-twin AGCM simulations. The goal is to obtain realistic estimates of predictability times for large-scale transitions that should be useful in long-range forecasting.
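The Markov-chain step is generic enough to sketch: estimate a transition matrix from the daily sequence of cluster labels, then examine its non-reversible part. A Python sketch with a random stand-in label series, not the NCEP or COLA records:

    import numpy as np

    def transition_matrix(labels, n_states):
        counts = np.zeros((n_states, n_states))
        for a, b in zip(labels[:-1], labels[1:]):
            counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    labels = np.random.default_rng(1).integers(0, 4, size=5000)  # stand-in
    P = transition_matrix(labels, 4)

    # The antisymmetric part of the probability flux picks out the
    # non-diffusive (preferred-direction) transitions discussed above.
    pi = np.linalg.matrix_power(P, 200)[0]        # approximate stationary law
    flux = pi[:, None] * P
    nondiffusive = 0.5 * (flux - flux.T)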
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
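In outline, the CB-FS recipe can be mimicked with generic library stand-ins: cluster the sampled conformations into putative states, then let a supervised model rank the input features that best separate those states. A rough Python sketch on synthetic data (scikit-learn is used here for illustration only; it is not the authors' Markov-state-model pipeline):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 20))       # frames x features (e.g., distances)
    X[:1000, 3] += 2.5                    # feature 3 separates two "states"

    states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, states)
    ranking = np.argsort(clf.feature_importances_)[::-1]
    print("most discriminative features:", ranking[:3])   # feature 3 leads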
PLATSIM: A Simulation and Analysis Package for Large-Order Flexible Systems. Version 2.0
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Kenny, Sean P.; Giesy, Daniel P.
1997-01-01
The software package PLATSIM provides efficient time and frequency domain analysis of large-order generic space platforms. PLATSIM can perform open-loop analysis or closed-loop analysis with linear or nonlinear control system models. PLATSIM exploits the particular form of sparsity of the plant matrices for very efficient linear and nonlinear time domain analysis, as well as frequency domain analysis. A new algorithm for the efficient computation of open-loop and closed-loop frequency response functions for large-order systems has been developed and is implemented within the package. Furthermore, a novel jitter analysis routine, which determines jitter and stability values from time simulations in a very efficient manner, has been developed and is incorporated in the PLATSIM package. In the time domain analysis, PLATSIM simulates the response of the space platform to disturbances and calculates the jitter and stability values from the response time histories. In the frequency domain analysis, PLATSIM calculates frequency response function matrices and provides the corresponding Bode plots. The PLATSIM software package is written in MATLAB script language. A graphical user interface is developed in the package to provide convenient access to its various features.
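To see what is being accelerated, the baseline open-loop frequency response of a state-space model (A, B, C) costs one sparse linear solve per frequency point. A Python sketch with random stand-in matrices; this is the brute-force computation, not PLATSIM's specialized algorithm:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 2000                                  # plant order (stand-in)
    A = sp.random(n, n, density=1e-3, format="csc", random_state=0) \
        - 10 * sp.eye(n, format="csc")        # sparse, stable stand-in plant
    B = np.zeros(n); B[0] = 1.0               # single actuator
    C = np.zeros(n); C[-1] = 1.0              # single sensor

    I = sp.eye(n, format="csc")
    freqs = np.logspace(-1, 2, 50)            # rad/s
    H = [C @ spla.spsolve(1j * w * I - A, B) for w in freqs]
    mag_db = 20 * np.log10(np.abs(H))         # Bode magnitude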
Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony
2009-01-01
Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space, and complex, highly coupled nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. The data generated is automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours are used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.
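The n-factor idea for n = 2 (all-pairs coverage) can be sketched as a greedy filter over the full factorial: keep a candidate case only when it covers at least one not-yet-covered parameter pair. The parameter names and levels below are hypothetical, and this is not the generation tool used in the study:

    from itertools import combinations, product

    params = {
        "thrust_bias": [-1, 0, 1],
        "sensor_noise": ["low", "high"],
        "wind": ["calm", "gusty"],
        "mass_error": [-0.05, 0.0, 0.05],
    }
    names = list(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(params[a], params[b])}

    cases = []
    for values in product(*params.values()):
        row = dict(zip(names, values))
        pairs = {((a, row[a]), (b, row[b])) for a, b in combinations(names, 2)}
        if pairs & uncovered:            # contributes new pair coverage
            cases.append(row)
            uncovered -= pairs
    print(len(cases), "cases cover all pairs vs.", 3 * 2 * 2 * 3, "exhaustive")

The greedy pass is not minimal, but it preserves the key property: every pairwise interaction appears in at least one retained case.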
To Create Space on Earth: The Space Environment Simulation Laboratory and Project Apollo
NASA Technical Reports Server (NTRS)
Walters, Lori C.
2003-01-01
Few undertakings in the history of humanity can compare to the great technological achievement known as Project Apollo. Among those who witnessed Armstrong's flickering television image were thousands of people who had directly contributed to this historic moment. Amongst those in this vast anonymous cadre were the personnel of the Space Environment Simulation Laboratory (SESL) at the Manned Spacecraft Center (MSC) in Houston, Texas. SESL houses two large thermal-vacuum chambers with solar simulation capabilities. At a time when NASA engineers had a limited understanding of the effects of extremes of space on hardware and crews, SESL was designed to literally create the conditions of space on Earth. With interior dimensions of 90 feet in height and a 55-foot diameter, Chamber A dwarfed the Apollo command/service module (CSM) it was constructed to test. The chamber's vacuum pumping capacity of 1 x 10^-6 torr can simulate an altitude greater than 130 miles above the Earth. A "lunar plane" capable of rotating a 150,000-pound test vehicle 180 deg replicates the revolution of a craft in space. To reproduce the temperature extremes of space, interior chamber walls cool to -280 F as two banks of carbon arc modules simulate the unfiltered solar light/heat of the Sun. With capabilities similar to those of Chamber A, early Chamber B tests included the Gemini modular maneuvering unit, the Apollo EVA mobility unit, and the lunar module. Since Gemini astronaut Charles Bassett first ventured into the chamber in 1966, Chamber B has assisted astronauts in testing hardware and preparing them for work in the harsh extremes of space.
Dynamic Analysis of a Two Member Manipulator Arm
NASA Technical Reports Server (NTRS)
McGinley, Mark; Shen, Ji Y.
1997-01-01
Attenuating start-up and stopping vibrations when maneuvering large payloads attached to flexible manipulator systems is a great concern for many space missions. To address this concern, it was proposed that the use of smart materials, and their applications in smart structures, may provide an effective method of control for aerospace structures. In this paper, a modified finite element model has been developed to simulate the performance of piezoelectric ceramic actuators, and was applied to a flexible two-arm manipulator system. Connected to a control voltage, the piezoelectric actuators produce control moments based on the optimal control theory. The computer simulation modeled the end-effector vibration suppression of the NASA manipulator testbed for berthing operations of the Space Shuttle to the Space Station. The results of the simulation show that the bonded piezoelectric actuators can effectively suppress follow-up vibrations of the end-effector, stimulated by some external disturbance.
Development of a Korean Lunar Simulant(KLS-1) and its Possible Further Recommendations
NASA Astrophysics Data System (ADS)
Chang, I.; Ryn, B. H.; Cho, G. C.
2014-12-01
The rapid development of space exploration has led to the finding, according to NASA's recent studies, that water exists on the Moon. This is a turning point in lunar science and surface development, because the existence of water raises the possibility of human survival on the Moon. Advanced space construction technology suited to the distinctive lunar environment (i.e., lack of atmosphere, reduced gravity, different geology) therefore becomes a key issue for the consistent and reliable settlement of human beings. Thus, an understanding of the lunar surface and its composition plays an important role in lunar development. During project Apollo (1961~1972), only 320 kg of real lunar soil was collected and returned to Earth. Due to the lack of samples, many space agencies are attempting to simulate lunar soil using Earth materials for use in large-scale practical studies and simulations. In the same vein, we developed a Korean lunar simulant from a specific basalt type of the Cenozoic Erathem in Korea. The simulated regolith sample shows a high similarity to the average Apollo samples in mineral composition, density, and particle shape. Therefore, the developed regolith simulant is expected to be used for various lunar exploration purposes.
Discharge transient coupling in large space power systems
NASA Technical Reports Server (NTRS)
Stevens, N. John; Stillwell, R. P.
1990-01-01
Experiments have shown that plasma environments can induce discharges in solar arrays. These plasmas simulate the environments found in low earth orbits where current plans call for operation of very large power systems. The discharges could be large enough to couple into the power system and possibly disrupt operations. Here, the general concepts of the discharge mechanism and the techniques of coupling are discussed. Data from both ground and flight experiments are reviewed to obtain an expected basis for the interactions. These concepts were applied to the Space Station solar array and distribution system as an example of the large space power system. The effect of discharges was found to be a function of the discharge site. For most sites in the array discharges would not seriously impact performance. One location at the negative end of the array was identified as a position where discharges could couple to charge stored in system capacitors. This latter case could impact performance.
UCLA IGPP Space Plasma Simulation Group
NASA Technical Reports Server (NTRS)
1998-01-01
During the past 10 years the UCLA IGPP Space Plasma Simulation Group has pursued its theoretical effort to develop a Mission Oriented Theory (MOT) for the International Solar Terrestrial Physics (ISTP) program. This effort has been based on a combination of approaches: analytical theory, large scale kinetic (LSK) calculations, global magnetohydrodynamic (MHD) simulations and self-consistent plasma kinetic (SCK) simulations. These models have been used to formulate a global interpretation of local measurements made by the ISTP spacecraft. The regions of applications of the MOT cover most of the magnetosphere: the solar wind, the low- and high-latitude magnetospheric boundary, the near-Earth and distant magnetotail, and the auroral region. Most recent investigations include: plasma processes in the electron foreshock, response of the magnetospheric cusp, particle entry in the magnetosphere, sources of observed distribution functions in the magnetotail, transport of oxygen ions, self-consistent evolution of the magnetotail, substorm studies, effects of explosive reconnection, and auroral acceleration simulations.
StePS: Stereographically Projected Cosmological Simulations
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-05-01
StePS (Stereographically Projected Cosmological Simulations) compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to simulate the evolution of the large-scale structure. This eliminates the need for periodic boundary conditions, which are a numerical convenience unsupported by observation and which modify the law of force on large scales in an unrealistic fashion. StePS uses stereographic projection for space compactification and a naive O(N^2) force calculation; this arrives at a correlation function of the same quality more quickly than standard (tree or P3M) algorithms with similar spatial and mass resolution. The N^2 force calculation is easy to adapt to modern graphics cards, hence StePS can function as a high-speed prediction tool for modern large-scale surveys.
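The O(N^2) kernel at the heart of this approach is short enough to state directly; a vectorized Python sketch (the softening length is an illustrative assumption, and a production code would run this kernel on the GPU):

    import numpy as np

    def direct_accelerations(pos, mass, G=1.0, eps=1e-2):
        # Brute-force all-pairs gravitational accelerations.
        d = pos[None, :, :] - pos[:, None, :]        # (N, N, 3) separations
        r2 = (d ** 2).sum(-1) + eps ** 2             # softened distances
        np.fill_diagonal(r2, np.inf)                 # remove self-force
        return G * (d * (mass / r2 ** 1.5)[..., None]).sum(axis=1)

    pos = np.random.default_rng(2).uniform(-1.0, 1.0, size=(512, 3))
    acc = direct_accelerations(pos, mass=np.ones(512) / 512)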
Estimating free-body modal parameters from tests of a constrained structure
NASA Technical Reports Server (NTRS)
Cooley, Victor M.
1993-01-01
Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.
Large space structures testing
NASA Technical Reports Server (NTRS)
Waites, Henry; Worley, H. Eugene
1987-01-01
There is considerable interest in the development of testing concepts and facilities that accurately simulate the pathologies believed to exist in future spacecraft. Both the Government and industry have participated in the development of facilities over the past several years. The progress and problems associated with the development of the Large Space Structure Test Facility at the Marshall Space Flight Center are presented. This facility has been in existence for a number of years, and its utilization has run the gamut from total in-house involvement and third-party contractor testing to the mutual participation of other government agencies in joint endeavors.
Dynamic analysis of space structures including elastic, multibody, and control behavior
NASA Technical Reports Server (NTRS)
Pinson, Larry; Soosaar, Keto
1989-01-01
The problem is to develop analysis methods, modeling strategies, and simulation tools to predict with assurance the on-orbit performance and integrity of large complex space structures that cannot be verified on the ground. The problem must incorporate large reliable structural models, multi-body flexible dynamics, multi-tier controller interaction, environmental models including 1g and atmosphere, various on-board disturbances, and linkage to mission-level performance codes. All areas are in serious need of work, but the weakest link is multi-body flexible dynamics.
Laboratory development and testing of spacecraft diagnostics
NASA Astrophysics Data System (ADS)
Amatucci, William; Tejero, Erik; Blackwell, Dave; Walker, Dave; Gatling, George; Enloe, Lon; Gillman, Eric
2017-10-01
The Naval Research Laboratory's Space Chamber experiment is a large-scale laboratory device dedicated to the creation of large-volume plasmas with parameters scaled to realistic space plasmas. Such devices make valuable contributions to the investigation of space plasma phenomena under controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. However, in addition to investigations such as plasma wave and instability studies, such devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this talk, we will describe how the laboratory simulation of space plasmas made this development path possible. Work sponsored by the US Naval Research Laboratory Base Program.
Center for Plasma Edge Simulation (CPES) -- Rutgers University Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, Manish
2014-03-06
The CPES scientific simulations run at scale on leadership class machines, collaborate at runtime and produce and exchange large data sizes, which present multiple I/O and data management challenges. During the CPES project, the Rutgers team worked with the rest of the CPES team to address these challenges at different levels, and specifically (1) at the data transport and communication level through the DART (Decoupled and Asynchronous Remote Data Transfers) framework, and (2) at the data management and services level through the DataSpaces and ActiveSpaces frameworks. These frameworks and their impact are briefly described.
Simulation of Space Charge Dynamic in Polyethylene Under DC Continuous Electrical Stress
NASA Astrophysics Data System (ADS)
Boukhari, Hamed; Rogti, Fatiha
2016-10-01
The space charge dynamic plays a very important role in the aging and breakdown of polymeric insulation materials under high voltage. This is due to the intensification of the local electric field and the attendant chemical-mechanical effects in the vicinity of the trapped charge. In this paper, we have investigated the space charge dynamic in low-density polyethylene under high direct-current voltage, evaluated under experimental conditions. The evaluation is based on simulation using a bipolar charge transport model consisting of charge injection, transport, trapping, detrapping, and recombination phenomena. The theoretical formulation of the physical problem is based on the Poisson, continuity, and transport equations. Numerical results provide temporal and local distributions of the electric field; the space charge density for the different kinds of charges (net charge density, mobile and trapped electron densities, mobile hole density); conduction and displacement current densities; and the external current. The results show the appearance of a negative packet-like space charge with a large amount in the bulk under a dc electric field of 100 kV/mm, and the induced distortion of the electric field is largest near the anode, about 39% higher than the initially applied electric field.
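Schematically, the bipolar model couples Poisson's equation to a drift-transport and continuity equation for each carrier species a (mobile and trapped electrons and holes); a sketch in standard notation, not the paper's exact formulation:

    \nabla\cdot(\varepsilon \mathbf{E}) = \rho, \qquad \frac{\partial n_a}{\partial t} + \nabla\cdot(\mu_a n_a \mathbf{E}) = s_a

where rho is the net charge density summed over all species and the source terms s_a collect the injection, trapping, detrapping, and recombination rates listed above (mu_a = 0 for trapped species).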
NASA Technical Reports Server (NTRS)
Moiseev, Alexander A.; Ormes, Jonathan F.; Hartman, Robert C.; Johnson, Thomas E.; Mitchell, John W.; Thompson, David J.
1999-01-01
Beam test and simulation results are presented for a study of the backsplash effects produced in a high-energy gamma-ray detector containing a massive calorimeter. An empirical formula is developed to estimate the probability (per unit area) of backsplash for different calorimeter materials and thicknesses, different incident particle energies, and at different distances from the calorimeter. The results obtained are applied to the design of the Anti-Coincidence Detector (ACD) for the Large Area Telescope (LAT) on the Gamma-ray Large Area Space Telescope (GLAST).
NASA Astrophysics Data System (ADS)
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain
2015-08-01
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. We focus on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
A solar simulator-pumped gas laser for the direct conversion of solar energy
NASA Technical Reports Server (NTRS)
Weaver, W. R.; Lee, J. H.
1981-01-01
Most proposed space power systems comprise three general stages: the collection of the solar radiation, the conversion to a useful form, and the transmission to a receiver. The solar-pumped laser, however, effectively eliminates the middle stage and offers direct photon-to-photon conversion. The laser is especially suited for space-to-space power transmission and communication because of minimal beam spread, low power loss over large distances, and extreme energy densities. A description is presented of the first gas laser pumped by a solar simulator that is scalable to high power levels. The lasant is the iodide C3F7I, which as a laser-fusion driver has produced terawatt peak power levels.
On the Fringe Field of Wide Angle LC Optical Phased Array
NASA Technical Reports Server (NTRS)
Wang, Xighua; Wang, Bin; Bos, Philip J.; Anderson, James E.; Pouch, John; Miranda, Felix; McManamon, Paul F.
2004-01-01
For free space laser communication, lightweight large deployable optics is a critical component for the transmitter. However, such an optical element will introduce large aberrations because the surface figure of the large optics is susceptible to deformation in the space environment. We propose to use a high-resolution liquid crystal spatial light modulator to correct for wavefront aberrations introduced by the primary optical element, and to achieve very fine beam steering and shaping at the same time. A 2-D optical phased array (OPA) antenna based on a Liquid Crystal on Silicon (LCOS) spatial light modulator is described. This device offers a combination of low cost, high resolution, high accuracy, and high diffraction efficiency at video speed. To quantitatively understand the influence of the different design parameters, a computer simulation of the device is given by the 2-D director simulation and the Finite Difference Time Domain (FDTD) simulation. For the 1-D OPA, we define the maximum steering angle by a grating period of 8 pixels per reset; for steering angles larger than this criterion, the diffraction efficiency drops dramatically. In this case, a diffraction efficiency of 0.86 and a Strehl ratio of 0.9 are obtained in the simulation. The performance of the device in achieving high resolution wavefront correction and beam steering is also characterized experimentally.
DNS of Flows over Periodic Hills using a Discontinuous-Galerkin Spectral-Element Method
NASA Technical Reports Server (NTRS)
Diosady, Laslo T.; Murman, Scott M.
2014-01-01
Direct numerical simulation (DNS) of turbulent compressible flows is performed using a higher-order space-time discontinuous-Galerkin finite-element method. The numerical scheme is validated by performing DNS of the evolution of the Taylor-Green vortex and turbulent flow in a channel. The higher-order method is shown to provide increased accuracy relative to low-order methods at a given number of degrees of freedom. The turbulent flow over a periodic array of hills in a channel is simulated at Reynolds number 10,595 using an 8th-order scheme in space and a 4th-order scheme in time. These results are validated against previous large eddy simulation (LES) results. A preliminary analysis provides insight into how these detailed simulations can be used to improve Reynolds-averaged Navier-Stokes (RANS) modeling.
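The Taylor-Green vortex used for validation starts from a standard analytic initial condition, quoted here in its common form on a 2*pi-periodic box (the paper's normalization may differ):

    u = U_0 \sin x \cos y \cos z, \qquad v = -U_0 \cos x \sin y \cos z, \qquad w = 0

whose subsequent breakdown into small-scale turbulence exercises both the accuracy and the stability of the discretization.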
Design and testing of a magnetic suspension and damping system for a space telescope
NASA Technical Reports Server (NTRS)
Ockman, N. J.
1972-01-01
The basic equations of motion are derived for a two dimensional, three degree of freedom simulation of a space telescope coupled to a spacecraft by means of a magnetic suspension and isolation system. The system consists of paramagnetic or ferromagnetic discs confined to the magnetic field between two Helmholtz coils. Damping is introduced by varying the magnetic field in proportion to a velocity signal derived from the telescope. The equations of motion are nonlinear, similar in behavior to the one-dimensional Van der Pol equation. The computer simulation was verified by testing a 264-kilogram air bearing platform which simulates the telescope in a frictionless environment. The simulation demonstrated effective isolation capabilities for disturbance frequencies above resonance. Damping in the system improved the response near resonance and prevented the build-up of large oscillatory amplitudes.
A Multi-agent Simulation Tool for Micro-scale Contagion Spread Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, Daniel B
2016-01-01
Within the disaster preparedness and emergency response community, there is interest in how contagions spread person-to-person at large gatherings and if mitigation strategies can be employed to reduce new infections. A contagion spread simulation module was developed for the Incident Management Preparedness and Coordination Toolkit that allows a user to see how a geographically accurate layout of the gathering space helps or hinders the spread of a contagion. The results can inform mitigation strategies based on changing the physical layout of an event space. A case study was conducted for a particular event to calibrate the underlying simulation model. This paper presents implementation details of the simulation code that incorporates agent movement and disease propagation. Elements of the case study are presented to show how the tool can be used.
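The core loop such a module implements is agent movement followed by proximity-based transmission. A toy Python sketch; the contact radius, transmission probability, and venue size are illustrative assumptions, not the calibrated values from the case study:

    import numpy as np

    rng = np.random.default_rng(3)
    n, steps = 500, 200
    pos = rng.uniform(0.0, 100.0, size=(n, 2))    # meters within the venue
    infected = np.zeros(n, dtype=bool)
    infected[:5] = True                           # seed cases
    R, p = 1.5, 0.02                              # contact radius (m), per-step prob

    for _ in range(steps):
        pos += rng.normal(scale=0.5, size=pos.shape)       # random-walk movement
        pos = pos.clip(0.0, 100.0)                         # stay inside the venue
        d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
        contacts = ((d2 < R * R) & infected[None, :]).sum(axis=1) \
                   - infected.astype(int)                  # exclude self-counts
        newly = (~infected) & (rng.random(n) < 1 - (1 - p) ** contacts)
        infected |= newly
    print(int(infected.sum()), "infected after", steps, "steps")

A geographically accurate layout enters through the movement step: walls, corridors, and exits constrain where agents can go and hence who comes within the contact radius.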
Exploring theory space with Monte Carlo reweighting
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
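The reweighting step itself is compact: each fully simulated event x receives a weight proportional to the ratio of the target-model and generation-model densities at x, after which any observable is a weighted average. A one-dimensional Python toy with Gaussian stand-ins for the matrix-element ratio, not an actual LHC workflow:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    x = rng.normal(loc=0.0, scale=1.0, size=100_000)  # events from benchmark model

    # Reweight to a shifted, broader target model.
    w = norm.pdf(x, loc=0.5, scale=1.2) / norm.pdf(x, loc=0.0, scale=1.0)
    w /= w.mean()                                     # preserve normalization

    print("reweighted mean:", np.average(x, weights=w))   # approaches 0.5

The weights grow in regions the benchmark model undersamples, so in practice the benchmark must cover the target's support for the reweighted sample to remain statistically useful.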
Visualizing Structure and Dynamics of Disaccharide Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, J. F.; Beckham, G. T.; Himmel, M. E.
2012-01-01
We examine the effect of several solvent models on the conformational properties and dynamics of disaccharides such as cellobiose and lactose. Significant variations in timescale for large-scale conformational transformations are observed. Molecular dynamics simulation provides enough detail to enable insight through visualization of multidimensional data sets. We present a new way to visualize conformational space for disaccharides with Ramachandran plots.
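The visualization itself is a two-dimensional density over the glycosidic torsions. A Python sketch with synthetic angles standing in for trajectory data:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(7)
    # Two hypothetical conformational basins in (phi, psi).
    phi = np.concatenate([rng.normal(-80, 15, 5000), rng.normal(60, 10, 2000)])
    psi = np.concatenate([rng.normal(-120, 20, 5000), rng.normal(-60, 15, 2000)])

    plt.hist2d(phi, psi, bins=120, range=[[-180, 180], [-180, 180]])
    plt.xlabel("phi (deg)"); plt.ylabel("psi (deg)")
    plt.colorbar(label="counts")
    plt.savefig("ramachandran.png", dpi=150)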
Robert E. Keane; Rachel A. Loehman; Lisa M. Holsinger
2011-01-01
Fire management faces important emergent issues in the coming years such as climate change, fire exclusion impacts, and wildland-urban development, so new, innovative means are needed to address these challenges. Field studies, while preferable and reliable, will be problematic because of the large time and space scales involved. Therefore, landscape simulation...
Systematic simulations of modified gravity: chameleon models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brax, Philippe; Davis, Anne-Christine; Li, Baojiu
2013-04-01
In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc^-1, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.
Nonlinear relaxation algorithms for circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, R.A.
Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer-run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.
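The waveform-relaxation idea behind these methods shows up clearly on a toy pair of coupled subcircuits: repeatedly integrate each subsystem over the whole time window while holding the other's most recent waveform fixed. A Gauss-Seidel sketch on two scalar ODEs (illustrative stand-ins, not the ITA or WRN algorithms described above):

    import numpy as np

    # x' = -x + 0.1*y,  y' = -y + 0.1*x,  x(0) = 1, y(0) = 0
    t = np.linspace(0.0, 5.0, 501)
    dt = t[1] - t[0]
    x = np.ones_like(t)      # initial waveform guesses over the full window
    y = np.zeros_like(t)

    for sweep in range(10):                 # relaxation sweeps
        for k in range(len(t) - 1):         # subsystem 1, y frozen
            x[k + 1] = x[k] + dt * (-x[k] + 0.1 * y[k])
        for k in range(len(t) - 1):         # subsystem 2, updated x
            y[k + 1] = y[k] + dt * (-y[k] + 0.1 * x[k])

    print(x[-1], y[-1])   # approaches the coupled solution as sweeps increase

Because the coupling (0.1) is weak, few sweeps suffice; inactive subcircuits can be skipped entirely, which is the latency these methods exploit.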
User modeling techniques for enhanced usability of OPSMODEL operations simulation software
NASA Technical Reports Server (NTRS)
Davis, William T.
1991-01-01
The PC-based OPSMODEL operations software for modeling and simulation of space station crew activities supports engineering and cost analyses and operations planning. Using top-down modeling, the level of detail required in the data base can be limited to be commensurate with the results required of any particular analysis. To perform a simulation, a resource environment consisting of locations, crew definition, equipment, and consumables is first defined. Activities to be simulated are then defined as operations and scheduled as desired. These operations are defined within a 1000-level priority structure. A simulation in OPSMODEL, then, consists of the following: user-defined, user-scheduled operations executing within an environment of user-defined resource and priority constraints. Techniques for prioritizing operations to realistically model a representative daily scenario of on-orbit space station crew activities are discussed. The large number of priority levels allows priorities to be assigned commensurate with the detail necessary for a given simulation. Several techniques for realistic modeling of day-to-day work carryover are also addressed.
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamic scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we will show a number of simulations done for large-scale 3D systems using the physical mass ratio for Hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, Multi-scale simulations of plasma with iPIC3D, Mathematics and Computers in Simulation, Available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
Automation and Robotics for Space-Based Systems, 1991
NASA Technical Reports Server (NTRS)
Williams, Robert L., II (Editor)
1992-01-01
The purpose of this in-house workshop was to assess the state-of-the-art of automation and robotics for space operations from an LaRC perspective and to identify areas of opportunity for future research. Over half of the presentations came from the Automation Technology Branch, covering telerobotic control, extravehicular activity (EVA) and intra-vehicular activity (IVA) robotics, hand controllers for teleoperation, sensors, neural networks, and automated structural assembly, all applied to space missions. Other talks covered the Remote Manipulator System (RMS) active damping augmentation, space crane work, modeling, simulation, and control of large, flexible space manipulators, and virtual passive controller designs for space robots.
The development and testing of the Lens Antenna Deployment Demonstration (LADD) test article
NASA Technical Reports Server (NTRS)
Pugh, Mark L.; Denton, Robert J., Jr.; Strange, Timothy J.
1993-01-01
The USAF Rome Laboratory and NASA Marshall Space Flight Center, through contract to Grumman Corporation, have developed a space-qualifiable test article for the Strategic Defense Initiative Organization to demonstrate the critical structural and mechanical elements of single-axis roll-out membrane deployment for Space Based Radar (SBR) applications. The Lens Antenna Deployment Demonstration (LADD) test article, originally designed as a shuttle-attached flight experiment, is a large precision space structure which is representative of operational designs for space-fed lens antennas. Although the flight experiment was cancelled due to funding constraints and major revisions in the Strategic Defense System (SDS) architecture, development of this test article was completed in June 1989. To take full advantage of the existence of this unique structure, a series of ground tests are proposed which include static, dynamic, and thermal measurements in a simulated space environment. An equally important objective of these tests is the verification of the analytical tools used to design and develop large precision space structures.
Deflagrations, Detonations, and the Deflagration-to-Detonation Transition in Methane-Air Mixtures
2011-04-27
We attempt to answer the question: given a large enough volume of a flammable mixture of NG and air, can a weak spark ignition develop into a detonation? Large-scale numerical simulations, in conjunction with experimental work conducted at the National Institute for Occupational Safety and Health, address flame acceleration and DDT in channels with obstacles, as well as DDT in large spaces.
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
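The Monte Carlo side of such a comparison is easy to sketch for a steady simplification: sample the uncertain Manning roughness and push each sample through Manning's equation for velocity. A Python sketch with illustrative channel numbers, not the paper's unsteady Saint-Venant test case:

    import numpy as np

    rng = np.random.default_rng(5)
    n = rng.normal(loc=0.030, scale=0.003, size=100_000)   # Manning roughness
    Rh, S0 = 1.2, 0.001            # hydraulic radius (m), bed slope (assumed)

    V = (1.0 / n) * Rh ** (2.0 / 3.0) * S0 ** 0.5          # velocity, m/s (SI)
    print("ensemble mean velocity:", V.mean(), "m/s")
    print("ensemble variance:", V.var())

The FPE route replaces the hundred thousand samples with a single solve for the evolving probability density, which is where the reported time savings come from.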
A mechanical adapter for installing mission equipment on large space structures
NASA Technical Reports Server (NTRS)
Lefever, A. E.; Totah, R. S.
1980-01-01
A mechanical attachment adapter was designed, constructed, and tested. The adapter was included in a simulation program that investigated techniques for assembling erectable structures under simulated zero-g conditions by pressure-suited subjects in a simulated EVA mode. The adapter was utilized as an interface attachment between a simulated equipment module and one node point of a tetrahedral structural cell. The mating performance of the adapter, a self-energized mechanism, was easily and quickly demonstrated and required little effort on the part of the test subjects.
NASA Technical Reports Server (NTRS)
Howe, Christina L.; Weller, Robert A.; Reed, Robert A.; Sierawski, Brian D.; Marshall, Paul W.; Marshall, Cheryl J.; Mendenhall, Marcus H.; Schrimpf, Ronald D.
2007-01-01
The proton induced charge deposition in a well characterized silicon P-i-N focal plane array is analyzed with Monte Carlo based simulations. These simulations include all physical processes, together with pile-up, to accurately describe the experimental data. Simulation results reveal important high energy events not easily detected through experiment due to low statistics. The effects of each physical mechanism on the device response are shown for a single proton energy as well as a full proton space flux.
NASA Astrophysics Data System (ADS)
Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.
2017-11-01
The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radiotelescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of this size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is now being developed in Russia in the framework of the "SPECTR - R" program. The external dimensions of the telescope exceed the size of the existing thermal-vacuum chambers used to verify the SRT reflecting-surface accuracy under the action of space environment factors. That is why numerical simulation becomes the basis for accepting the adopted designs. Such modeling should rest on experimental characterization of the basic structural materials and elements of the future reflector. In this article, computational modeling of the reflecting-surface deviations of a large deployable centimeter-band space reflector during orbital operation is considered. The factors that determine the deviations, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space), are analyzed. A finite-element model and a complex of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors, and that account for deviation correction by the spacecraft orientation system. Modeling results are presented for two operating modes of the SRT (orientation relative to the Sun).
Modeling, simulation, and concept design for hybrid-electric medium-size military trucks
NASA Astrophysics Data System (ADS)
Rizzoni, Giorgio; Josephson, John R.; Soliman, Ahmed; Hubert, Christopher; Cantemir, Codrin-Gruie; Dembski, Nicholas; Pisu, Pierluigi; Mikesell, David; Serrao, Lorenzo; Russell, James; Carroll, Mark
2005-05-01
A large-scale design space exploration can provide valuable insight into vehicle design tradeoffs being considered for the U.S. Army's FMTV (Family of Medium Tactical Vehicles). Through a grant from TACOM (Tank-automotive and Armaments Command), researchers have generated detailed road, surface, and grade conditions representative of the performance criteria of this medium-sized truck and constructed a virtual powertrain simulator for both conventional and hybrid variants. The simulator incorporates the latest technology among vehicle design options, including scalable ultracapacitor and NiMH battery packs as well as a variety of generator and traction motor configurations. An energy management control strategy has also been developed to provide efficiency and performance. A design space exploration for the family of vehicles involves running a large number of simulations with systematically varied vehicle design parameters, where each variant is paced through several different mission profiles and multiple attributes of performance are measured. The resulting designs are filtered to remove dominated designs, exposing the multi-criterial surface of optimality (Pareto optimal designs), and revealing the design tradeoffs as they impact vehicle performance and economy. The results are not yet definitive because ride and drivability measures were not included, and work is not finished on fine-tuning the modeled dynamics of some powertrain components. However, the work so far completed demonstrates the effectiveness of the approach to design space exploration, and the results to date suggest the powertrain configuration best suited to the FMTV mission.
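The dominance filter mentioned above is a small, generic computation; a Python sketch assuming every objective is oriented so that larger is better (the objective names in the comment are hypothetical):

    import numpy as np

    def pareto_mask(scores):
        # True where no other design dominates (all objectives maximized).
        keep = np.ones(len(scores), dtype=bool)
        for i in range(len(scores)):
            dominated = np.all(scores >= scores[i], axis=1) \
                        & np.any(scores > scores[i], axis=1)
            if dominated.any():
                keep[i] = False
        return keep

    # e.g., columns: fuel economy, acceleration, range
    designs = np.random.default_rng(6).random((200, 3))
    front = designs[pareto_mask(designs)]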
Environmentally-induced voltage limitations in large space power systems
NASA Technical Reports Server (NTRS)
Stevens, N. J.
1984-01-01
Large power systems proposed for future space missions imply higher operating voltage requirements which, in turn, will interact with the space plasma environment. The effects of these interactions can only be inferred because of the limited data base of ground simulations, small test samples, and two space flight experiments. This report evaluates floating potentials for a 100 kW power system operating at 300, 500, 750, and 1000 volts in relation to this data base. Of primary concern is the possibility of discharging to space. The implications of such discharges were studied at the 500 volt operational setting. It was found that discharging can shut down the power system if the discharge current exceeds the array short circuit current. Otherwise, a power oscillation can result that ranges from 2 to 20 percent, depending upon the solar array area involved in the discharge. Means of reducing the effect are discussed.
Geant4 hadronic physics for space radiation environment.
Ivantchenko, Anton V; Ivanchenko, Vladimir N; Molina, Jose-Manuel Quesada; Incerti, Sebastien L
2012-01-01
To test and to develop Geant4 (Geometry And Tracking version 4) Monte Carlo hadronic models with a focus on applications in a space radiation environment. The Monte Carlo simulations have been performed using the Geant4 toolkit. Binary (BIC), its extension for incident light ions (BIC-ion), and Bertini (BERT) cascades were used as the main Monte Carlo generators. For comparison purposes, some other models were tested too. The hadronic testing suite has been used as a primary tool for model development and validation against experimental data. The Geant4 pre-compound (PRECO) and de-excitation (DEE) models were revised and improved. Proton, neutron, pion, and ion nuclear interactions were simulated with the recent version of Geant4 9.4 and were compared with experimental data from thin and thick target experiments. The Geant4 toolkit offers a large set of models allowing effective simulation of interactions of particles with matter. We have tested different Monte Carlo generators with our hadronic testing suite and accordingly we can propose an optimal configuration of Geant4 models for the simulation of the space radiation environment.
Hurdles to Overcome to Model Carrington Class Events
NASA Astrophysics Data System (ADS)
Engel, M.; Henderson, M. G.; Jordanova, V. K.; Morley, S.
2017-12-01
Large geomagnetic storms pose a threat to both space and ground based infrastructure. In order to help mitigate that threat, a better understanding of the specifics of these storms is required. Various computer models are being used around the world to analyze the magnetospheric environment; however, they are largely inadequate for analyzing large and extreme storm-time environments. Here we report on the first steps towards expanding and robustifying the RAM-SCB inner magnetospheric model, used in conjunction with BATS-R-US and the Space Weather Modeling Framework, in order to simulate storms with Dst > -400 nT. These results will then be used to help expand our modelling capabilities towards including Carrington-class events.
NASA Astrophysics Data System (ADS)
van Stratum, Bart J. H.; Stevens, Bjorn
2015-06-01
The influence of poorly resolved mixing processes in the nocturnal boundary layer (NBL) on the development of the convective boundary layer the following day is studied using large-eddy simulation (LES). Guided by measurement data from meteorological sites in Cabauw (Netherlands) and Hamburg (Germany), the typical summertime NBL conditions for Western Europe are characterized and used to design idealized (absence of moisture and large-scale forcings) numerical experiments of the diel cycle. Using the UCLA-LES code with a traditional Smagorinsky-Lilly subgrid model and a simplified land-surface scheme, a sensitivity study to grid spacing is performed. At horizontal grid spacings ranging from 3.125 m, at which we are capable of resolving most turbulence in the cases of interest, to 100 m, which is clearly insufficient to resolve the NBL, the ability of LES to represent the NBL and the influence of NBL biases on the subsequent daytime development of the convective boundary layer are examined. Although the low-resolution experiments produce substantial biases in the NBL, the influence on daytime convection is shown to be small, with biases in the afternoon boundary-layer depth and temperature of approximately 100 m and 0.5 K, which partially cancel each other in terms of the mixed-layer-top relative humidity.
Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations
NASA Astrophysics Data System (ADS)
Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto
2018-04-01
Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two coaxial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 × 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = -0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e., LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential of over-damped LES for fast explorations of the parameter space where large-scale structures are found.
Large-scale Long-term Particle Simulations of Runaway Electrons in Tokamaks
NASA Astrophysics Data System (ADS)
Liu, Jian; Qin, Hong; Wang, Yulei
2016-10-01
Understanding the dynamical behavior of runaway electrons is crucial for assessing the safety of tokamaks. Though many important analytical and numerical results have been achieved, the overall dynamic behavior of runaway electrons in a realistic tokamak configuration is still rather vague. In this work, secular full-orbit simulations of runaway electrons are carried out based on a relativistic volume-preserving algorithm. Detailed phase-space behaviors of runaway electrons are investigated on different timescales spanning 11 orders of magnitude. A detailed analysis of the collisionless neoclassical scattering is provided, considering the coupling between the rotation of the momentum vector and the background field. On large timescales, the initial condition of runaway electrons in phase space globally influences the runaway distribution. It is discovered that the parameters and field configuration of a tokamak can modify the runaway electron dynamics significantly. Simulations on 10 million cores of a supercomputer using the APT code have been completed. A resolution of 10^7 in phase space is used, and simulations are performed for 10^11 time steps. Large-scale simulations show that in a realistic fusion reactor, the concern of runaway electrons is not as serious as previously thought. This research was supported by the National Magnetic Confinement Fusion Energy Research Project (2015GB111003, 2014GB124005), the National Natural Science Foundation of China (NSFC-11575185, 11575186) and the GeoAlgorithmic Plasma Simulator (GAPS) Project.
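The abstract names the integrator only as "relativistic" and "volume-preserving". The standard relativistic Boris push is one well-known scheme in that family; the sketch below is an illustrative stand-in, not the APT code, and all names and units are our choices.

```python
import numpy as np

def boris_push(x, u, E, B, q_m, dt, c=299792458.0):
    """One step of the relativistic Boris scheme, a standard volume-preserving
    particle pusher. u is momentum per unit mass, u = gamma * v; q_m is the
    charge-to-mass ratio; E and B are field vectors at the particle position."""
    u_minus = u + 0.5 * q_m * dt * E                 # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
    t = 0.5 * q_m * dt * B / gamma                   # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)          # pure rotation, volume-preserving
    u_new = u_plus + 0.5 * q_m * dt * E              # second half electric kick
    gamma_new = np.sqrt(1.0 + np.dot(u_new, u_new) / c**2)
    return x + dt * u_new / gamma_new, u_new

# electron (q/m = -1.7588e11 C/kg) gyrating in a 5 T field, toy values
x, u = boris_push(np.zeros(3), np.array([1e8, 0.0, 0.0]),
                  E=np.zeros(3), B=np.array([0.0, 0.0, 5.0]),
                  q_m=-1.7588e11, dt=1e-12)
```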
Doll, J.; Dupuis, P.; Nyquist, P.
2017-02-08
Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures and, at a given swap rate, exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity, and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous-time jump Markov processes. Using the large deviations rate function and associated stochastic control problems, we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
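For orientation, here is a minimal Python sketch of the swap move that infinite swapping pushes to its limit: ordinary two-temperature replica exchange on a toy double-well potential. The potential, step size, and swap schedule are our assumptions, not the paper's.

```python
import math, random

def metropolis_step(x, beta, potential, step=0.5):
    """One Metropolis move at inverse temperature beta."""
    y = x + random.uniform(-step, step)
    dU = potential(y) - potential(x)
    if dU <= 0.0 or random.random() < math.exp(-beta * dU):
        return y
    return x

def swap_accepted(beta_lo, beta_hi, e_lo, e_hi):
    """Replica-exchange criterion: min(1, exp((beta_lo - beta_hi)*(e_lo - e_hi)))."""
    arg = (beta_lo - beta_hi) * (e_lo - e_hi)
    return arg >= 0.0 or random.random() < math.exp(arg)

def parallel_tempering(potential, betas=(1.0, 0.2), n_steps=10000, swap_every=10):
    xs = [0.0 for _ in betas]
    for step in range(n_steps):
        xs = [metropolis_step(x, b, potential) for x, b in zip(xs, betas)]
        if step % swap_every == 0:   # infinite swapping is the swap_every -> 0 limit
            e0, e1 = potential(xs[0]), potential(xs[1])
            if swap_accepted(betas[0], betas[1], e0, e1):
                xs[0], xs[1] = xs[1], xs[0]
    return xs

# double-well potential: the hot replica helps the cold one cross the barrier
samples = parallel_tempering(lambda x: (x**2 - 1.0)**2)
```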
An integrate-over-temperature approach for enhanced sampling.
Gao, Yi Qin
2008-02-14
A simple method is introduced to achieve an efficient random walk in energy space in molecular dynamics simulations, which enhances sampling over a large energy range. The approach is closely related to multicanonical and replica exchange simulation methods in that it allows configurations of the system to be sampled over a wide energy range by making use of Boltzmann distribution functions at multiple temperatures. A biased potential is quickly generated using this method and is then used in accelerated molecular dynamics simulations.
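As this integrate-over-temperature approach is commonly written, the biased potential is built from a sum of Boltzmann factors over a temperature ladder, U_eff = -(1/β₀) ln Σ_k n_k exp(-β_k U). A sketch under that assumption follows; the ladder β_k and weights n_k are placeholders, and the real method tunes the weights iteratively.

```python
import numpy as np

def its_effective_potential(U, betas, weights, beta0):
    """U_eff = -(1/beta0) * ln( sum_k n_k exp(-beta_k U) ), via log-sum-exp."""
    a = -betas * U + np.log(weights)
    m = a.max()
    return -(m + np.log(np.exp(a - m).sum())) / beta0

def its_force_scale(U, betas, weights, beta0):
    """Biased force = (this factor) * unbiased force, from -dU_eff/dx."""
    a = -betas * U + np.log(weights)
    w = np.exp(a - a.max())
    return (betas * w).sum() / (beta0 * w.sum())

betas = 1.0 / np.linspace(0.8, 2.5, 8)   # placeholder temperature ladder
weights = np.ones_like(betas)            # placeholder weights n_k
print(its_effective_potential(3.0, betas, weights, beta0=betas[0]),
      its_force_scale(3.0, betas, weights, beta0=betas[0]))
```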
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; ...
2015-05-27
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of an Al-9.8 wt.% Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
Astronauts Sullivan and Leestma perform in-space simulation of refueling
1984-10-14
S84-43432 (11 Oct. 1984) --- Appearing small in the center background of this image, astronauts Kathryn D. Sullivan, left, and David C. Leestma, both 41-G mission specialists, perform an in-space simulation of refueling another spacecraft in orbit. Their station on the space shuttle Challenger is the orbital refueling system (ORS), positioned on the mission peculiar equipment support structure (MPESS). The Large Format Camera (LFC) is left of the two mission specialists. In the left foreground is the antenna for the shuttle imaging radar (SIR-B) system onboard. The Canadian-built remote manipulator system (RMS) is positioned to allow close-up recording capability of the busy scene. A 50mm lens on a 70mm camera was used to photograph this scene. Photo credit: NASA
Initial results from the Solar Dynamic (SD) Ground Test Demonstration (GTD) project at NASA Lewis
NASA Technical Reports Server (NTRS)
Shaltens, Richard K.; Boyle, Robert V.
1995-01-01
A government/industry team designed, built, and tested a 2 kWe solar dynamic space power system in a large thermal/vacuum facility with a simulated sun at the NASA Lewis Research Center. The Lewis facility provides an accurate simulation of the temperatures, high vacuum, and solar flux encountered in low Earth orbit. This paper reviews the goals and status of the Solar Dynamic (SD) Ground Test Demonstration (GTD) program and describes the initial testing, including both operational and performance data. This SD technology has potential as a future power source for the International Space Station Alpha.
NASA Technical Reports Server (NTRS)
Buggele, A. E.
1973-01-01
The complex problem of why large space simulation chambers do not realize their true ultimate vacuum was investigated. Some contaminating factors affecting diffusion pump performance have been identified, and some advances in vacuum/distillation/fractionation technology have been achieved which resulted in ultimate pressures two decades (two orders of magnitude) or more lower. Data are presented to show the overall and individual contaminating effects of commonly used phthalate ester plasticizers of 390 to 530 molecular weight on diffusion pump performance. Methods for removing contaminants from diffusion pump silicone oil during operation and for reclaiming contaminated oil by high-vacuum molecular distillation are described.
Development of life sciences equipment for microgravity and hypergravity simulation
NASA Technical Reports Server (NTRS)
Mulenburg, G. M.; Evans, J.; Vasques, M.; Gundo, D. P.; Griffith, J. B.; Harper, J.; Skundberg, T.
1994-01-01
The mission of the Life Science Division at the NASA Ames Research Center is to investigate the effects of gravity on living systems in the spectrum from cells to humans. The range of these investigations is from microgravity, as experienced in space, to Earth's gravity, and hypergravity. Exposure to microgravity causes many physiological changes in humans and other mammals including a headward shift of body fluids, atrophy of muscles - especially the large muscles of the legs - and changes in bone and mineral metabolism. The high cost and limited opportunity for research experiments in space create a need to perform ground based simulation experiments on Earth. Models that simulate microgravity are used to help identify and quantify these changes, to investigate the mechanisms causing these changes and, in some cases, to develop countermeasures.
Antarctic analogs as a testbed for regenerative life support technologies
NASA Technical Reports Server (NTRS)
Roberts, D. R.; Andersen, D. T.; Mckay, C. P.; Wharton, R. A., Jr.; Rummel, J. D.
1991-01-01
The feasibility of using Antarctica as a platform for creating Earth-based simulations of regenerative life support systems (LSSs) for future space missions is discussed. The requirements for a bioregenerative LSS and the types of technologies that may be used in such a system are examined. Special attention is given to the objectives and organization of NASA's CELSS program for the development of regenerative LSSs to support long-duration human missions in space, largely independent of resupply, in a safe and reliable manner. There are two types of locations on the continent of Antarctica suitable for the placement of simulation facilities: the polar plateau and the ice-free dry valleys. The unique attributes that lend each type of location to very different functions as simulation facilities are discussed.
Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.
Dastmalchi, Pouya; Veronis, Georgios
2013-12-30
We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
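A toy rendering of the space-mapping loop just described, with cheap stand-ins for both models: in the paper the fine model is an FDFD solve and the coarse model is a transmission-line circuit, whereas here both are simple analytic response curves, and the parameter values and the identity-matrix update are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins: both models map two design parameters to a response curve;
# the "fine" model is just the coarse model with a hidden parameter shift.
def coarse_model(x, f):
    return x[0] / (1.0 + (f / x[1])**2)

def fine_model(x, f):
    return coarse_model(x + np.array([0.05, -0.1]), f)

freqs = np.linspace(0.5, 2.0, 40)
target = 0.4 / (1.0 + (freqs / 1.2)**2)          # desired specification

# Step 1: optimize the cheap coarse model against the specification.
obj_c = lambda x: np.sum((coarse_model(x, freqs) - target)**2)
x_c_star = minimize(obj_c, np.array([0.5, 1.0])).x

# Step 2: space-mapping iteration. Parameter extraction finds the coarse
# parameters reproducing the current fine response; the design is then
# corrected so the extracted parameters approach x_c_star.
x_f = x_c_star.copy()
for _ in range(5):
    r_fine = fine_model(x_f, freqs)              # one "expensive" simulation
    extract = lambda p: np.sum((coarse_model(p, freqs) - r_fine)**2)
    p_ext = minimize(extract, x_f).x             # parameter extraction
    x_f = x_f + (x_c_star - p_ext)               # first-order update (B = I)
```

Only a handful of fine-model evaluations are needed, which is the source of the large computation-time savings the abstract reports.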
Space environment simulation and sensor calibration facility
NASA Astrophysics Data System (ADS)
Engelhart, Daniel P.; Patton, James; Plis, Elena; Cooper, Russell; Hoffmann, Ryan; Ferguson, Dale; Hilmer, Robert V.; McGarity, John; Holeman, Ernest
2018-02-01
The Mumbo space environment simulation chamber discussed here comprises a set of tools to calibrate a variety of low flux, low energy electron and ion detectors used in satellite-mounted particle sensors. The chamber features electron and ion beam sources, a Lyman-alpha ultraviolet lamp, a gimbal table sensor mounting system, cryogenic sample mount and chamber shroud, and beam characterization hardware and software. The design of the electron and ion sources presented here offers a number of unique capabilities for space weather sensor calibration. Both sources create particle beams with narrow, well-characterized energetic and angular distributions with beam diameters that are larger than most space sensor apertures. The electron and ion sources can produce consistently low fluxes that are representative of quiescent space conditions. The particle beams are characterized by 2D beam mapping with several co-located pinhole aperture electron multipliers to capture relative variation in beam intensity and a large aperture Faraday cup to measure absolute current density.
2000 Numerical Propulsion System Simulation Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Greg; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac
2001-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 1999 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2000 by the High Performance Computing and Communications Program.
2001 Numerical Propulsion System Simulation Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Gregory; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac
2002-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 2000 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2001 by the High Performance Computing and Communications Program.
Large-Eddy Simulation of Internal Flow through Human Vocal Folds
NASA Astrophysics Data System (ADS)
Lasota, Martin; Šidlof, Petr
2018-06-01
The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are generated. For accuracy it is essential to compute on a humanlike computational domain with an appropriate mathematical model. The work deals with numerical simulation of air flow within the space between the plicae vocales and the plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain as well. The paper presents results from OpenFOAM obtained with large-eddy simulation using a second-order finite volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. Only subgrid-scale models that represent turbulence via a turbulent viscosity and the Boussinesq approximation are used in the subglottal and supraglottal areas of the larynx.
NASA Astrophysics Data System (ADS)
Jones, Scott B.; Or, Dani
1999-04-01
Plants grown in porous media are part of a bioregenerative life support system designed for long-duration space missions. Reduced gravity conditions of orbiting spacecraft (microgravity) alter several aspects of liquid flow and distribution within partially saturated porous media. The objectives of this study were to evaluate the suitability of conventional capillary flow theory in simulating water distribution in porous media measured in a microgravity environment. Data from experiments aboard the Russian space station Mir and a U.S. space shuttle were simulated by elimination of the gravitational term from the Richards equation. Qualitative comparisons with media hydraulic parameters measured on Earth suggest narrower pore size distributions and inactive or nonparticipating large pores in microgravity. Evidence of accentuated hysteresis, altered soil-water characteristic, and reduced unsaturated hydraulic conductivity from microgravity simulations may be attributable to a number of proposed secondary mechanisms. These are likely spawned by enhanced and modified paths of interfacial flows and an altered force ratio of capillary to body forces in microgravity.
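For reference, the capillary flow theory in question is conventionally expressed by the Richards equation; eliminating the gravitational term as the abstract describes leaves only the capillary-driven part (our transcription in standard symbols, not the authors' exact notation):

```latex
\underbrace{\frac{\partial \theta}{\partial t}
  = \nabla \cdot \bigl[ K(h)\,\nabla h \bigr] + \frac{\partial K(h)}{\partial z}}_{\text{Earth gravity}}
\;\;\longrightarrow\;\;
\underbrace{\frac{\partial \theta}{\partial t}
  = \nabla \cdot \bigl[ K(h)\,\nabla h \bigr]}_{\text{microgravity}}
```

where θ is the volumetric water content, h the matric head, K(h) the unsaturated hydraulic conductivity, and z the vertical coordinate along which gravity acts.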
NASA Astrophysics Data System (ADS)
Chu, Zhongyi; Di, Jingnan; Cui, Jing
2017-10-01
Space debris occupies valuable orbital resources and poses an inevitable and urgent problem, especially large space debris, because of its high risk and the possible crippling effects of a collision; it has accordingly attracted much attention in recent years. A tethered system used in an active debris removal scenario is a promising method to de-orbit large debris in a safe manner. In a tethered system, the flexibility of the tether used in debris removal can induce tangling, which is dangerous and should be avoided. In particular, attachment point bias due to capture error can significantly affect the motion of the debris relative to the tether and increase the tangling risk. Hence, in this paper, the effect of attachment point bias on the tethered system is studied using a dynamic model established with a Newtonian approach. Next, a safety metric for avoiding a tangle when the tether is tensioned with attachment point bias is designed to analyse the tangling risk of the tethered system. Finally, several numerical cases are established and simulated to validate the effects of attachment point bias on a space tethered system.
Numerical Simulations of Homogeneous Turbulence Using Lagrangian-Averaged Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Mohseni, Kamran; Shkoller, Steve; Kosovic, Branko; Marsden, Jerrold E.; Carati, Daniele; Wray, Alan; Rogallo, Robert
2000-01-01
The Lagrangian-averaged Navier-Stokes (LANS) equations are numerically evaluated as a turbulence closure. They are derived from a novel Lagrangian averaging procedure on the space of all volume-preserving maps and can be viewed as a numerical algorithm which removes the energy content from the small scales (smaller than some a priori fixed spatial scale alpha) using a dispersive rather than dissipative mechanism, thus maintaining the crucial features of the large scale flow. We examine the modeling capabilities of the LANS equations for decaying homogeneous turbulence, ascertain their ability to track the energy spectrum of fully resolved direct numerical simulations (DNS), compare the relative energy decay rates, and compare LANS with well-accepted large eddy simulation (LES) models.
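As the incompressible LANS-α system is commonly written (our transcription; the abstract itself does not display the equations), the filtered velocity u transports a momentum-like variable v:

```latex
\partial_t v + (u\cdot\nabla)\,v + v_j\,\nabla u^{j} = -\nabla p + \nu\,\Delta v,
\qquad \nabla\cdot u = 0,
\qquad v = (1 - \alpha^{2}\Delta)\,u
```

Here α is the fixed spatial scale mentioned above: motions smaller than α are swept along dispersively rather than being dissipated, which is the mechanism contrasted with eddy-viscosity LES closures.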
NASA Technical Reports Server (NTRS)
Chen, Chien-C.; Hui, Elliot; Okamoto, Garret
1992-01-01
Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.
Atomic displacements in the charge ice pyrochlore Bi2Ti2O6O' studied by neutron total scattering
NASA Astrophysics Data System (ADS)
Shoemaker, Daniel P.; Seshadri, Ram; Hector, Andrew L.; Llobet, Anna; Proffen, Thomas; Fennie, Craig J.
2010-04-01
The oxide pyrochlore Bi2Ti2O6O' is known to be associated with large displacements of Bi and O' atoms from their ideal crystallographic positions. Neutron total scattering, analyzed in both reciprocal and real space, is employed here to understand the nature of these displacements. Rietveld analysis and maximum entropy methods are used to produce an average picture of the structural nonideality. Local structure is modeled via large-box reverse Monte Carlo simulations constrained simultaneously by the Bragg profile and real-space pair distribution function. Direct visualization and statistical analyses of these models show the precise nature of the static Bi and O' displacements. Correlations between neighboring Bi displacements are analyzed using coordinates from the large-box simulations. The framework of continuous symmetry measures has been applied to distributions of O'Bi4 tetrahedra to examine deviations from ideality. Bi displacements from ideal positions appear correlated over local length scales. The results are consistent with the idea that these nonmagnetic lone-pair containing pyrochlore compounds can be regarded as highly structurally frustrated systems.
NASA Technical Reports Server (NTRS)
Salama, Farid; Tan, Xiaofeng; Cami, Jan; Biennier, Ludovic; Remy, Jerome
2006-01-01
Polycyclic Aromatic Hydrocarbons (PAHs) are an important and ubiquitous component of carbon-bearing materials in space. A long-standing and major challenge for laboratory astrophysics has been to measure the spectra of large carbon molecules in laboratory environments that mimic (in a realistic way) the physical conditions that are associated with the interstellar emission and absorption regions [1]. This objective has been identified as one of the critical Laboratory Astrophysics objectives to optimize the data return from space missions [2]. An extensive laboratory program has been developed to assess the properties of PAHs in such environments and to describe how they influence the radiation and energy balance in space. We present and discuss the gas-phase electronic absorption spectra of neutral and ionized PAHs measured in the UV-Visible-NIR range in astrophysically relevant environments and discuss the implications for astrophysics [1]. The harsh physical conditions of the interstellar medium (characterized by low temperature, an absence of collisions, and strong VUV radiation fields) have been simulated in the laboratory by associating a pulsed cavity ringdown spectrometer (CRDS) with a supersonic slit jet seeded with PAHs and an ionizing, Penning-type, electronic discharge. We have measured for the first time the spectra of a series of neutral [3,4] and ionized [5,6] interstellar PAH analogs in the laboratory. An effort has also been made to quantify the mechanisms of ion and carbon nanoparticle production in the free jet expansion and to model our simulation of the diffuse interstellar medium in the laboratory [7]. These experiments provide unique information on the spectra of free, large carbon-containing molecules and ions in the gas phase. We are now, for the first time, in a position to directly compare laboratory spectral data on free, cold PAH ions and nano-sized carbon particles with astronomical observations in the UV-NIR range (interstellar UV extinction, DIBs in the NUV-NIR range). This new phase offers tremendous opportunities for the data analysis of current and upcoming space missions geared toward the detection of large aromatic systems, i.e., the "new frontier space missions" (Spitzer, HST, COS, JWST, SOFIA, ...).
Quasi-equilibria in reduced Liouville spaces.
Halse, Meghan E; Dumez, Jean-Nicolas; Emsley, Lyndon
2012-06-14
The quasi-equilibrium behaviour of isolated nuclear spin systems in full and reduced Liouville spaces is discussed. We focus in particular on the reduced Liouville spaces used in the low-order correlations in Liouville space (LCL) simulation method, a restricted-spin-space approach to efficiently modelling the dynamics of large networks of strongly coupled spins. General numerical methods for the calculation of quasi-equilibrium expectation values of observables in Liouville space are presented. In particular, we treat the cases of a time-independent Hamiltonian, a time-periodic Hamiltonian (with and without stroboscopic sampling) and powder averaging. These quasi-equilibrium calculation methods are applied to the example case of spin diffusion in solid-state nuclear magnetic resonance. We show that there are marked differences between the quasi-equilibrium behaviour of spin systems in the full and reduced spaces. These differences are particularly interesting in the time-periodic-Hamiltonian case, where simulations carried out in the reduced space demonstrate ergodic behaviour even for small spin systems (as few as five homonuclei). The implications of this ergodic property for the success of the LCL method in modelling the dynamics of spin diffusion in magic-angle spinning experiments of powders are discussed.
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.
2018-01-01
The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
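To make the finite-volume idea concrete, here is a minimal one-dimensional update in Python: an illustrative stand-in for the Fortran 2003 classes described above, not their API. The Rusanov face flux and the Burgers test problem are our choices.

```python
import numpy as np

def finite_volume_step(u, flux, dx, dt):
    """One explicit finite-volume update for du/dt + dF(u)/dx = 0, using a
    simple Rusanov (local Lax-Friedrichs) numerical flux through cell faces."""
    ul, ur = u[:-1], u[1:]                          # states beside each face
    a = np.maximum(np.abs(ul), np.abs(ur))          # local wave-speed bound
    f_face = 0.5 * (flux(ul) + flux(ur)) - 0.5 * a * (ur - ul)
    un = u.copy()
    un[1:-1] -= dt / dx * (f_face[1:] - f_face[:-1])  # divergence of face fluxes
    return un                                        # boundary cells held fixed

# Burgers' equation, F(u) = u^2/2, on a uniform mesh (CFL ~ 0.4 here)
x = np.linspace(0.0, 1.0, 201)
u = np.exp(-100.0 * (x - 0.5)**2)
for _ in range(200):
    u = finite_volume_step(u, lambda q: 0.5 * q**2, dx=x[1] - x[0], dt=0.002)
```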
Summarizing Simulation Results using Causally-relevant States
Parikh, Nidhi; Marathe, Madhav; Swarup, Samarth
2016-01-01
As increasingly large-scale multiagent simulations are being implemented, new methods are becoming necessary to make sense of the results of these simulations. Even concisely summarizing the results of a given simulation run is a challenge. Here we pose this as the problem of simulation summarization: how to extract the causally relevant descriptions of the trajectories of the agents in the simulation. We present a simple algorithm to compress agent trajectories through state space by identifying the state transitions which are relevant to determining the distribution of outcomes at the end of the simulation. We present a toy example to illustrate the working of the algorithm, and then apply it to a complex simulation of a major disaster in an urban area. PMID:28042620
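A toy rendering of the idea in Python: rank state transitions by how strongly conditioning on them shifts the outcome distribution, and keep the top few as the compressed description. This is our simplification; the total-variation scoring rule and all names here are assumptions, not the paper's algorithm.

```python
from collections import Counter, defaultdict

def summarize(trajectories, outcomes, keep=5):
    """Return the `keep` state transitions whose presence in a run most
    shifts the distribution of final outcomes relative to the overall one."""
    overall, n = Counter(outcomes), len(outcomes)
    by_transition = defaultdict(Counter)
    for traj, out in zip(trajectories, outcomes):
        for a, b in set(zip(traj, traj[1:])):      # transitions in this run
            by_transition[(a, b)][out] += 1
    def score(t):
        cond = by_transition[t]
        m = sum(cond.values())
        keys = set(cond) | set(overall)
        # total-variation distance between conditional and overall outcomes
        return 0.5 * sum(abs(cond[k] / m - overall[k] / n) for k in keys)
    return sorted(by_transition, key=score, reverse=True)[:keep]

# trajectories are lists of discrete states; outcomes are final labels
top = summarize([['home', 'work', 'home'], ['home', 'shelter']],
                ['unharmed', 'injured'])
```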
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
Hydrothermal fluid flow and deformation in large calderas: Inferences from numerical simulations
Hurwitz, S.; Christiansen, L.B.; Hsieh, P.A.
2007-01-01
Inflation and deflation of large calderas is traditionally interpreted as being induced by volume change of a discrete source embedded in an elastic or viscoelastic half-space, though it has also been suggested that hydrothermal fluids may play a role. To test the latter hypothesis, we carry out numerical simulations of hydrothermal fluid flow and poroelastic deformation in calderas by coupling two numerical codes: (1) TOUGH2 [Pruess et al., 1999], which simulates flow in porous or fractured media, and (2) BIOT2 [Hsieh, 1996], which simulates fluid flow and deformation in a linearly elastic porous medium. In the simulations, high-temperature water (350 °C) is injected at variable rates into a cylinder (radius 50 km, height 3-5 km). A sensitivity analysis indicates that small differences in the values of permeability and its anisotropy, the depth and rate of hydrothermal injection, and the values of the shear modulus may lead to significant variations in the magnitude, rate, and geometry of ground surface displacement, or uplift. Some of the simulated uplift rates are similar to observed uplift rates in large calderas, suggesting that the injection of aqueous fluids into the shallow crust may explain some of the deformation observed in calderas.
GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.
E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N
2018-03-01
GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based, simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.
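The kinematic sum that such a direct simulation evaluates per scattering vector is I(q) = |Σ_j f_j exp(i q·r_j)|². A NumPy sketch of that sum follows, with a scalar scattering factor and a toy cluster; this is an illustration of the kinematic approach, not the GPU-decomposed GAPD code.

```python
import numpy as np

def kinematic_intensity(positions, q_vectors, f_atom=1.0):
    """Direct kinematic diffraction: I(q) = |sum_j f_j exp(i q . r_j)|^2.
    positions: (N, 3) atomic coordinates; q_vectors: (Nq, 3) scattering vectors."""
    phases = np.exp(1j * q_vectors @ positions.T)    # (Nq, N) phase factors
    amplitude = f_atom * phases.sum(axis=1)
    return np.abs(amplitude)**2

# small cubic cluster and a line of scattering vectors along x
r = np.array([[i, j, k] for i in range(4) for j in range(4) for k in range(4)],
             dtype=float)
q = np.stack([np.linspace(0.1, 4.0, 256), np.zeros(256), np.zeros(256)], axis=1)
I = kinematic_intensity(r, 2.0 * np.pi * q)
```

Polychromatic beams are then a weighted sum of such monochromatic patterns over the source spectrum, which is what makes the per-atom, per-pixel decomposition attractive on GPUs.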
The CH2O column as a possible constraint on methane oxidation
NASA Astrophysics Data System (ADS)
Valin, L. C.; Fiore, A. M.; Lin, M.
2013-12-01
We explore the potential for space-based measurements of the CH2O column to quantify variations of methane oxidation in the remote atmosphere due to changes in climate (e.g., T, H2O, stratospheric O3) and atmospheric composition (e.g., NOx, O3, CO, CH4). We investigate the variability of methane oxidation and the formaldehyde column using available global simulations (MOZART-2 chemistry-transport model, GFDL AM3 climate-chemistry model). Over a large region (135°-175° W; 0°-16° S), the rate of methane oxidation simulated in the models varies intraseasonally (~10%), seasonally (~20%) and interannually (~5%), and is well correlated with the simulated variability of the CH2O column (R² = 0.75; ~1×10^15 molecules cm^-2). The precision of a single space-based measurement is approximately 1×10^16 molecules cm^-2, an order of magnitude larger than the simulated variability of the CH2O column. However, in a large region such as the tropical Pacific, UV/Vis spectrometers are capable of making thousands of measurements daily, enough sampling to theoretically increase the precision by √N, such that variations on the order of 1×10^15 molecules cm^-2 should be observable on intraseasonal and interannual timescales.
Modified optical fiber daylighting system with sunlight transportation in free space.
Vu, Ngoc-Hai; Pham, Thanh-Tuan; Shin, Seoyong
2016-12-26
We present the design, optical simulation, and experiment of a modified optical fiber daylighting system (M-OFDS) for indoor lighting. The M-OFDS is comprised of three sub-systems: concentration, collimation, and distribution. The concentration part is formed by coupling a Fresnel lens with a large-core plastic optical fiber. The sunlight collected by the concentration sub-system is propagated in a plastic optical fiber and then collimated by the collimator, which is a combination of a parabolic mirror and a convex lens. The collimated beam of sunlight travels in free space and is guided to the interior by directing flat mirrors, where it is diffused uniformly by a distributor. All parameters of the system are calculated theoretically. Based on the designed system, our simulation results demonstrated a maximum optical efficiency of 71%. The simulation results also showed that sunlight could be delivered to the illumination destination at distance of 30 m. A prototype of the M-OFDS was fabricated, and preliminary experiments were performed outdoors. The simulation results and experimental results confirmed that the M-OFDS was designed effectively. A large-scale system constructed by several M-OFDSs is also proposed. The results showed that the presented optical fiber daylighting system is a strong candidate for an inexpensive and highly efficient application of solar energy in buildings.
Improving parallel I/O autotuning with performance modeling
Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...
2014-01-01
Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique to tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving 54X I/O performance speedup.
Efficient Maintenance and Update of Nonbonded Lists in Macromolecular Simulations.
Chowdhury, Rezaul; Beglov, Dmitri; Moghadasi, Mohammad; Paschalidis, Ioannis Ch; Vakili, Pirooz; Vajda, Sandor; Bajaj, Chandrajit; Kozakov, Dima
2014-10-14
Molecular mechanics and dynamics simulations use distance based cutoff approximations for faster computation of pairwise van der Waals and electrostatic energy terms. These approximations traditionally use a precalculated and periodically updated list of interacting atom pairs, known as the "nonbonded neighborhood lists" or nblists, in order to reduce the overhead of finding atom pairs that are within distance cutoff. The size of nblists grows linearly with the number of atoms in the system and superlinearly with the distance cutoff, and as a result, they require significant amount of memory for large molecular systems. The high space usage leads to poor cache performance, which slows computation for large distance cutoffs. Also, the high cost of updates means that one cannot afford to keep the data structure always synchronized with the configuration of the molecules when efficiency is at stake. We propose a dynamic octree data structure for implicit maintenance of nblists using space linear in the number of atoms but independent of the distance cutoff. The list can be updated very efficiently as the coordinates of atoms change during the simulation. Unlike explicit nblists, a single octree works for all distance cutoffs. In addition, octree is a cache-friendly data structure, and hence, it is less prone to cache miss slowdowns on modern memory hierarchies than nblists. Octrees use almost 2 orders of magnitude less memory, which is crucial for simulation of large systems, and while they are comparable in performance to nblists when the distance cutoff is small, they outperform nblists for larger systems and large cutoffs. Our tests show that octree implementation is approximately 1.5 times faster in practical use case scenarios as compared to nblists.
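A compact Python sketch of the underlying idea, a point octree whose cutoff query prunes whole octants, so one structure serves any cutoff. This is our minimal illustration, not the authors' data structure or update scheme, and the capacity and depth limits are arbitrary.

```python
import numpy as np

class Octree:
    """Minimal point octree supporting within-cutoff neighbor queries."""
    def __init__(self, center, half, depth=0, max_pts=8, max_depth=10):
        self.center, self.half = np.asarray(center, float), float(half)
        self.depth, self.max_pts, self.max_depth = depth, max_pts, max_depth
        self.points, self.children = [], None    # points: (index, xyz) pairs

    def insert(self, idx, p):
        if self.children is None:
            self.points.append((idx, np.asarray(p, float)))
            if len(self.points) > self.max_pts and self.depth < self.max_depth:
                self._split()
        else:
            self._child(p).insert(idx, p)

    def _split(self):
        self.children = [Octree(self.center + 0.5 * self.half * np.array([dx, dy, dz]),
                                0.5 * self.half, self.depth + 1,
                                self.max_pts, self.max_depth)
                         for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
        for idx, p in self.points:
            self._child(p).insert(idx, p)
        self.points = []

    def _child(self, p):
        i = (p[0] > self.center[0]) * 4 + (p[1] > self.center[1]) * 2 \
            + (p[2] > self.center[2])
        return self.children[i]

    def neighbors(self, p, cutoff):
        """Indices of all stored atoms within `cutoff` of p; prunes octants."""
        p = np.asarray(p, float)
        if np.any(np.abs(p - self.center) > self.half + cutoff):
            return []                             # octant cannot contribute
        if self.children is None:
            return [i for i, q in self.points
                    if np.dot(p - q, p - q) <= cutoff**2]
        return [i for c in self.children for i in c.neighbors(p, cutoff)]

tree = Octree(center=[0.5, 0.5, 0.5], half=0.5)
pts = np.random.default_rng(0).random((200, 3))
for i, p in enumerate(pts):
    tree.insert(i, p)
close = tree.neighbors(pts[0], cutoff=0.1)        # works for any cutoff value
```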
Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles
Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.
2014-01-01
We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845
Relativistic initial conditions for N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidler, Christian; Tram, Thomas; Crittenden, Robert
2017-06-01
Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple "N-body gauge" for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable "forwards approach" for such cases.
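The conventional back-scaling recipe the paper re-examines amounts to P(k, z_init) = [D(z_init)/D(0)]² P(k, 0) with a scale-independent growth factor D. A sketch under that assumption follows; the flat-ΛCDM growth integral and parameter values are our choices, not the paper's pipeline.

```python
import numpy as np

def back_scale(pk_z0, z_init, omega_m=0.31, omega_l=0.69):
    """Rescale today's linear power spectrum to the starting redshift using
    the standard growth-factor integral for flat LCDM, D(a) ~ E(a) * int da' / (a' E(a'))^3."""
    def E(a):                                    # dimensionless Hubble rate
        return np.sqrt(omega_m / a**3 + omega_l)
    def growth(a):                               # unnormalized growth factor
        aa = np.linspace(1e-3, a, 2000)
        f = 1.0 / (aa * E(aa))**3
        return E(a) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(aa))
    a_init = 1.0 / (1.0 + z_init)
    ratio = growth(a_init) / growth(1.0)
    return pk_z0 * ratio**2                      # P(k, z_init)

pk_init = back_scale(pk_z0=np.ones(100), z_init=49.0)   # placeholder spectrum
```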
Modelling and simulation of Space Station Freedom berthing dynamics and control
NASA Technical Reports Server (NTRS)
Cooper, Paul A.; Garrison, James L., Jr.; Montgomery, Raymond C.; Wu, Shih-Chin; Stockwell, Alan E.; Demeo, Martha E.
1994-01-01
A large-angle, flexible, multibody, dynamic modeling capability has been developed to help validate numerical simulations of the dynamic motion and control forces which occur during berthing of Space Station Freedom to the Shuttle Orbiter in the early assembly flights. This paper outlines the dynamics and control of the station, the attached Shuttle Remote Manipulator System, and the orbiter. The simulation tool developed for the analysis is described and the results of two simulations are presented. The first is a simulated maneuver from a gravity-gradient attitude to a torque equilibrium attitude using the station reaction control jets. The second simulation is the berthing of the station to the orbiter with the station control moment gyros actively maintaining an estimated torque equilibrium attitude. The influence of the elastic dynamic behavior of the station and of the Remote Manipulator System on the attitude control of the station/orbiter system during each maneuver was investigated. The flexibility of the station and the arm were found to have only a minor influence on the attitude control of the system during the maneuvers.
Analysis of Waves in Space Plasma (WISP) near field simulation and experiment
NASA Technical Reports Server (NTRS)
Richie, James E.
1992-01-01
The WISP payload, scheduled for a 1995 space transportation system (shuttle) flight, will include a large power transmitter on board operating over a wide range of frequencies. The levels of electromagnetic interference/electromagnetic compatibility (EMI/EMC) must be addressed to insure the safety of the shuttle crew. This report is concerned with the simulation and experimental verification of EMI/EMC for the WISP payload in the shuttle cargo bay. The simulations have been carried out using the method of moments for both thin wires and patches to simulate closed solids. Data obtained from simulation are compared with experimental results. An investigation of the accuracy of the modeling approach is also included. The report begins with a description of the WISP experiment. A description of the model used to simulate the cargo bay follows. The results of the simulation are compared to experimental data on the input impedance of the WISP antenna with the cargo bay present. A discussion of the methods used to verify the accuracy of the model is given to illustrate appropriate methods for obtaining this information. Finally, suggestions for future work are provided.
A transformed path integral approach for solution of the Fokker-Planck equation
NASA Astrophysics Data System (ADS)
Subramaniam, Gnana M.; Vedula, Prakash
2017-10-01
A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using the Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
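For context, the fixed-grid baseline that the TPI method improves on evolves the PDF with a Gaussian short-time propagator; the TPI method applies the same propagator in a moving, statistics-adapted coordinate. The sketch below shows only the fixed-grid baseline for a 1D drift-diffusion SDE (an illustration of the PI family, not the authors' transformed scheme).

```python
import numpy as np

def pi_step(x, p, drift, D, dt):
    """One short-time path-integral update of the PDF for
    dX = f(X) dt + sqrt(2 D) dW on a fixed grid x."""
    dx = x[1] - x[0]
    mean = x + drift(x) * dt                      # propagator mean from each x'
    var = 2.0 * D * dt
    K = np.exp(-(x[:, None] - mean[None, :])**2 / (2.0 * var))
    K /= np.sqrt(2.0 * np.pi * var)               # K[i, j] ~ p(x_i | x_j)
    p_new = K @ p * dx                            # quadrature over start points
    return p_new / (p_new.sum() * dx)             # renormalize

# Ornstein-Uhlenbeck test: PDF should relax toward N(0, D/k) with k = 1
x = np.linspace(-5.0, 5.0, 400)
dx = x[1] - x[0]
p = np.exp(-(x - 2.0)**2)
p /= p.sum() * dx
for _ in range(500):
    p = pi_step(x, p, drift=lambda y: -1.0 * y, D=0.5, dt=0.01)
```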
Hazard assessment of long-period ground motions for the Nankai Trough earthquakes
NASA Astrophysics Data System (ADS)
Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.
2013-12-01
We evaluate the seismic hazard for long-period ground motions associated with the Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damage due to strong ground motions and tsunami; the most recent events were in 1944 and 1946. Such large interplate earthquakes can also damage high-rise and large-scale structures through long-period ground motions (e.g., the 1985 Michoacan earthquake in Mexico and the 2003 Tokachi-oki earthquake in Japan). Long-period ground motions are amplified particularly on basins. Because major cities along the Nankai Trough have developed on alluvial plains, it is therefore important to evaluate long-period ground motions as well as strong motions and tsunami for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and a 3-D underground structure model. A 'characterized source model' is a source model including the source parameters necessary for reproducing the strong ground motions. The parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) covering various cases of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is determined from 'the long-term evaluation of earthquakes in the Nankai Trough' published by the ERC. The asperity configuration and hypocenter location control the rupture directivity effects. These parameters are important because our preliminary simulations are strongly affected by rupture directivity. We apply the GMS (Ground Motion Simulator) system, which simulates seismic wave propagation with a 3-D FDM scheme on discontinuous grids (Aoi and Fujiwara, 1999). The grid spacing for the shallow region is 200 m horizontally and 100 m vertically; the grid spacing for the deep region is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Our simulation is valid for periods longer than two seconds, given the lowest S-wave velocity and the grid spacing. However, because the characterized source models may not sufficiently represent short-period components, the reliable period range of the simulation should be interpreted with caution; we therefore consider periods longer than five seconds for further analysis. We evaluate the long-period ground motions using velocity response spectra for the period range between five and 20 seconds. The preliminary simulations show a large variation of response spectra at a given site, implying that the ground motion is very sensitive to the scenario; this variation must be studied to understand the seismic hazard. Our further study will obtain hazard curves for the Nankai Trough earthquakes (M8~9) by applying probabilistic seismic hazard analysis to the simulation results.
Holter, Karl Erik; Kehlet, Benjamin; Devor, Anna; Sejnowski, Terrence J; Dale, Anders M; Omholt, Stig W; Ottersen, Ole Petter; Nagelhus, Erlend Arnulf; Mardal, Kent-André; Pettersen, Klas H
2017-09-12
The brain lacks lymph vessels and must rely on other mechanisms for clearance of waste products, including amyloid-β, which may form pathological aggregates if not effectively cleared. It has been proposed that flow of interstitial fluid through the brain's interstitial space provides a mechanism for waste clearance. Here we compute the permeability and simulate pressure-mediated bulk flow through 3D electron microscope (EM) reconstructions of interstitial space. The space was divided into sheets (i.e., space between two parallel membranes) and tunnels (where three or more membranes meet). Simulation results indicate that even for larger extracellular volume fractions than what is reported for sleep and for geometries with a high tunnel volume fraction, the permeability was too low to allow for any substantial bulk flow at physiological hydrostatic pressure gradients. For two different geometries with the same extracellular volume fraction, the geometry with the most tunnel volume had [Formula: see text] higher permeability, but the bulk flow was still insignificant. These simulation results suggest that even large-molecule solutes would be more easily cleared from the brain interstitium by diffusion than by bulk flow. Thus, diffusion within the interstitial space combined with advection along vessels is likely to substitute for the lymphatic drainage system in other organs.
Simulating Space Capsule Water Landing with Explicit Finite Element Method
NASA Technical Reports Server (NTRS)
Wang, John T.; Lyle, Karen H.
2007-01-01
A study of using an explicit nonlinear dynamic finite element code for simulating the water landing of a space capsule was performed. The finite element model contains Lagrangian shell elements for the space capsule and Eulerian solid elements for the water and air. An Arbitrary Lagrangian Eulerian (ALE) solver and a penalty coupling method were used for predicting the fluid and structure interaction forces. The space capsule was first assumed to be rigid, so the numerical results could be correlated with closed form solutions. The water and air meshes were continuously refined until the solution was converged. The converged maximum deceleration predicted is bounded by the classical von Karman and Wagner solutions and is considered to be an adequate solution. The refined water and air meshes were then used in the models for simulating the water landing of a capsule model that has a flexible bottom. For small pitch angle cases, the maximum deceleration from the flexible capsule model was found to be significantly greater than the maximum deceleration obtained from the corresponding rigid model. For large pitch angle cases, the difference between the maximum deceleration of the flexible model and that of its corresponding rigid model is smaller. Test data of Apollo space capsules with a flexible heat shield qualitatively support the findings presented in this paper.
NASA Astrophysics Data System (ADS)
Johannes, Bernd; Salnitski, Vyacheslav; Soll, Henning; Rauch, Melina; Hoermann, Hans-Juergen
For the evaluation of an operator's skill reliability, indicators of work quality as well as of psychophysiological states during the work have to be considered. The methodology and measurement equipment presented here were developed and tested in numerous terrestrial and space experiments using a simulation of a spacecraft docking on a space station. In this study, however, the method was applied to a comparable terrestrial task, the flight simulator test (FST) used in the DLR selection procedure for ab initio pilot applicants for passenger airlines. This provided a large amount of data for statistical verification of the space methodology. For the evaluation of the strain level of applicants during the FST, psychophysiological measurements were used to construct a "psychophysiological arousal vector" (PAV), which is sensitive to various individual reaction patterns of the autonomic nervous system to mental load. Its changes and increases are interpreted as "strain". In the first evaluation study, 614 subjects were analyzed. The subjects first underwent a calibration procedure for the assessment of their autonomic outlet type (AOT), and on the following day they performed the FST, which included three tasks and was evaluated by instructors applying well-established and standardized rating scales. This new method may enable a wide range of future applications in aviation and space psychology.
An Integrated Analysis of the Physiological Effects of Space Flight: Executive Summary
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1985-01-01
A large array of models was applied in a unified manner to solve problems in space flight physiology. Mathematical simulation was used as an alternative way of looking at physiological systems and maximizing the yield from previous space flight experiments. A medical data analysis system was created which consists of an automated data base, a computerized biostatistical and data analysis system, and a set of simulation models of physiological systems. Five basic models were employed: (1) a pulsatile cardiovascular model; (2) a respiratory model; (3) a thermoregulatory model; (4) a circulatory, fluid, and electrolyte balance model; and (5) an erythropoiesis regulatory model. Algorithms were provided to perform routine statistical tests, multivariate analysis, nonlinear regression analysis, and autocorrelation analysis. Special purpose programs were prepared for rank correlation, factor analysis, and the integration of the metabolic balance data.
Policy model for space economy infrastructure
NASA Astrophysics Data System (ADS)
Komerath, Narayanan; Nally, James; Zilin Tang, Elizabeth
2007-12-01
Extraterrestrial infrastructure is key to the development of a space economy. Means for accelerating transition from today's isolated projects to a broad-based economy are considered. A large system integration approach is proposed. The beginnings of an economic simulation model are presented, along with examples of how interactions and coordination bring down costs. A global organization focused on space infrastructure and economic expansion is proposed to plan, coordinate, fund and implement infrastructure construction. This entity also opens a way to raise low-cost capital and solve the legal and public policy issues of access to extraterrestrial resources.
Optimal control of large space structures via generalized inverse matrix
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Fang, Xiaowen
1987-01-01
Independent Modal Space Control (IMSC) is a control scheme that decouples the space structure into n independent second-order subsystems according to n controlled modes and controls each mode independently. It is well known that IMSC eliminates the control and observation spillover caused when the conventional coupled modal control scheme is employed. The independent control of each mode requires that the number of actuators be equal to the number of modeled modes, which is very high for a faithful model of a large space structure. A control scheme is proposed that allows one to use a reduced number of actuators to control all modeled modes suboptimally. In particular, the method of generalized inverse matrices is employed to implement the actuators such that the eigenvalues of the closed-loop system are as close as possible to those specified by the optimal IMSC. Computer simulation of the proposed control scheme on a simply supported beam is given.
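A minimal numerical sketch of the generalized-inverse step, with a randomly generated modal influence matrix standing in for the real structure (dimensions and names are illustrative assumptions):

    import numpy as np

    # 6 modeled modes but only 3 actuators: the Moore-Penrose pseudoinverse
    # gives the actuator commands whose modal forces are closest, in the
    # least-squares sense, to those the full optimal IMSC would request.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((6, 3))    # modal influence matrix (placeholder)
    f_imsc = rng.standard_normal(6)    # modal forces requested by optimal IMSC

    u = np.linalg.pinv(B) @ f_imsc     # suboptimal commands for the reduced set
    f_achieved = B @ u
    print("modal force residual:", np.linalg.norm(f_achieved - f_imsc))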
A method for modeling contact dynamics for automated capture mechanisms
NASA Technical Reports Server (NTRS)
Williams, Philip J.
1991-01-01
Logicon Control Dynamics develops contact dynamics models for space-based docking and berthing vehicles. The models compute contact forces for the physical contact between mating capture mechanism surfaces. Realistic simulation requires the proportionality constants used in calculating contact forces to approximate the surface stiffness of the contacting bodies. For rigid metallic bodies these constants become quite large, so small penetrations of surface boundaries can produce large contact forces.
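A minimal penalty-contact sketch (stiffness and damping values are arbitrary assumptions, not Logicon's constants) showing why a large proportionality constant turns small penetrations into large forces:

    # Penalty contact: normal force proportional to penetration depth.
    def contact_force(penetration, penetration_rate, k=1e7, c=1e3):
        """Normal contact force in N; k in N/m approximates surface stiffness."""
        if penetration <= 0.0:
            return 0.0                 # surfaces are separated, no force
        return k * penetration + c * penetration_rate

    # A 10-micrometer penetration of a stiff metallic surface already
    # produces a 100 N contact force with the assumed k = 1e7 N/m.
    print(contact_force(1e-5, 0.0))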
A nonlocal electron conduction model for multidimensional radiation hydrodynamics codes
NASA Astrophysics Data System (ADS)
Schurtz, G. P.; Nicolaï, Ph. D.; Busquet, M.
2000-10-01
Numerical simulation of laser-driven Inertial Confinement Fusion (ICF) experiments requires the use of large multidimensional hydro codes. Though these codes include detailed physics for numerous phenomena, they deal poorly with electron conduction, which is the leading energy transport mechanism in these systems. Electron heat flow has been known, since the work of Luciani, Mora, and Virmont (LMV) [Phys. Rev. Lett. 51, 1664 (1983)], to be a nonlocal process, which the local Spitzer-Harm theory, even flux limited, is unable to account for. The present work aims at extending the original formula of LMV to two or three dimensions of space. This multidimensional extension leads to an equivalent transport equation suitable for easy implementation in a two-dimensional radiation-hydrodynamic code. Simulations are presented and compared to Fokker-Planck simulations in one and two dimensions of space.
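A one-dimensional toy of such a nonlocal flux, assuming an exponential kernel and an arbitrary delocalization length (the actual LMV kernel and its multidimensional extension differ), illustrates how the local Spitzer-Harm flux gets smeared:

    import numpy as np

    x = np.linspace(0.0, 1.0, 200)
    dx = x[1] - x[0]
    q_sh = np.exp(-((x - 0.5) / 0.05) ** 2)   # sharply peaked local flux (toy)
    lam = 0.02                                # delocalization length (assumed)

    # q(x) = integral of w(|x - x'|) q_SH(x') dx' with a normalized kernel
    w = np.exp(-np.abs(x[:, None] - x[None, :]) / lam) / (2.0 * lam)
    q_nonlocal = (w @ q_sh) * dx
    print(q_sh.max(), q_nonlocal.max())   # nonlocal peak is lower and broader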
Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow
NASA Technical Reports Server (NTRS)
Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.
1977-01-01
An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low turbulence Reynolds numbers, in our case R_λ = 36.6. To complete the calculation using a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first time derivative at each time step. Fourth-order accurate space differencing is used.
Intelligent Reconfigurable System with Self-Damage Assessment and Stress Control Capabilities
NASA Astrophysics Data System (ADS)
Trivailo, P.; Plotnikova, L.; Kao, T. W.
2002-01-01
Modern space structures are constructed using a modular approach that facilitates their transportation and assembly in space. Modular architecture of space structures also enables reconfiguration of large structures such that they can adapt to possible changes in environment, and also allows use of the limited structural resources available in space for completion of a much larger variety of tasks. An increase in size and complexity demands development of materials with a "smart" or active structural modulus and also of effective control algorithms to control the motion of large flexible structures. This challenging task has generated a lot of interest amongst scientists and engineers during the last two decades; however, research into the development of control schemes which can adapt to structural configuration changes has received less attention. This is possibly due to the increased complexity caused by alterations in geometry, which inevitably lead to changes in the dynamic properties of the system. This paper presents results of the application of a decentralized control approach for active control of large flexible structures undergoing significant reconfigurations. The Control Component Synthesis methodology was used to build controlled components and to assemble them into a controlled flexible structure that meets required performance specifications. To illustrate the efficiency of the method, numerical simulations were conducted for 2D and 3D modular truss structures and a multi-link beam system. In each case the performance of the decentralized control system has been evaluated using pole location maps, step and impulse response simulations, and frequency response analysis. The performance of the decentralized control system has been measured against the optimal centralized control system for various excitation scenarios. A special case where one of the local component controllers fails was also examined. For better interpretation of the efficiency of the designed controllers, results of the simulations are illustrated using a Virtual Reality computer environment, offering advanced visual effects.
NASA Astrophysics Data System (ADS)
Fermo, Raymond Luis Lachica
2011-12-01
Magnetic reconnection is a process responsible for the conversion of magnetic energy into plasma flows in laboratory, space, and astrophysical plasmas. A product of reconnection, magnetic islands have been observed in long current layers for various space plasmas, including the magnetopause, the magnetotail, and the solar corona. In this thesis, a statistical model is developed for the dynamics of magnetic islands in very large current layers, for which conventional plasma simulations prove inadequate. An island distribution function f characterizes islands by the flux they contain, ψ, and the area they enclose, A. An integro-differential evolution equation for f describes their creation at small scales, growth due to quasi-steady reconnection, convection along the current sheet, and their coalescence with one another. The steady-state solution of the evolution equation predicts a distribution of islands in which the signature of island merging is an asymmetry in ψ-A phase space. A Hall MHD (magnetohydrodynamic) simulation of a very long current sheet with large numbers of magnetic islands is used to explore their dynamics, specifically their growth via two distinct mechanisms: quasi-steady reconnection and merging. The results of the simulation enable validation of the statistical model and benchmarking of its parameters. A PIC (particle-in-cell) simulation investigates how secondary islands form in guide field reconnection, revealing that they are born at electron skin depth scales not as islands from the tearing instability but as vortices from a flow instability. A database of 1,098 flux transfer events (FTEs) observed by Cluster between 2001 and 2003 compares favorably with the model's predictions, and also suggests island merging plays a significant role in the magnetopause. Consequently, the magnetopause is likely populated by many FTEs too small to be recognized by spacecraft instrumentation. The results of this research suggest that a complete theory of reconnection in large current sheets should account for the disparate separation of scales, from the kinetic scales at which islands are produced to the macroscale objects observed in the systems in question.
Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI
NASA Astrophysics Data System (ADS)
Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan
2016-10-01
Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
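The generic fourth-order conservative flux-difference stencil (a textbook form, not necessarily LOKI's exact discretization) can be sketched in a few lines:

    import numpy as np

    def flux_divergence(F, dx):
        """dF/dx on a periodic grid, 4th-order accurate, in conservation form."""
        Fm1, Fp1, Fp2 = np.roll(F, 1), np.roll(F, -1), np.roll(F, -2)
        F_half = (7.0 * (F + Fp1) - (Fm1 + Fp2)) / 12.0  # interface flux F_{i+1/2}
        return (F_half - np.roll(F_half, 1)) / dx        # (F_{i+1/2} - F_{i-1/2})/dx

    x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    err = flux_divergence(np.sin(x), x[1] - x[0]) - np.cos(x)
    print(np.abs(err).max())   # small, consistent with 4th-order accuracy

Because the derivative is written as a difference of interface fluxes, the discrete integral of the advected quantity is conserved to round-off, which matters for long Vlasov runs.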
Practical Unitary Simulator for Non-Markovian Complex Processes
NASA Astrophysics Data System (ADS)
Binder, Felix C.; Thompson, Jayne; Gu, Mile
2018-06-01
Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction for simulators for a large class of stochastic processes hence directly opening the possibility for experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shimojo, Fuyuki; Hattori, Shinnosuke
We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786,432 cores for a 50.3 × 10^6-atom SiC system. As a test of production runs, an LDC-DFT-based QMD simulation involving 16,661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques are employed for efficiently calculating the long-range exact exchange correction and excited-state forces. The NAQMD trajectories are analyzed to extract the rates of various excitonic processes, which are then used in KMC simulation to study the dynamics of the global exciton flow network. This has allowed the study of large-scale photoexcitation dynamics in a 6400-atom amorphous molecular solid, reaching experimental time scales.
Definition of ground test for verification of large space structure control
NASA Technical Reports Server (NTRS)
Glaese, John R.
1994-01-01
Under this contract, the Large Space Structure Ground Test Verification (LSSGTV) Facility at the George C. Marshall Space Flight Center (MSFC) was developed. Planning in coordination with NASA was finalized and implemented. The contract was modified and extended with several increments of funding to procure additional hardware and to continue support for the LSSGTV facility. Additional tasks were defined for the performance of studies in the dynamics, control and simulation of tethered satellites. When the LSSGTV facility development task was completed, support and enhancement activities were funded through a new competitive contract won by LCD. All work related to LSSGTV performed under NAS8-35835 has been completed and documented. No further discussion of these activities will appear in this report. This report summarizes the tether dynamics and control studies performed.
NASA/MSFC ground experiment for large space structure control verification
NASA Technical Reports Server (NTRS)
Waites, H. B.; Seltzer, S. M.; Tollison, D. K.
1984-01-01
Marshall Space Flight Center has developed a facility in which closed loop control of Large Space Structures (LSS) can be demonstrated and verified. The main objective of the facility is to verify LSS control system techniques so that on-orbit performance can be ensured. The facility consists of an LSS test article which is connected to a payload mounting system that provides control torque commands. It is attached to a base excitation system which will simulate disturbances most likely to occur for Orbiter and DOD payloads. A control computer will contain the calibration software, the reference system, the alignment procedures, the telemetry software, and the control algorithms. The total system will be suspended in such a fashion that the LSS test article has the characteristics common to all LSS.
Increased intracranial pressure in mini-pigs exposed to simulated solar particle event radiation
NASA Astrophysics Data System (ADS)
Sanzari, Jenine K.; Muehlmatt, Amy; Savage, Alexandria; Lin, Liyong; Kennedy, Ann R.
2014-02-01
Changes in intracranial pressure (ICP) during space flight have stimulated an area of research in space medicine. It is widely speculated that elevations in ICP contribute to structural and functional ocular changes, including deterioration in vision, which is also observed during space flight. The aim of this study was to investigate changes in opening pressure (OP) occurring as a result of ionizing radiation exposure (at doses and dose-rates relevant to solar particle event radiation). We used a large animal model, the Yucatan mini-pig, and were able to obtain measurements over a 90 day period. This is the first investigation to show long term recordings of ICP in a large animal model without an invasive craniotomy procedure. Further, this is the first investigation reporting increased ICP after radiation exposure.
HiPEP Ion Optics System Evaluation Using Gridlets
NASA Technical Reports Server (NTRS)
Williams, John D.; Farnell, Cody C.; Laufer, D. Mark; Martinez, Rafael A.
2004-01-01
Experimental measurements are presented for sub-scale ion optics systems comprised of 7 and 19 aperture pairs with geometrical features that are similar to the HiPEP ion optics system. Effects of hole diameter and grid-to-grid spacing are presented as functions of applied voltage and beamlet current. Recommendations are made for the beamlet current range where the ion optics system can be safely operated without experiencing direct impingement of high energy ions on the accelerator grid surface. Measurements are also presented of the accelerator grid voltage where beam plasma electrons backstream through the ion optics system. Results of numerical simulations obtained with the ffx code are compared to both the impingement limit and backstreaming measurements. An emphasis is placed on identifying differences between measurements and simulation predictions to highlight areas where more research is needed. Relatively large effects are observed in simulations when the discharge chamber plasma properties and ion optics geometry are varied. Parameters investigated using simulations include the applied voltages, grid spacing, hole-to-hole spacing, doubles-to-singles ratio, plasma potential, and electron temperature; and estimates are provided for the sensitivity of impingement limits on these parameters.
NASA Astrophysics Data System (ADS)
Avdeev, A. V.; Boreisho, A. S.; Ivakin, S. V.; Moiseev, A. A.; Savin, A. V.; Sokolov, E. I.; Smirnov, P. G.
2018-01-01
This article is devoted to the simulation of the formation of dust clouds in the absence of gravitation, which is necessary for understanding the processes occurring in dust clusters in outer space, in the upper planetary atmosphere, and on the surfaces of space objects, as well as for evaluating the possibilities of creating disperse structures with given properties. The chief aim of the simulation is to determine the general laws of the dynamics of the dust cloud at the initial stage of its formation. With the use of an original approach based on the particle-in-cell method, which permits investigating the mechanics of large ensembles of particles on contemporary computational platforms, we consider the mechanics of a dusty medium during its excitation in a closed container due to the vibration of the walls, and then during particle scattering when the container opens into outer space. The main mechanisms of dust cloud formation have been elucidated, and the possibilities of mathematical simulation for predicting the spatial and temporal characteristics of disperse structures have been shown.
Recovery of inter-row shading losses using differential power-processing submodule DC–DC converters
Doubleday, Kate; Choi, Beomseok; Maksimovic, Dragan; ...
2016-06-17
Large commercial photovoltaic (PV) systems can experience regular and predictable energy loss due to both inter-row shading and reduced diffuse irradiance in tightly spaced arrays. This article investigates the advantages of replacing bypass diodes with submodule-integrated DC-DC converters (subMICs) to mitigate these losses. Yearly simulations of commercial-scale PV systems were conducted considering a range of row-to-row pitches. In the limit case of array spacing (unity ground coverage), subMICs can confer a 7% increase in annual energy output and peak energy density (kWh/m²). Simulation results are based on efficiency assumptions experimentally confirmed by prototype submodule differential power-processing converters.
Design of a new high-performance pointing controller for the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Johnson, C. D.
1993-01-01
A new form of high-performance, disturbance-adaptive pointing controller for the Hubble Space Telescope (HST) is proposed. This new controller is all linear (constant gains) and can maintain accurate 'pointing' of the HST in the face of persistent randomly triggered uncertain, unmeasurable 'flapping' motions of the large attached solar array panels. Similar disturbances associated with antennas and other flexible appendages can also be accommodated. The effectiveness and practicality of the proposed new controller is demonstrated by a detailed design and simulation testing of one such controller for a planar-motion, fully nonlinear model of HST. The simulation results show a high degree of disturbance isolation and pointing stability.
Parameter Studies, time-dependent simulations and design with automated Cartesian methods
NASA Technical Reports Server (NTRS)
Aftosmis, Michael
2005-01-01
Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return-to-flight activities. The talk will provide an overview of recent technological developments focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability and robustness underlie all of these applications, and research in each of these topics will be presented.
NASA Astrophysics Data System (ADS)
Kawase, H.; Sasaki, H.; Murata, A.; Nosaka, M.; Ito, R.; Dairaku, K.; Sasai, T.; Yamazaki, T.; Sugimoto, S.; Watanabe, S.; Fujita, M.; Kawazoe, S.; Okada, Y.; Ishii, M.; Mizuta, R.; Takayabu, I.
2017-12-01
We performed large ensemble climate experiments to investigate future changes in extreme weather events using the Meteorological Research Institute Atmospheric General Circulation Model (MRI-AGCM) with about 60 km grid spacing and the Non-Hydrostatic Regional Climate Model with 20 km grid spacing (NHRCM20). The global climate simulations are prescribed by past and future sea surface temperature (SST). Two future climate simulations are conducted in which the global-mean surface air temperature rises by 2 K and 4 K from the pre-industrial period. Non-warming simulations are also conducted with MRI-AGCM and NHRCM20. We focus on future changes in snowfall in Japan. In winter, the Sea of Japan coast experiences heavy snowfall due to the East Asian winter monsoon. The cold and dry air from the continent obtains abundant moisture from the warm Sea of Japan, causing an enormous amount of snowfall, especially in mountainous areas. The NHRCM20 showed decreases in winter total snowfall in most parts of Japan. In contrast, extremely heavy daily snowfall could increase in mountainous areas of central and northern Japan when a strong cold air outbreak occurs and the convergence zone appears over the Sea of Japan. The warmer Sea of Japan in the future climate could supply more moisture than in the present climate, indicating that cumulus convection could be enhanced around the convergence zone over the Sea of Japan. However, a horizontal resolution of 20 km is not enough to resolve Japan's complex topography. Therefore, dynamical downscaling with 5 km grid spacing (NHRCM05) was also conducted using NHRCM20. The NHRCM05 does a better job simulating the regional boundary of snowfall and shows more detailed changes in future snowfall characteristics. The future changes in total and extremely heavy snowfall depend on the regions, elevations, and synoptic conditions around Japan.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man; Cheng, Anning
2007-01-01
The effects of subgrid-scale condensation and transport become more important as the grid spacings increase from those typically used in large-eddy simulation (LES) to those typically used in cloud-resolving models (CRMs). Incorporation of these effects can be achieved by a joint probability density function approach that utilizes higher-order moments of thermodynamic and dynamic variables. This study examines how well shallow cumulus and stratocumulus clouds are simulated by two versions of a CRM that is implemented with low-order and third-order turbulence closures (LOC and TOC) when a typical CRM horizontal resolution is used and what roles the subgrid-scale and resolved-scale processes play as the horizontal grid spacing of the CRM becomes finer. Cumulus clouds were mostly produced through subgrid-scale transport processes while stratocumulus clouds were produced through both subgrid-scale and resolved-scale processes in the TOC version of the CRM when a typical CRM grid spacing is used. The LOC version of the CRM relied upon resolved-scale circulations to produce both cumulus and stratocumulus clouds, due to small subgrid-scale transports. The mean profiles of thermodynamic variables, cloud fraction and liquid water content exhibit significant differences between the two versions of the CRM, with the TOC results agreeing better with the LES than the LOC results. The characteristics, temporal evolution and mean profiles of shallow cumulus and stratocumulus clouds are weakly dependent upon the horizontal grid spacing used in the TOC CRM. However, the ratio of the subgrid-scale to resolved-scale fluxes becomes smaller as the horizontal grid spacing decreases. The subcloud-layer fluxes are mostly due to the resolved scales when a grid spacing less than or equal to 1 km is used. The overall results of the TOC simulations suggest that a 1-km grid spacing is a good choice for CRM simulation of shallow cumulus and stratocumulus.
Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Jennings, Esther
2013-01-01
In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
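A toy version of this top-down sizing, with invented traffic numbers in place of actual SCaN mission data: each data type must be delivered within its latency allowance, and a coarse link estimate is the sum of the implied drain rates:

    # (data type, volume per pass in megabits, allowed latency in seconds)
    traffic = [
        ("real-time ops",   50.0,      1.0),
        ("telemetry",      200.0,     60.0),
        ("science bulk", 50000.0,  86400.0),
    ]
    # Coarse, conservative estimate: sum of per-type required drain rates.
    required_mbps = sum(volume / latency for _, volume, latency in traffic)
    print(f"coarse WAN bandwidth estimate: {required_mbps:.1f} Mbps")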
EFT of large scale structures in redshift space
NASA Astrophysics Data System (ADS)
Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun
2018-03-01
We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach (or, equivalently, the precision for a given k) depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc⁻¹ or k ~ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
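Schematically, the one-loop redshift-space prediction carries a counterterm structure of the following form (the exact counterterm basis and normalizations vary between analyses and must be fitted to simulations, as noted above):

    P^s_{\rm EFT}(k,\mu) \simeq P^s_{11}(k,\mu) + P^s_{\text{1-loop}}(k,\mu)
        + \sum_{n} c_n\, \mu^{2n}\, k^2\, P_{11}(k),
    \qquad
    P_\ell(k) = \frac{2\ell+1}{2} \int_{-1}^{1} d\mu\; \mathcal{L}_\ell(\mu)\, P^s_{\rm EFT}(k,\mu),

where the \mathcal{L}_\ell are Legendre polynomials and the coefficients c_n absorb the small-scale physics (including, in the generalized treatment, baryonic effects).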
Space Electric Research Test in the Electric Propulsion Laboratory
1964-06-21
Technicians prepare the Space Electric Research Test (SERT-I) payload for a test in Tank Number 5 of the Electric Propulsion Laboratory at the National Aeronautics and Space Administration (NASA) Lewis Research Center. Lewis researchers had been studying different methods of electric rocket propulsion since the mid-1950s. Harold Kaufman created the first successful engine, the electron bombardment ion engine, in the early 1960s. These electric engines created and accelerated small particles of propellant material to high exhaust velocities. Electric engines have a very small amount of thrust, but once lofted into orbit by workhorse chemical rockets, they are capable of small, continuous thrust for periods up to several years. The electron bombardment thruster operated at a 90-percent efficiency during testing in the Electric Propulsion Laboratory. The package was rapidly rotated in a vacuum to simulate its behavior in space. The SERT-I mission, launched from Wallops Island, Virginia, was the first flight test of Kaufman’s ion engine. SERT-I had one cesium engine and one mercury engine. The suborbital flight was only 50 minutes in duration but proved that the ion engine could operate in space. The Electric Propulsion Laboratory included two large space simulation chambers, one of which is seen here. Each uses twenty 2.6-foot diameter diffusion pumps, blowers, and roughing pumps to remove the air inside the tank to create the thin atmosphere. A helium refrigeration system simulates the cold temperatures of space.
Analysis of the coherent and turbulent stresses of a numerically simulated rough wall pipe
NASA Astrophysics Data System (ADS)
Chan, L.; MacDonald, M.; Chung, D.; Hutchins, N.; Ooi, A.
2017-04-01
A turbulent rough-wall flow in a pipe is simulated using direct numerical simulation (DNS) where the roughness elements consist of explicitly gridded three-dimensional sinusoids. Two groups of simulations were conducted in which the roughness semi-amplitude h⁺ and the roughness wavelength λ⁺ are systematically varied. The triple decomposition is applied to the velocity to separate the coherent and turbulent components. The coherent or dispersive component arises due to the roughness and depends on the topological features of the surface. The turbulent stress, on the other hand, scales with the friction Reynolds number. For the case with the largest roughness wavelength, large secondary flows are observed which are similar to those of duct flows. The occurrence of these large secondary flows is due to the spanwise heterogeneity of the roughness, which has a spacing approximately equal to the boundary layer thickness δ.
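The triple decomposition used above can be sketched on synthetic data: the velocity is split into a plane average U, a time-mean spatial variation (the coherent or dispersive part tied to the roughness), and the remaining turbulent fluctuation:

    import numpy as np

    rng = np.random.default_rng(1)
    # u(t, x): 500 time samples on one wall-parallel line of 64 points (synthetic)
    u = (1.0 + 0.1 * np.sin(np.linspace(0.0, 2.0 * np.pi, 64))[None, :]
             + 0.05 * rng.standard_normal((500, 64)))

    u_bar = u.mean(axis=0)        # time average at each point
    U = u_bar.mean()              # spatial average of the time mean
    u_tilde = u_bar - U           # coherent/dispersive component
    u_prime = u - u_bar           # turbulent fluctuation

    print("dispersive stress:", (u_tilde ** 2).mean())
    print("turbulent stress:", (u_prime ** 2).mean())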
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K
2015-01-01
Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similarly to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space is discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
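A minimal stand-in for the metamodel idea, with a cheap analytic response in place of a CONVERGE cycle simulation and a radial-basis-function interpolant in place of the sparse-grid machinery (the response function and all names are illustrative assumptions):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def cycle_sim(params):
        """Toy stand-in for one expensive engine-cycle simulation."""
        egr, spark = params
        return np.exp(-4.0 * egr) * np.cos(spark) + 0.1 * egr * spark

    rng = np.random.default_rng(2)
    samples = rng.uniform(0.0, 1.0, size=(64, 2))          # sampled parameter space
    response = np.array([cycle_sim(p) for p in samples])   # mapped engine outputs

    metamodel = RBFInterpolator(samples, response)         # low-dimensional surrogate
    test_point = np.array([[0.3, 0.7]])
    print(metamodel(test_point)[0], cycle_sim(test_point[0]))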
Formation and evolution of galaxies: the role of their environment
NASA Astrophysics Data System (ADS)
Boselli, Alessandro
2016-08-01
New panoramic detectors on large telescopes, as well as the most capable space missions, have allowed us to complete large surveys of the Universe at different wavelengths and thus study the relationships between the different galaxy components at various epochs. At the same time, increasing computing power has allowed us to simulate the evolution of galaxies since their formation at a resolution never reached before. In this article I briefly describe how the comparison between the most recent observations and the predictions of models and simulations has changed our view of the process of galaxy formation and evolution.
Prospects and challenges of touchless electrostatic detumbling of small bodies
NASA Astrophysics Data System (ADS)
Bennett, Trevor; Stevenson, Daan; Hogan, Erik; Schaub, Hanspeter
2015-08-01
The prospects of touchlessly detumbling a small space object, multiple meters in size, using electrostatic forces are intriguing. Physically capturing an object with a large rotation rate poses significant momentum transfer and collision risks. If the spin rate is reduced to less than 1 deg/s, relative motion sensing and control associated with mechanical docking become manageable. In particular, this paper surveys the prospects and challenges of detumbling large debris objects near Geostationary Earth Orbit for active debris remediation, and investigates whether such electrostatic tractors are suitable for small asteroids being considered for asteroid retrieval missions. Active charge transfer is used to impart arresting electrostatic torques on such objects, given that they are sufficiently non-spherical. The concept of touchless electrostatic detumbling of space debris is outlined through analysis and experiments and is shown to hold great promise to arrest the rotation within days to weeks. However, even conservatively optimistic simulations of small asteroid detumbling scenarios indicate that such a method could take over a year to arrest the asteroid rotation. The numerical debris detumbling simulation includes a charge transfer model in a space environment, and illustrates how a conducting rocket body could be despun without physical contact.
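A back-of-envelope despin-time estimate, tau ≈ I·omega/T, with every value below an assumption chosen only to illustrate the days-to-weeks scale quoted for debris:

    import math

    m, r = 1000.0, 1.5           # debris mass (kg) and radius (m), assumed
    I = 0.4 * m * r ** 2         # moment of inertia of a solid sphere
    omega = math.radians(2.0)    # initial spin rate, 2 deg/s in rad/s
    T = 1e-4                     # assumed arresting electrostatic torque, N*m
    print(f"despin time ~ {I * omega / T / 86400.0:.1f} days")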
Categorical color constancy for simulated surfaces.
Olkkonen, Maria; Hansen, Thorsten; Gegenfurtner, Karl R
2009-11-12
Color constancy is the ability to perceive constant surface colors under varying lighting conditions. Color constancy has traditionally been investigated with asymmetric matching, where stimuli are matched over two different contexts, or with achromatic settings, where a stimulus is made to appear gray. These methods deliver accurate information on the transformations of single points of color space under illuminant changes, but can be cumbersome and unintuitive for observers. Color naming is a fast and intuitive alternative to matching, allowing data collection from a large portion of color space. We asked observers to name the colors of 469 Munsell surfaces with known reflectance spectra simulated under five different illuminants. Observers were generally as consistent in naming the colors of surfaces under different illuminants as they were naming the colors of the same surfaces over time. The transformations in category boundaries caused by illuminant changes were generally small and could be explained well with simple linear models. Finally, an analysis of the pattern of naming consistency across color space revealed that largely the same hues were named consistently across illuminants and across observers even after correcting for category size effects. This indicates a possible relationship between perceptual color constancy and the ability to consistently communicate colors.
NASA Plum Brook's B-2 Test Facility: Thermal Vacuum and Propellant Test Facility
NASA Technical Reports Server (NTRS)
Kudlac, Maureen T.; Weaver, Harold F.; Cmar, Mark D.
2012-01-01
The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) Plum Brook Station (PBS) Spacecraft Propulsion Research Facility, commonly referred to as B-2, is NASA's third largest thermal vacuum facility. It is the largest designed to store and transfer large quantities of liquid hydrogen and liquid oxygen, and is perfectly suited to support developmental testing of upper stage chemical propulsion systems as well as fully integrated stages. The facility is also capable of providing thermal-vacuum simulation services to support testing of large lightweight structures, Cryogenic Fluid Management (CFM) systems, electric propulsion test programs, and other In-Space propulsion programs. A recently completed integrated system test demonstrated the refurbished thermal vacuum capabilities of the facility. The test used the modernized data acquisition and control system to monitor the facility. The heat sink provided a uniform temperature environment of approximately 77 K. The modernized infrared lamp array produced a nominal heat flux of 1.4 kW/sq m. With the lamp array and heat sink operating simultaneously, the thermal systems produced a heat flux pattern simulating radiation to space on one surface and solar exposure on the other surface.
Formation of the Orientale lunar multiring basin.
Johnson, Brandon C; Blair, David M; Collins, Gareth S; Melosh, H Jay; Freed, Andrew M; Taylor, G Jeffrey; Head, James W; Wieczorek, Mark A; Andrews-Hanna, Jeffrey C; Nimmo, Francis; Keane, James T; Miljković, Katarina; Soderblom, Jason M; Zuber, Maria T
2016-10-28
Multiring basins, large impact craters characterized by multiple concentric topographic rings, dominate the stratigraphy, tectonics, and crustal structure of the Moon. Using a hydrocode, we simulated the formation of the Orientale multiring basin, producing a subsurface structure consistent with high-resolution gravity data from the Gravity Recovery and Interior Laboratory (GRAIL) spacecraft. The simulated impact produced a transient crater, ~390 kilometers in diameter, that was not maintained because of subsequent gravitational collapse. Our simulations indicate that the flow of warm weak material at depth was crucial to the formation of the basin's outer rings, which are large normal faults that formed at different times during the collapse stage. The key parameters controlling ring location and spacing are impactor diameter and lunar thermal gradients. Copyright © 2016, American Association for the Advancement of Science.
Relativistic interpretation of Newtonian simulations for cosmic structure formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidler, Christian; Tram, Thomas; Crittenden, Robert
2016-09-01
The standard numerical tools for studying non-linear collapse of matter are Newtonian N-body simulations. Previous work has shown that these simulations are in accordance with General Relativity (GR) up to first order in perturbation theory, provided that the effects from radiation can be neglected. In this paper we show that the present day matter density receives more than 1% corrections from radiation on large scales if Newtonian simulations are initialised before z = 50. We provide a relativistic framework in which unmodified Newtonian simulations are compatible with linear GR even in the presence of radiation. Our idea is to use GR perturbation theory to keep track of the evolution of relativistic species and the relativistic space-time consistent with the Newtonian trajectories computed in N-body simulations. If metric potentials are sufficiently small, they can be computed using a first-order Einstein-Boltzmann code such as CLASS. We make this idea rigorous by defining a class of GR gauges, the Newtonian motion gauges, which are defined such that matter particles follow Newtonian trajectories. We construct a simple example of a relativistic space-time within which unmodified Newtonian simulations can be interpreted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rai, Raj K.; Berg, Larry K.; Kosović, Branko
High resolution numerical simulation can provide insight into important physical processes that occur within the planetary boundary layer (PBL). The present work employs large eddy simulation (LES) using the Weather Research and Forecasting (WRF) model, with the LES domain nested within a mesoscale simulation, to simulate real conditions in the convective PBL over an area of complex terrain. A multiple nesting approach has been used to downsize the grid spacing from 12.15 km (mesoscale) to 0.03 km (LES). A careful selection of grid spacing in the WRF mesoscale domain has been conducted to minimize artifacts in the WRF-LES solutions. The WRF-LES results have been evaluated with in situ and remote sensing observations collected during the US Department of Energy-supported Columbia Basin Wind Energy Study (CBWES). Comparison of the first- and second-order moments, turbulence spectrum, and probability density function (PDF) of wind speed shows good agreement between the simulations and data. Furthermore, the WRF-LES variables show a great deal of variability in space and time caused by the complex topography in the LES domain. The WRF-LES results show that the flow structures, such as roll vortices and convective cells, vary depending on both the location and time of day. In addition to basic studies related to boundary-layer meteorology, results from these simulations can be used in other applications, such as studying wind energy resources, atmospheric dispersion, fire weather, etc.
Consequence modeling using the fire dynamics simulator.
Ryder, Noah L; Sutula, Jason A; Schemel, Christopher F; Hamer, Andrew J; Van Brunt, Vincent
2004-11-11
The use of Computational Fluid Dynamics (CFD) and in particular Large Eddy Simulation (LES) codes to model fires provides an efficient tool for the prediction of large-scale effects that include plume characteristics, combustion product dispersion, and heat effects to adjacent objects. This paper illustrates the strengths of the Fire Dynamics Simulator (FDS), an LES code developed by the National Institute of Standards and Technology (NIST), through several small- and large-scale validation runs and process safety applications. The paper presents two fire experiments: a small room fire and a large (15 m diameter) pool fire. The model results are compared to experimental data and demonstrate good agreement between the models and data. The validation work is then extended to demonstrate applicability to process safety concerns by detailing a model of a tank farm fire and a model of the ignition of a gaseous fuel in a confined space. In this simulation, a room was filled with propane, given time to disperse, and was then ignited. The model yields accurate results for the dispersion of the gas throughout the space. This information can be used to determine flammability and explosive limits in a space and can be used in subsequent models to determine the pressure and temperature waves that would result from an explosion. The model dispersion results were compared to an experiment performed by Factory Mutual. Using the above examples, this paper demonstrates that FDS is ideally suited to building realistic models of process geometries in which large-scale explosion and fire failure risks can be evaluated, with several distinct advantages over more traditional CFD codes. Namely, transient solutions to fire and explosion growth can be produced with less sophisticated, lower-cost hardware than needed for traditional CFD codes (a PC-type computer versus a UNIX workstation) and can be solved for longer time histories (on the order of hundreds of seconds of computed time) with minimal computer resources and model run time. Additionally, results can be analyzed, viewed, and tabulated during and following a model run within a PC environment. There are some tradeoffs, however, as rapid computations on PCs may require a sacrifice in the grid resolution or in the sub-grid modeling, depending on the size of the geometry modeled.
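One widely used rule of thumb when gridding such fire models is the characteristic fire diameter D*, with roughly 5-10 cells across D* for resolved plume dynamics; the fire size below is an assumed example, not one of the paper's cases:

    # D* = (Q / (rho * cp * T * sqrt(g)))**(2/5), the characteristic fire diameter
    Q = 5.0e6                                  # heat release rate, W (assumed 5 MW)
    rho, cp, T, g = 1.2, 1005.0, 293.0, 9.81   # ambient air properties (SI units)
    D_star = (Q / (rho * cp * T * g ** 0.5)) ** 0.4
    print(f"D* = {D_star:.2f} m; target cell size ~ {D_star / 10:.2f} m")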
Population Synthesis of Radio and γ-ray Millisecond Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Gonthier, Peter L.; Billman, C.; Harding, A. K.
2013-04-01
We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and γ-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of ten radio surveys and by Fermi, predicting the MSP birth rate in the Galaxy. We follow a similar set of assumptions that we have used in previous, more constrained Monte Carlo simulations. The parameters associated with the birth distributions such as those for the accretion rate, magnetic field and period distributions are also free to vary. With the large set of free parameters, we employ Markov Chain Monte Carlo simulations to explore the large and small worlds of the parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and γ-ray pulsar characteristics. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
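A bare-bones random-walk Metropolis-Hastings sketch of the kind of sampler used for such parameter-space exploration (the two-parameter Gaussian target is a toy, not the population-synthesis likelihood):

    import numpy as np

    def log_like(theta):
        # Toy log-likelihood: independent Gaussians of width 1.0 and 0.5
        return -0.5 * np.sum((theta / np.array([1.0, 0.5])) ** 2)

    rng = np.random.default_rng(3)
    theta, chain = np.zeros(2), []
    for _ in range(5000):
        proposal = theta + 0.3 * rng.standard_normal(2)   # random-walk step
        if np.log(rng.uniform()) < log_like(proposal) - log_like(theta):
            theta = proposal                              # accept the move
        chain.append(theta)
    print(np.mean(chain, axis=0), np.std(chain, axis=0))  # posterior mean/width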
Leake, S.A.; Lilly, M.R.
1995-01-01
The Fairbanks, Alaska, area has many contaminated sites in a shallow alluvial aquifer. A ground-water flow model is being developed using the MODFLOW finite-difference ground-water flow model program with the River Package. The modeled area is discretized in the horizontal dimensions into 118 rows and 158 columns of approximately 150-meter square cells. The fine grid spacing has the advantage of providing needed detail at the contaminated sites and surface-water features that bound the aquifer. However, the fine spacing of cells adds difficulty to simulating interaction between the aquifer and the large, braided Tanana River. In particular, the assignment of a river head is difficult if cells are much smaller than the river width. This was solved by developing a procedure for interpolating and extrapolating river head using a river distance function. Another problem is that future transient simulations would require excessive numbers of input records using the current version of the River Package. The proposed solution to this problem is to modify the River Package to linearly interpolate river head for time steps within each stress period, thereby reducing the number of stress periods required.
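The head-assignment idea can be sketched as follows, with invented stage and distance values; river head at each cell is interpolated along a distance-along-river coordinate rather than in map coordinates:

    import numpy as np

    # Known stages at gaging stations, indexed by distance along the river (assumed)
    station_dist = np.array([0.0, 4200.0, 9800.0])   # m along river
    station_head = np.array([132.5, 131.1, 129.4])   # stage, m

    # River distance assigned to each model cell intersecting the river (assumed)
    cell_dist = np.array([500.0, 2600.0, 7700.0, 9900.0])
    cell_head = np.interp(cell_dist, station_dist, station_head)
    # np.interp holds end values constant; cells beyond the last station would
    # need the linear extrapolation described in the abstract instead.
    print(cell_head)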
Space Construction Experiment Definition Study (SCEDS), part 2. Volume 2: Study results
NASA Technical Reports Server (NTRS)
1982-01-01
The Space Construction Experiment (SCE) was defined for integration into the Space Shuttle. This included development of flight assignment data, revision and update of preliminary mission timelines and test plans, analysis of flight safety issues, and definition of ground operations scenarios. New requirements for the flight experiment and changes for a large space antenna feed mask test article were incorporated. The program plan and cost estimates were updated. Revised SCE structural dynamics characteristics were provided for simulation and analysis of experimental tests to define and verify control limits and interactions effects between the SCE and the Orbiter digital automatic pilot.
Realtime Space Weather Forecasts Via Android Phone App
NASA Astrophysics Data System (ADS)
Crowley, G.; Haacke, B.; Reynolds, A.
2010-12-01
For the past several years, ASTRA has run a first-principles global 3-D fully coupled thermosphere-ionosphere model in real time for space weather applications. The model is the Thermosphere-Ionosphere-Mesosphere Electrodynamics General Circulation Model (TIMEGCM). ASTRA also runs the Assimilative Mapping of Ionospheric Electrodynamics (AMIE) in real time. Using AMIE to drive the high-latitude inputs to the TIMEGCM produces high-fidelity simulations of the global thermosphere and ionosphere. These simulations can be viewed on the Android phone app developed by ASTRA. The SpaceWeather app for the Android operating system is free and can be downloaded from the Google Marketplace. We present the current status of real-time thermosphere-ionosphere space-weather forecasting and discuss the way forward. We explore some of the issues in maintaining real-time simulations with assimilative data feeds in a quasi-operational setting. We also discuss some of the challenges of presenting large amounts of data on a smartphone. The ASTRA SpaceWeather app includes the broadest and most unique range of space weather data yet to be found on a single smartphone app. This is a one-stop shop for space weather and the only app where users can get access to ASTRA's real-time predictions of the global thermosphere and ionosphere, high-latitude convection, and geomagnetic activity. Because of the phone's GPS capability, users can obtain location-specific vertical profiles of electron density, temperature, and time histories of various parameters from the models. The SpaceWeather app has over 9000 downloads, 30 reviews, and a following of active users. It is clear that real-time space weather on smartphones is here to stay and must be included in planning for any transition to operational space-weather use.
Thermal design and simulation of an attitude-varied space camera
NASA Astrophysics Data System (ADS)
Wang, Chenjie; Yang, Wengang; Feng, Liangjie; Li, XuYang; Wang, Yinghao; Fan, Xuewu; Wen, Desheng
2015-10-01
An attitude-varied space camera changes attitude continually while it is working; attitude changes through large angles in a short time lead to significant changes in heat flux. Moreover, complicated internal heat sources, other payloads, and the satellite platform also introduce thermal coupling effects to the space camera. For a space camera located on a two-dimensional rotating platform, a detailed thermal design is accomplished by means of thermal isolation, thermal transmission, and temperature compensation. The extreme simulation cases for both high and low temperature are then chosen, considering the obscuration by the satellite platform and other payloads as well as the heat flux analysis of the light entrance and radiator surfaces of the camera. NEVADA and SINDA/G are used to establish the simulation model of the camera, and the analysis is carried out. The results indicate that, under both passive and active thermal control, the temperature of the optical components is 20 ± 1 °C, with both their radial and axial temperature gradients less than 0.3 °C, while the temperature of the main structural components is 20 ± 2 °C, and the temperature fluctuation of the focal plane assemblies is 3.0-9.5 °C. The simulation shows that the thermal control system can meet the needs of the mission, and that the thermal design is efficient and reasonable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varble, A. C.; Zipser, Edward J.; Fridlind, Ann
2014-12-27
Ten 3D cloud-resolving model (CRM) simulations and four 3D limited area model (LAM) simulations of an intense mesoscale convective system observed on January 23-24, 2006 during the Tropical Warm Pool - International Cloud Experiment (TWP-ICE) are compared with each other and with observed radar reflectivity fields and dual-Doppler retrievals of vertical wind speeds in an attempt to explain published results showing a high bias in simulated convective radar reflectivity aloft. This high bias results from large ice water content, which is a product of large, strong convective updrafts, although hydrometeor size distribution assumptions modulate the size of this bias. Snow reflectivity can exceed 40 dBZ in a two-moment scheme when a constant bulk density of 100 kg m⁻³ is used. Making snow mass more realistically proportional to area rather than volume should somewhat alleviate this problem. Graupel, unlike snow, produces reflectivity that is biased high in all simulations. This is associated with large amounts of liquid water above the freezing level in updraft cores. Peak vertical velocities in deep convective updrafts are greater than dual-Doppler retrieved values, especially in the upper troposphere. Freezing of large rainwater contents lofted above the freezing level in simulated updraft cores greatly contributes to these excessive upper tropospheric vertical velocities. Strong simulated updraft cores are nearly undiluted, with some showing supercell characteristics. Decreasing horizontal grid spacing from 900 meters to 100 meters weakens strong updrafts, but not enough to match observational retrievals. Therefore, overly intense simulated updrafts may partly be a product of interactions between convective dynamics, parameterized microphysics, and large-scale environmental biases that promote different convective modes and strengths than observed.
V-SUIT Model Validation Using PLSS 1.0 Test Results
NASA Technical Reports Server (NTRS)
Olthoff, Claas
2015-01-01
The dynamic portable life support system (PLSS) simulation software Virtual Space Suit (V-SUIT) has been under development at the Technische Universität München since 2011 as a spin-off from the Virtual Habitat (V-HAB) project. The MATLAB-based V-SUIT simulates space suit portable life support systems and their interaction with a detailed, dynamic human model, as well as the dynamic external environment of a space suit moving on a planetary surface. To demonstrate the feasibility of a large, system-level simulation like V-SUIT, a model of NASA's PLSS 1.0 prototype was created. This prototype was run through an extensive series of tests in 2011. Since the test setup was heavily instrumented, it produced a wealth of data, making it ideal for model validation. The implemented model includes all components of the PLSS in both the ventilation and thermal loops. The major components are modeled in greater detail, while smaller and ancillary components are low-fidelity black-box models. The major components include the Rapid Cycle Amine (RCA) CO2 removal system, the Primary and Secondary Oxygen Assembly (POS/SOA), the Pressure Garment System Volume Simulator (PGSVS), the Human Metabolic Simulator (HMS), the heat exchanger between the ventilation and thermal loops, the Space Suit Water Membrane Evaporator (SWME), and finally the Liquid Cooling Garment Simulator (LCGS). Using the created model, dynamic simulations were performed using the same test points used during PLSS 1.0 testing. The results of the simulation were then compared to the test data, with special focus on absolute values during the steady-state phases and dynamic behavior during the transitions between test points. Quantified simulation results are presented that demonstrate which areas of the V-SUIT model are in need of further refinement and which are sufficiently close to the test results. Finally, lessons learned from the modelling and validation process are given, in combination with implications for the future development of other PLSS models in V-SUIT.
Near-Earth asteroid satellite spins under spin-orbit coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naidu, Shantanu P.; Margot, Jean-Luc
We develop a fourth-order numerical integrator to simulate the coupled spin and orbital motions of two rigid bodies having arbitrary mass distributions under the influence of their mutual gravitational potential. We simulate the dynamics of components in well-characterized binary and triple near-Earth asteroid systems and use surface-of-section plots to map the possible spin configurations of the satellites. For asynchronous satellites, the analysis reveals large regions of phase space where the spin state of the satellite is chaotic. For synchronous satellites, we show that libration amplitudes can reach detectable values even for moderately elongated shapes. The presence of chaotic regions in the phase space has important consequences for the evolution of binary asteroids. It may substantially increase spin synchronization timescales, explain the observed fraction of asynchronous binaries, delay BYORP-type evolution, and extend the lifetime of binaries. The variations in spin rate due to large librations also affect the analysis and interpretation of light curve and radar observations.
Sensor-scheduling simulation of disparate sensors for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, I.
2011-09-01
The art and science of space situational awareness (SSA) has been practised and developed from the time of Sputnik. However, recent developments, such as the accelerating pace of satellite launches, the proliferation of launch-capable agencies, both commercial and sovereign, and recent well-publicised collisions involving man-made space objects, have further magnified the importance of timely and accurate SSA. The United States Strategic Command (USSTRATCOM) operates the Space Surveillance Network (SSN), a global network of sensors tasked with maintaining SSA. The rapidly increasing number of resident space objects will require commensurate improvements in the SSN. Sensors are scarce resources that must be scheduled judiciously to obtain measurements of maximum utility. Improvements in sensor scheduling and fusion can serve to reduce the number of additional sensors that may be required. Recently, Hill et al. [1] proposed and developed a simulation environment named TASMAN (Tasking Autonomous Sensors in a Multiple Application Network) to enable testing of alternative scheduling strategies within a simulated multi-sensor, multi-target environment. TASMAN simulates a high-fidelity, hardware-in-the-loop system by running multiple machines with different roles in parallel. At present, TASMAN is limited to simulations involving electro-optic sensors. Its high fidelity is at once a feature and a limitation, since supercomputing is required to run simulations of appreciable scale. In this paper, we describe an alternative, modular and scalable SSA simulation system that can extend the work of Hill et al. with reduced complexity, albeit also with reduced fidelity. The tool has been developed in MATLAB and can therefore be run on a very wide range of computing platforms. It can also make use of MATLAB's parallel processing capabilities to obtain considerable speed-up. The speed and flexibility so obtained can be used to quickly test scheduling algorithms even with a relatively large number of space objects. We further describe an application of the tool by exploring how the relative mixture of electro-optical and radar sensors impacts the scheduling, fusion, and achievable accuracy of an SSA system. By varying the mixture of sensor types, we are able to characterise the main advantages and disadvantages of each configuration.
SHIELDS Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordanova, Vania Koleva
Predicting variations in the near-Earth space environment that can lead to spacecraft damage and failure, i.e. "space weather", remains a major space physics challenge. A new capability was developed at Los Alamos National Laboratory (LANL) to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms: the SHIELDS framework. This framework simulates the dynamics of the Surface Charging Environment (SCE), the hot (keV) electrons representing the source and seed populations for the radiation belts, on both macro- and micro-scales. In addition to physics-based models (such as RAM-SCB, BATS-R-US, and iPIC3D), new data assimilation techniques employing data from LANL instruments on the Van Allen Probes and geosynchronous satellites were developed. An order-of-magnitude improvement in the accuracy of the simulated spacecraft surface charging environment was thus obtained. SHIELDS also includes a post-processing tool designed to calculate the surface charging for specific spacecraft geometries using the Curvilinear Particle-In-Cell (CPIC) code and to evaluate the relation of anomalies to SCE dynamics. Such diagnostics are critically important when performing forensic analyses of space-system failures.
NASA Technical Reports Server (NTRS)
Stone, N. H.; Samir, Uri
1986-01-01
Attempts to gain an understanding of spacecraft plasma dynamics via experimental investigation of the interaction between artificially synthesized, collisionless, flowing plasmas and laboratory test bodies date back to the early 1960's. In the past 25 years, a number of researchers have succeeded in simulating certain limited aspects of the complex spacecraft-space plasma interaction reasonably well. Theoretical treatments have also provided limited models of the phenomena. Several active experiments were recently conducted from the space shuttle that specifically attempted to observe the Orbiter-ionospheric interaction. These experiments have contributed greatly to an appreciation for the complexity of spacecraft-space plasma interaction but, so far, have answered few questions. Therefore, even though the plasma dynamics of hypersonic spacecraft is fundamental to space technology, it remains largely an open issue. A brief overview is provided of the primary results from previous ground-based experimental investigations and the preliminary results of investigations conducted on the STS-3 and Spacelab 2 missions. In addition, several, as yet unexplained, aspects of the spacecraft-space plasma interaction are suggested for future research.
A Tool for Parameter-space Explorations
NASA Astrophysics Data System (ADS)
Murase, Yohsuke; Uchitane, Takeshi; Ito, Nobuyasu
A software package for managing simulation jobs and results, named "OACIS", is presented. It controls large numbers of simulation jobs executed on various remote servers, keeps the results in an organized way, and manages the analyses performed on them. The software has a web-browser front end, and users can easily submit various jobs to appropriate remote hosts from a web browser. After these jobs finish, all result files are automatically downloaded from the computational hosts and stored in a traceable way, together with logs of the date, host, and elapsed time of each job. Some visualization functions are also provided so that users can easily grasp an overview of results distributed in a high-dimensional parameter space. OACIS is thus especially beneficial for complex simulation models with many parameters, for which extensive parameter searches are required. Using the OACIS API, it is easy to write code that automates parameter selection based on previous simulation results. A few examples of automated parameter selection are also demonstrated.
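The automated parameter selection described above amounts to a feedback loop: query stored results, pick the next candidate point, submit a new run. The Python sketch below illustrates that loop under a stand-in client; the class and method names (`FakeClient`, `submit`) are hypothetical placeholders, not the actual OACIS API.

```python
# Hypothetical stand-in for a remote-execution client; the real OACIS API differs.
class FakeClient:
    def submit(self, x):
        score = (x - 0.37) ** 2            # pretend simulation objective
        return {"x": x, "score": score}

def next_candidate(results):
    """Toy rule: step halfway from the best point toward its nearest neighbor."""
    best = min(results, key=lambda r: r["score"])
    others = [r for r in results if r is not best]
    nearest = min(others, key=lambda r: abs(r["x"] - best["x"]))
    return 0.5 * (best["x"] + nearest["x"])

def automated_search(client, n_iterations=20):
    results = [client.submit(x=x) for x in (0.0, 1.0)]   # seed runs
    for _ in range(n_iterations):
        results.append(client.submit(x=next_candidate(results)))
    return min(results, key=lambda r: r["score"])

print(automated_search(FakeClient()))
```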
Molecular Simulation Uncovers the Conformational Space of the λ Cro Dimer in Solution
Ahlstrom, Logan S.; Miyashita, Osamu
2011-01-01
The significant variation among solved structures of the λ Cro dimer suggests its flexibility. However, contacts in the crystal lattice could have stabilized a conformation which is unrepresentative of its dominant solution form. Here we report on the conformational space of the Cro dimer in solution using replica exchange molecular dynamics in explicit solvent. The simulated ensemble shows remarkable correlation with available x-ray structures. Network analysis and a free energy surface reveal the predominance of closed and semi-open dimers, with a modest barrier separating these two states. The fully open conformation lies higher in free energy, indicating that it requires stabilization by DNA or crystal contacts. Most NMR models are found to be unstable conformations in solution. Intersubunit salt bridging between Arg4 and Glu53 during simulation stabilizes closed conformations. Because a semi-open state is among the low-energy conformations sampled in simulation, we propose that Cro-DNA binding may not entail a large conformational change relative to the dominant dimer forms in solution. PMID:22098751
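Replica exchange molecular dynamics, the sampling method used above, runs copies of the system at different temperatures and periodically attempts to swap neighboring replicas with a Metropolis criterion. A minimal sketch of that acceptance rule alone (the MD engine itself is assumed elsewhere):

```python
import math, random

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def attempt_swap(energy_i, temp_i, energy_j, temp_j):
    """Metropolis acceptance test for exchanging two neighboring replicas.

    Accept with probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)]).
    """
    delta = (1.0 / (K_B * temp_i) - 1.0 / (K_B * temp_j)) * (energy_j - energy_i)
    return delta <= 0 or random.random() < math.exp(-delta)

# Example: a hot, high-energy replica and a cold, low-energy one rarely swap.
print(attempt_swap(energy_i=-120.0, temp_i=300.0, energy_j=-90.0, temp_j=350.0))
```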
A Data Management System for International Space Station Simulation Tools
NASA Technical Reports Server (NTRS)
Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)
2002-01-01
Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.
An optimal beam alignment method for large-scale distributed space surveillance radar system
NASA Astrophysics Data System (ADS)
Huang, Jian; Wang, Dongya; Xia, Shuangzhi
2018-06-01
Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally aligning the narrow Transmitting/Receiving (T/R) beams across such a large volume of space poses a special and considerable technical challenge in the space surveillance area. Based on the common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using direction angles, which can visually describe and assess the beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location, and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.
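The common coordinate frame underlying such beam alignment can be made concrete: each site's geodetic position is converted to Earth-centered Earth-fixed (ECEF) coordinates, after which line-of-sight pointing vectors toward a common beam-crossing point follow directly. A minimal sketch of that geometry layer, assuming WGS-84 (the paper's projection and optimization steps are not reproduced here):

```python
import numpy as np

def geodetic_to_ecef(lat_deg, lon_deg, h_m):
    """WGS-84 geodetic coordinates to Earth-centered Earth-fixed (meters)."""
    a, e2 = 6378137.0, 6.69437999014e-3          # semi-major axis, eccentricity^2
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = a / np.sqrt(1.0 - e2 * np.sin(lat) ** 2)  # prime-vertical radius
    return np.array([(n + h_m) * np.cos(lat) * np.cos(lon),
                     (n + h_m) * np.cos(lat) * np.sin(lon),
                     (n * (1.0 - e2) + h_m) * np.sin(lat)])

def pointing_vectors(site_tx, site_rx, target):
    """Unit line-of-sight vectors from each site toward a beam-crossing point."""
    u_tx = (target - site_tx) / np.linalg.norm(target - site_tx)
    u_rx = (target - site_rx) / np.linalg.norm(target - site_rx)
    return u_tx, u_rx

tx = geodetic_to_ecef(33.0, -106.5, 1200.0)       # illustrative site coordinates
rx = geodetic_to_ecef(31.0, -99.0, 500.0)
target = geodetic_to_ecef(32.0, -102.5, 900e3)    # point on a debris orbit
print(pointing_vectors(tx, rx, target))
```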
Emulation: A fast stochastic Bayesian method to eliminate model space
NASA Astrophysics Data System (ADS)
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been a goal of geophysicists ever since such datasets first started to be produced. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes, and more recently developed Bayesian search methods such as MCMC (Markov Chain Monte Carlo). However, both kinds of scheme have proved prohibitively expensive, in both computing power and time, due to the normally very large model space which needs to be searched using forward-model simulators that take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry, where it is used for history matching of hydrocarbon reservoirs. The method of emulation involves building a fast-to-compute, uncertainty-calibrated approximation to a forward-model simulator. We do this by modelling the output data from a number of forward simulator runs with a computationally cheap function, fitting the coefficients of this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use the emulator to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, the emulator can quickly show that only models which lie within 10% of that model space actually produce output data plausibly similar in character to an observed dataset. We can thus much more tightly constrain the input model space for a deterministic inversion or an MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) data describing the same region), we can include in our modelling the uncertainties in the data measurements and in the relationships between the various physical parameters involved, as well as the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties; the emulator is therefore also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that uncertainties from all sources (both data and model) can be fully evaluated.
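The screening step can be illustrated with a cheap polynomial surrogate and an implausibility cut: fit a fast function to a handful of simulator runs, calibrate its error against held-out runs, then reject regions of model space whose emulated output cannot plausibly match the observation. A minimal sketch, with the quadratic-plus-sine toy simulator, cubic basis, and 3-sigma cutoff as illustrative choices rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(m):                        # stand-in for an expensive forward model
    return np.sin(3 * m) + 0.5 * m ** 2

# 1. Train a cheap emulator on a few design points.
design = np.linspace(0.0, 2.0, 8)
coeffs = np.polyfit(design, simulator(design), deg=3)
emulate = lambda m: np.polyval(coeffs, m)

# 2. Calibrate the emulator's error on held-out simulator runs.
holdout = rng.uniform(0.0, 2.0, 50)
sigma_em = np.std(simulator(holdout) - emulate(holdout))

# 3. Screen a large model space against an observed datum.
y_obs, sigma_obs = 1.2, 0.05
models = rng.uniform(0.0, 2.0, 10000)
implausibility = np.abs(emulate(models) - y_obs) / np.hypot(sigma_em, sigma_obs)
plausible = models[implausibility < 3.0]
print(f"{plausible.size / models.size:.1%} of model space survives screening")
```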
Nonlinear simulations of Jupiter's 5-micron hot spots
NASA Technical Reports Server (NTRS)
Showman, A. P.; Dowling, T. E.
2000-01-01
Large-scale nonlinear simulations of Jupiter's 5-micron hot spots produce long-lived coherent structures that cause subsidence in local regions, explaining the low cloudiness and the dryness measured by the Galileo probe inside a hot spot. Like observed hot spots, the simulated coherent structures are equatorially confined, have periodic spacing, propagate west relative to the flow, are generally confined to one hemisphere, and have an anticyclonic gyre on their equatorward side. The southern edge of the simulated hot spots develops vertical shear of up to 70 meters per second in the eastward wind, which can explain the results of the Galileo probe Doppler wind experiment.
NASA Astrophysics Data System (ADS)
Staff, J. E.; Koning, N.; Ouyed, R.; Thompson, A.; Pudritz, R. E.
2015-02-01
We present the results of large-scale, three-dimensional magnetohydrodynamic simulations of disc winds for different initial magnetic field configurations. The jets are followed from the source to the 90 au scale, which covers several pixels of Hubble Space Telescope images of nearby protostellar jets. Our simulations show that jets are heated along their length by many shocks. We compute the emission lines that are produced and find excellent agreement with observations. The jet width is found to be between 20 and 30 au, while the maximum velocities perpendicular to the jet are found to be above 100 km s-1. The initially less open magnetic field configurations result in a wider, two-component jet: a cylindrically shaped outer jet surrounding a narrow and much faster inner jet. These simulations preserve the underlying Keplerian rotation profile of the inner jet to large distances from the source. However, for the initially most open magnetic field configuration, the kink mode creates a narrow corkscrew-like jet without a clear Keplerian rotation profile, and even regions where we observe rotation opposite to that of the disc (counter-rotating). The RW Aur jet is narrow, indicating that the disc field in that case is very open, meaning the jet can contain a counter-rotating component; we suggest this explains why observations of rotation in this jet have given confusing results. Thus magnetized disc winds from underlying Keplerian discs can develop rotation profiles far down the jet that are not Keplerian.
WRF nested large-eddy simulations of deep convection during SEAC4RS
NASA Astrophysics Data System (ADS)
Heath, Nicholas K.; Fuelberg, Henry E.; Tanelli, Simone; Turk, F. Joseph; Lawson, R. Paul; Woods, Sarah; Freeman, Sean
2017-04-01
Large-eddy simulations (LES) and observations are often combined to increase our understanding and improve the simulation of deep convection. This study evaluates a nested LES method that uses the Weather Research and Forecasting (WRF) model and, specifically, tests whether the nested LES approach is useful for studying deep convection during a real-world case. The method was applied on 2 September 2013, a day of continental convection that occurred during the Studies of Emissions and Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS) campaign. Mesoscale WRF output (1.35 km grid length) was used to drive a nested LES with 450 m grid spacing, which then drove a 150 m domain. Results reveal that the 450 m nested LES reasonably simulates observed reflectivity distributions and aircraft-observed in-cloud vertical velocities during the study period. However, when examining convective updrafts, reducing the grid spacing to 150 m worsened results. We find that the simulated updrafts in the 150 m run become too diluted by entrainment, thereby generating updrafts that are weaker than observed. Lastly, the 450 m simulation is combined with observations to study the processes forcing strong midlevel cloud/updraft edge downdrafts that were observed on 2 September. Results suggest that these strong downdrafts are forced by evaporative cooling due to mixing and by perturbation pressure forces acting to restore mass continuity around neighboring updrafts. We conclude that the WRF nested LES approach, with further development and evaluation, could potentially provide an effective method for studying deep convection in real-world cases.
Flux rope evolution in interplanetary coronal mass ejections: the 13 May 2005 event
NASA Astrophysics Data System (ADS)
Manchester, W. B., IV; van der Holst, B.; Lavraud, B.
2014-06-01
Coronal mass ejections (CMEs) are a dramatic manifestation of solar activity that release vast amounts of plasma into the heliosphere; they have many effects on the interplanetary medium and on planetary atmospheres, and are the major driver of space weather. CMEs occur with the formation and expulsion of large-scale magnetic flux ropes from the solar corona, which are routinely observed in interplanetary space. Simulating and predicting the structure and dynamics of these interplanetary CME magnetic fields are essential to the progress of heliospheric science and space weather prediction. We discuss the simulation of the 13 May 2005 CME event, in which we follow the propagation of a flux rope from the solar corona to beyond Earth orbit. In simulating this event, we find that the magnetic flux rope reconnects with the interplanetary magnetic field, evolving to an open configuration, and later reconnects to re-form a twisted structure sunward of the original rope. Observations of the 13 May 2005 CME magnetic field near Earth suggest that such a rearrangement of magnetic flux by reconnection may have occurred.
Simulating multiprimary LCDs on standard tri-stimulus LC displays
NASA Astrophysics Data System (ADS)
Lebowsky, Fritz; Vonneilich, Katrin; Bonse, Thomas
2008-01-01
Large-scale, direct-view TV screens, in particular those based on liquid crystal technology, are beginning to use subpixel structures with more than three subpixels to implement multi-primary displays with up to six primaries. Since their input color space is likely to remain tri-stimulus RGB, we first focus on some fundamental constraints. Among them, we elaborate simplified gamut-mapping architectures as well as color filter geometry, transparency, and chromaticity coordinates in color space. Based on a 'display centric' RGB color space tetrahedrization combined with linear interpolation, we describe a simulation framework which enables optimization for up to 7 primaries. We evaluated the performance by mapping the multi-primary design back onto an RGB LC display gamut without building a prototype multi-primary display. As long as we kept the RGB-equivalent output signal within the display gamut, we could analyze all desirable multi-primary configurations with regard to colorimetric variance and visually perceived quality. Not only does our simulation tool enable us to verify a novel concept, it also demonstrates how carefully one needs to design a multi-primary display for LCD TV applications.
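The tetrahedrization-plus-linear-interpolation idea reduces, inside any one tetrahedron of the RGB cube, to barycentric weighting of the multi-primary drive vectors stored at its four vertices. A minimal sketch of that interpolation step alone, assuming precomputed vertex drive values (the tetrahedrization and gamut-mapping logic of the paper is not reproduced):

```python
import numpy as np

def interp_in_tetrahedron(rgb, verts, drives):
    """Linearly interpolate multi-primary drive values at an RGB point.

    verts  : (4, 3) tetrahedron vertices in RGB space
    drives : (4, n_primaries) drive vector stored at each vertex
    """
    # Barycentric weights w solve [verts^T; 1 1 1 1] w = [rgb; 1].
    system = np.vstack([verts.T, np.ones(4)])
    w = np.linalg.solve(system, np.append(rgb, 1.0))
    if np.any(w < -1e-9):
        raise ValueError("point lies outside this tetrahedron")
    return w @ drives

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
drives = np.random.default_rng(0).uniform(0, 1, size=(4, 6))  # toy 6-primary data
print(interp_in_tetrahedron(np.array([0.2, 0.3, 0.1]), verts, drives))
```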
An FPGA computing demo core for space charge simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jinyuan; Huang, Yifei; /Fermilab
2009-01-01
In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated the non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of the FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
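The core's key trick, evaluating the inverse square-root cube (r²)^-3/2 of the Coulomb kernel via a table addressed by the leading non-zero bits, can be modeled in software. A rough sketch assuming a 10-bit mantissa index, with the exponent handled by a power-of-two scale; the details are illustrative, not the actual FPGA design:

```python
import numpy as np

TABLE_BITS = 10  # matches "nine to ten most significant non-zero bits"

# Tabulate x**-1.5 for normalized mantissas in [1, 2).
_mant = 1.0 + np.arange(2 ** TABLE_BITS) / 2 ** TABLE_BITS
TABLE = _mant ** -1.5

def inv_sqrt_cube(r2):
    """Approximate r2**-1.5 via exponent split + mantissa table lookup."""
    e = np.floor(np.log2(r2)).astype(int)          # leading-bit position
    mant = r2 / 2.0 ** e                           # normalized into [1, 2)
    idx = ((mant - 1.0) * 2 ** TABLE_BITS).astype(int)
    return TABLE[idx] * 2.0 ** (-1.5 * e)          # rescale by the exponent

r2 = np.array([0.25, 1.0, 7.5, 1000.0])
print(inv_sqrt_cube(r2))   # table-based approximation
print(r2 ** -1.5)          # exact values for comparison
```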
NASA Astrophysics Data System (ADS)
Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin
2018-01-01
Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100 000×. This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.
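Generative models of this kind map a latent noise vector (plus conditioning information such as the incident particle energy) to a calorimeter image whose pixels are energy deposits. A toy PyTorch generator conveys the structure; the layer sizes, 12x12 grid, and conditioning scheme below are illustrative assumptions, not the network from the paper:

```python
import torch
import torch.nn as nn

class ToyShowerGenerator(nn.Module):
    """Map latent noise + requested particle energy to a 12x12 'shower' image."""
    def __init__(self, latent_dim=64, grid=12):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, grid * grid), nn.ReLU(),  # energy deposits >= 0
        )

    def forward(self, z, energy):
        x = self.net(torch.cat([z, energy], dim=1))
        return x.view(-1, self.grid, self.grid)

gen = ToyShowerGenerator()
z = torch.randn(8, 64)                # batch of latent vectors
e = torch.rand(8, 1) * 100.0          # requested energies, arbitrary units
showers = gen(z, e)                   # (8, 12, 12) synthetic shower images
print(showers.shape)
```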
pypet: A Python Toolkit for Data Management of Parameter Explorations
Meyer, Robert; Obermayer, Klaus
2016-01-01
pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080
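A short example of the workflow the abstract describes, following the documented pypet pattern of adding parameters, exploring a Cartesian product, and storing results in HDF5 (written from memory of the pypet documentation, so treat the exact signatures as approximate):

```python
from pypet import Environment, cartesian_product

def run_sim(traj):
    # Each run sees one point of the explored parameter space.
    z = traj.x ** 2 + traj.y ** 2
    traj.f_add_result('z', z, comment='simulation output')

env = Environment(trajectory='example', filename='./results.hdf5')
traj = env.trajectory
traj.f_add_parameter('x', 1.0)
traj.f_add_parameter('y', 1.0)
traj.f_explore(cartesian_product({'x': [1.0, 2.0, 3.0], 'y': [0.5, 1.5]}))
env.run(run_sim)   # executes run_sim once per parameter combination (6 runs)
```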
Challenges and solutions for realistic room simulation
NASA Astrophysics Data System (ADS)
Begault, Durand R.
2002-05-01
Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.
2012-05-22
Dimension reduction is performed using the Rate-Controlled Constrained-Equilibrium (RCCE) method, and tabulation of the reduced space is performed using the In Situ Adaptive Tabulation (ISAT) algorithm. In addition, we use x2f_mpi – a Fortran library for parallel vector-valued function evaluation (used with ISAT in this context) – to efficiently redistribute the chemistry workload among the processors.
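The idea behind ISAT is to replace repeated expensive evaluations of the chemistry mapping with locally linear approximations stored in a growing table. A stripped-down sketch of that retrieve-or-add logic; real ISAT uses ellipsoids of accuracy and a binary search tree, whereas this illustration uses a plain list and a fixed trust radius:

```python
import numpy as np

class TinyISAT:
    """Toy in-situ adaptive tabulation: cache (x, f(x), Jacobian) entries."""
    def __init__(self, f, jac, radius=1e-2):
        self.f, self.jac, self.radius = f, jac, radius
        self.table = []                      # entries: (x0, f(x0), J(x0))

    def query(self, x):
        for x0, f0, j0 in self.table:        # retrieve: linear approx if close
            if np.linalg.norm(x - x0) < self.radius:
                return f0 + j0 @ (x - x0)
        f0, j0 = self.f(x), self.jac(x)      # add: direct (expensive) evaluation
        self.table.append((x.copy(), f0, j0))
        return f0

# Toy mapping with analytic Jacobian, standing in for a chemistry integration.
f = lambda x: np.array([np.sin(x[0]), x[0] * x[1]])
jac = lambda x: np.array([[np.cos(x[0]), 0.0], [x[1], x[0]]])
tab = TinyISAT(f, jac)
print(tab.query(np.array([0.5, 1.0])))   # added to table
print(tab.query(np.array([0.505, 1.0]))) # retrieved via linear approximation
```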
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
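The likelihood-ratio correction at the heart of importance sampling is easy to state concretely. The sketch below estimates a Gaussian tail probability P(X > 4), roughly 3.2e-5, by sampling from a shifted distribution and reweighting; it is a toy stand-in for the shielding problem, not the dissertation's adaptive algorithm:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
a, n = 4.0, 100_000

# Naive Monte Carlo: almost no samples land in the rare region.
naive = np.mean(rng.standard_normal(n) > a)

# Importance sampling: draw from N(a, 1) and reweight by the likelihood ratio
# (true density over sampling density) to keep the estimator unbiased.
x = rng.normal(loc=a, scale=1.0, size=n)
weights = norm.pdf(x) / norm.pdf(x, loc=a)
is_est = np.mean((x > a) * weights)

print(f"naive={naive:.2e}  importance={is_est:.2e}  exact={norm.sf(a):.2e}")
```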
25th Space Simulation Conference. Environmental Testing: The Earth-Space Connection
NASA Technical Reports Server (NTRS)
Packard, Edward
2008-01-01
Topics covered include: Methods of Helium Injection and Removal for Heat Transfer Augmentation; The ESA Large Space Simulator Mechanical Ground Support Equipment for Spacecraft Testing; Temperature Stability and Control Requirements for Thermal Vacuum/Thermal Balance Testing of the Aquarius Radiometer; The Liquid Nitrogen System for Chamber A: A Change from Original Forced Flow Design to a Natural Flow (Thermo Siphon) System; Return to Mercury: A Comparison of Solar Simulation and Flight Data for the MESSENGER Spacecraft; Floating Pressure Conversion and Equipment Upgrades of Two 3.5kw, 20k, Helium Refrigerators; Affect of Air Leakage into a Thermal-Vacuum Chamber on Helium Refrigeration Heat Load; Special ISO Class 6 Cleanroom for the Lunar Reconnaissance Orbiter (LRO) Project; A State-of-the-Art Contamination Effects Research and Test Facility Martian Dust Simulator; Cleanroom Design Practices and Their Influence on Particle Counts; Extra Terrestrial Environmental Chamber Design; Contamination Sources Effects Analysis (CSEA) - A Tool to Balance Cost/Schedule While Managing Facility Availability; SES and Acoustics at GSFC; HST Super Lightweight Interchangeable Carrier (SLIC) Static Test; Virtual Shaker Testing: Simulation Technology Improves Vibration Test Performance; Estimating Shock Spectra: Extensions beyond GEVS; Structural Dynamic Analysis of a Spacecraft Multi-DOF Shaker Table; Direct Field Acoustic Testing; Manufacture of Cryoshroud Surfaces for Space Simulation Chambers; The New LOTIS Test Facility; Thermal Vacuum Control Systems Options for Test Facilities; Extremely High Vacuum Chamber for Low Outgassing Processing at NASA Goddard; Precision Cleaning - Path to Premier; The New Anechoic Shielded Chambers Designed for Space and Commercial Applications at LIT; Extraction of Thermal Performance Values from Samples in the Lunar Dust Adhesion Bell Jar; Thermal (Silicon Diode) Data Acquisition System; Aquarius's Instrument Science Data System (ISDS) Automated to Acquire, Process, Trend Data and Produce Radiometric System Assessment Reports; Exhaustive Thresholds and Resistance Checkpoints; Reconfigurable HIL Testing of Earth Satellites; FPGA Control System for the Automated Test of MicroShutters; Ongoing Capabilities and Developments of Re-Entry Plasma Ground Tests at EADS-ASTRIUM; Operationally Responsive Space Standard Bus Battery Thermal Balance Testing and Heat Dissipation Analysis; Galileo - The Serial-Production AIT Challenge; The Space Systems Environmental Test Facility Database (SSETFD), Website Development Status; Simulated Reentry Heating by Torching; Micro-Vibration Measurements on Thermally Loaded Multi-Layer Insulation Samples in Vacuum; High Temperature Life Testing of 80Ni-20Cr Wire in a Simulated Mars Atmosphere for the Sample Analysis at Mars (SAM) Instrument Suit Gas Processing System (GPS) Carbon Dioxide Scrubber; The Planning and Implementation of Test Facility Improvements; and Development of a Silicon Carbide Molecular Beam Nozzle for Simulation Planetary Flybys and Low-Earth Orbit.
Gemini Simulator and Neil Armstrong
1963-11-06
Astronaut Neil Armstrong (left) was one of 14 astronauts, 8 NASA test pilots, and 2 McDonnell test pilots who took part in simulator studies. Armstrong was the first astronaut to participate (November 6, 1963). A.W. Vogeley described the simulator in his paper "Discussion of Existing and Planned Simulators For Space Research," "Many of the astronauts have flown this simulator in support of the Gemini studies and they, without exception, appreciated the realism of the visual scene. The simulator has also been used in the development of pilot techniques to handle certain jet malfunctions in order that aborts could be avoided. In these situations large attitude changes are sometimes necessary and the false motion cues that were generated due to earth gravity were somewhat objectionable; however, the pilots were readily able to overlook these false motion cues in favor of the visual realism." Roy F. Brissenden, noted in his paper "Initial Operations with Langley's Rendezvous Docking Facility," "The basic Gemini control studies developed the necessary techniques and demonstrated the ability of human pilots to perform final space docking with the specified Gemini-Agena systems using only visual references. ... Results... showed that trained astronauts can effect the docking with direct acceleration control and even with jet malfunctions as long as good visual conditions exist.... Probably more important than data results was the early confidence that the astronauts themselves gained in their ability to perform the maneuver in the ultimate flight mission." Francis B. Smith, noted in his paper "Simulators for Manned Space Research," "Some major areas of interest in these flights were fuel requirements, docking accuracies, the development of visual aids to assist alignment of the vehicles, and investigation of alternate control techniques with partial failure modes. However, the familiarization and confidence developed by the astronaut through flying and safely docking the simulator during these tests was one of the major contributions. For example, it was found that fuel used in docking from 200 feet typically dropped from about 20 pounds to 7 pounds after an astronaut had made a few training flights." -- Published in Barton C. Hacker and James M. Grimwood, On the Shoulders of Titans: A History of Project Gemini, NASA SP-4203; A.W. Vogeley, "Discussion of Existing and Planned Simulators For Space Research," Paper presented at the Conference on the Role of Simulation in Space Technology, August 17-21, 1964; Roy F. Brissenden, "Initial Operations with Langley's Rendezvous Docking Facility," Langley Working Paper, LWP-21, 1964; Francis B. Smith, "Simulators for Manned Space Research," Paper presented at the 1966 IEEE International convention, March 21-25, 1966.
Spacecraft Data Simulator for the test of level zero processing systems
NASA Technical Reports Server (NTRS)
Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem
1994-01-01
The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets of up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data Systems (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial for testing LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.
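Generating CCSDS-formatted test data of the kind the SDS produces starts with the 6-byte Space Packet primary header. A sketch of packing one packet in Python, with the field layout taken from the CCSDS Space Packet standard and an arbitrary payload standing in for test data:

```python
import struct

def ccsds_packet(apid, seq_count, payload, version=0, pkt_type=0, sec_hdr=0):
    """Build a CCSDS Space Packet: 6-byte primary header + payload."""
    # Word 1: version (3 bits) | type (1) | secondary-header flag (1) | APID (11)
    word1 = (version << 13) | (pkt_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
    # Word 2: sequence flags (2 bits, '11' = unsegmented) | sequence count (14)
    word2 = (0b11 << 14) | (seq_count & 0x3FFF)
    # Word 3: packet data length = payload bytes - 1
    word3 = len(payload) - 1
    return struct.pack(">HHH", word1, word2, word3) + payload

pkt = ccsds_packet(apid=0x1AB, seq_count=42, payload=b"\xde\xad\xbe\xef")
assert len(pkt) == 10   # 6-byte header + 4-byte payload
```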
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.; Ghosh, Dave; Kenny, Sean
1991-01-01
This paper presents results of analytic and simulation studies to determine the effectiveness of torque-wheel actuators in suppressing the vibrations of two-link telerobotic arms with attached payloads. The simulations use a planar generic model of a two-link arm with a torque wheel at the free end. Parameters of the arm model are selected to be representative of a large space-based robotic arm of the same class as the Space Shuttle Remote Manipulator, whereas parameters of the torque wheel are selected to be similar to those of the Mini-Mast facility at the Langley Research Center. Results show that this class of torque-wheel can produce an oscillation of 2.5 cm peak-to-peak in the end point of the arm and that the wheel produces significantly less overshoot when the arm is issued an abrupt stop command from the telerobotic input station.
liger: mock relativistic light cones from Newtonian simulations
NASA Astrophysics Data System (ADS)
Borzyszkowski, Mikolaj; Bertacca, Daniele; Porciani, Cristiano
2017-11-01
We introduce a method to create mock galaxy catalogues in redshift space including general relativistic effects to linear order in the cosmological perturbations. We dub our method liger, short for `light cones with general relativity'. liger takes a (N-body or hydrodynamic) Newtonian simulation as an input and outputs the distribution of galaxies in comoving redshift space. This result is achieved making use of a coordinate transformation and simultaneously accounting for lensing magnification. The calculation includes both local corrections and terms that have been integrated along the line of sight. Our fast implementation allows the production of many realizations that can be used to forecast the performance of forthcoming wide-angle surveys and to estimate the covariance matrix of the observables. To facilitate this use, we also present a variant of liger designed for large-volume simulations with low-mass resolution. In this case, the galaxy distribution on large scales is obtained by biasing the matter-density field. Finally, we present two sample applications of liger. First, we discuss the impact of weak gravitational lensing on the angular clustering of galaxies in a Euclid-like survey. In agreement with previous analytical studies, we find that magnification bias can be measured with high confidence. Secondly, we focus on two generally neglected Doppler-induced effects: magnification and the change of number counts with redshift. We show that the corresponding redshift-space distortions can be detected at 5.5σ significance with the completed Square Kilometre Array.
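The dominant coordinate shift in any real-to-redshift-space mapping is the line-of-sight displacement by the peculiar velocity. A minimal sketch of that piece alone, in the plane-parallel approximation (liger itself works on the full light cone and adds the lensing and relativistic terms the abstract describes):

```python
import numpy as np

def to_redshift_space(pos, vel, a, H, los=np.array([0.0, 0.0, 1.0])):
    """Shift comoving positions by the peculiar-velocity term v_los / (a H).

    pos : (N, 3) comoving positions [Mpc/h]
    vel : (N, 3) peculiar velocities [km/s]
    a   : scale factor;  H : Hubble rate at a [km/s per Mpc/h]
    """
    v_los = vel @ los                          # line-of-sight velocity component
    return pos + np.outer(v_los / (a * H), los)

rng = np.random.default_rng(3)
pos = rng.uniform(0, 1000, size=(5, 3))        # toy galaxy positions
vel = rng.normal(0, 300, size=(5, 3))          # toy peculiar velocities
print(to_redshift_space(pos, vel, a=1.0, H=100.0))
```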
Evolution of Flexible Multibody Dynamics for Simulation Applications Supporting Human Spaceflight
NASA Technical Reports Server (NTRS)
Huynh, An; Brain, Thomas A.; MacLean, John R.; Quiocho, Leslie J.
2016-01-01
During the course of the transition from the Space Shuttle and International Space Station programs to the Orion and Journey to Mars exploration programs, a generic flexible multibody dynamics formulation and associated software implementation has evolved to meet an ever-changing set of requirements at the NASA Johnson Space Center (JSC). Challenging problems related to large transitional topologies and robotic free-flyer vehicle capture/release, contact dynamics, and exploration mission concept evaluation through simulation (e.g., asteroid surface operations) have driven this continued development. Coupled with this need is the requirement to oftentimes support human spaceflight operations in real-time. Moreover, it has been desirable to allow even more rapid prototyping of on-orbit manipulator and spacecraft systems, to support less complex infrastructure software for massively integrated simulations, to yield further computational efficiencies, and to take advantage of recent advances and availability of multi-core computing platforms. Since engineering analysis, procedures development, and crew familiarity/training for human spaceflight is fundamental to JSC's charter, there is also a strong desire to share and reuse models in both the non-real-time and real-time domains, with the goal of retaining as much multibody dynamics fidelity as possible. Three specific enhancements are reviewed here: (1) linked-list organization to address large transitional topologies, (2) body-level model order reduction, and (3) parallel formulation/implementation. This paper provides a detailed overview of these primary updates to JSC's flexible multibody dynamics algorithms as well as a comparison of numerical results to previous formulations and associated software.
Dynamic Load Predictions for Launchers Using Extra-Large Eddy Simulations X-Les
NASA Astrophysics Data System (ADS)
Maseland, J. E. J.; Soemarwoto, B. I.; Kok, J. C.
2005-02-01
Flow-induced unsteady loads can have a strong impact on the performance and flight characteristics of aerospace vehicles and therefore play a crucial role in their design and operation. Complementary to costly flight tests and delicate wind-tunnel experiments, unsteady loads can be calculated using time-accurate Computational Fluid Dynamics. A capability to accurately predict the dynamic loads on aerospace structures at flight Reynolds numbers can be of great value for the design and analysis of aerospace vehicles. Advanced space launchers are subject to dynamic loads in the base region during the ascent to space. In particular, the engine and nozzle experience aerodynamic pressure fluctuations resulting from massive flow separations. Understanding these phenomena is essential for performance enhancements for future launchers which operate a larger nozzle. A new hybrid RANS-LES turbulence modelling approach, termed eXtra-Large Eddy Simulation (X-LES), holds the promise of capturing the flow structures associated with massive separations and enables the prediction of the broad-band spectrum of dynamic loads. This type of method, which reduces the cost of full LES, has become a focal point, driven by the demand for applicability in an industrial environment. The industrial feasibility of X-LES simulations is demonstrated by computing the unsteady aerodynamic loads on the main-engine nozzle of a generic space launcher configuration. The potential to calculate the dynamic loads is qualitatively assessed for transonic flow conditions in a comparison to wind-tunnel experiments. In terms of turnaround times, X-LES computations are already feasible within the time frames of the development process to support the structural design. Key words: massive separated flows; buffet loads; nozzle vibrations; space launchers; time-accurate CFD; composite RANS-LES formulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takemasa, Yuichi; Togari, Satoshi; Arai, Yoshinobu
1996-11-01
Vertical temperature differences tend to be great in a large indoor space such as an atrium, and it is important to predict variations in the vertical temperature distribution at an early stage of the design. The authors previously developed and reported on a new simplified unsteady-state calculation model for predicting vertical temperature distribution in a large space. In this paper, this model is applied to predicting the vertical temperature distribution in an existing low-rise atrium that has a skylight and is affected by transmitted solar radiation. Detailed calculation procedures that use the model are presented with all the boundary conditions, and analytical simulations are carried out for the cooling condition. Calculated values are compared with measured results. The results of the comparison demonstrate that the calculation model can be applied to the design of a large space. The effects of occupied-zone cooling are also discussed and compared with those of all-zone cooling.
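Layered-node models of this kind treat the air volume as a stack of horizontal zones, each with an energy balance coupled to its neighbors. A bare-bones sketch of that time stepping, with made-up coefficients purely for illustration; the published model additionally includes wall surfaces, solar gains, and supply-air terms omitted here:

```python
import numpy as np

def step_layers(T, q_gain, dt=60.0, c=1.2e5, k_mix=50.0):
    """One explicit time step of an N-layer vertical air-temperature model.

    T      : (N,) layer temperatures, bottom to top [degC]
    q_gain : (N,) net heat gain per layer [W] (solar, occupants, equipment, ...)
    c      : thermal capacity per layer [J/K];  k_mix : inter-layer coupling [W/K]
    """
    flux = k_mix * np.diff(T)     # exchange between adjacent layers, down-gradient
    dq = q_gain.astype(float).copy()
    dq[:-1] += flux               # heat gained from a warmer layer above
    dq[1:] -= flux                # ... and lost by that warmer layer
    return T + dt * dq / c

T = np.array([24.0, 26.0, 29.0, 33.0])        # stratified initial profile
q = np.array([500.0, 0.0, 0.0, 1500.0])       # gains at floor and skylight levels
for _ in range(60):                           # simulate one hour
    T = step_layers(T, q)
print(T)
```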
The MICE grand challenge lightcone simulation - I. Dark matter clustering
NASA Astrophysics Data System (ADS)
Fosalba, P.; Crocce, M.; Gaztañaga, E.; Castander, F. J.
2015-04-01
We present a new N-body simulation from the Marenostrum Institut de Ciències de l'Espai (MICE) collaboration, the MICE Grand Challenge (MICE-GC), containing about 70 billion dark matter particles in a (3 Gpc h-1)3 comoving volume. Given its large volume and fine spatial resolution, spanning over five orders of magnitude in dynamic range, it allows an accurate modelling of the growth of structure in the universe from the linear through the highly non-linear regime of gravitational clustering. We validate the dark matter simulation outputs using 3D and 2D clustering statistics, and discuss mass-resolution effects in the non-linear regime by comparing to previous simulations and the latest numerical fits. We show that the MICE-GC run allows for a measurement of the BAO feature with per cent level accuracy and compare it to state-of-the-art theoretical models. We also use sub-arcmin resolution pixelized 2D maps of the dark matter counts in the lightcone to make tomographic analyses in real and redshift space. Our analysis shows the simulation reproduces the Kaiser effect on large scales, whereas we find a significant suppression of power on non-linear scales relative to the real space clustering. We complete our validation by presenting an analysis of the three-point correlation function in this and previous MICE simulations, finding further evidence for mass-resolution effects. This is the first of a series of three papers in which we present the MICE-GC simulation, along with a wide and deep mock galaxy catalogue built from it. This mock is made publicly available through a dedicated web portal, http://cosmohub.pic.es.
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-05-01
Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
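The block decomposition idea, splitting the catalogue into batches so that one batch's data transfer can overlap another's computation, can be illustrated independently of CUDA. A NumPy sketch of the block-decomposed pairwise screening step; the paper's GPU code additionally overlaps these loops with asynchronous host-device copies, which plain NumPy cannot express:

```python
import numpy as np

def close_pairs(pos, threshold, block=256):
    """Find index pairs of objects closer than `threshold`, block by block."""
    hits = []
    n = len(pos)
    for i0 in range(0, n, block):
        a = pos[i0:i0 + block]
        for j0 in range(i0, n, block):               # upper-triangular blocks only
            b = pos[j0:j0 + block]
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            ii, jj = np.nonzero(d < threshold)
            # keep each unordered pair once and skip self-pairs
            hits += [(i0 + i, j0 + j) for i, j in zip(ii, jj) if i0 + i < j0 + j]
    return hits

pos = np.random.default_rng(2).uniform(0, 1000, size=(2000, 3))  # mock debris, km
print(len(close_pairs(pos, threshold=5.0)))
```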
NASA Astrophysics Data System (ADS)
TenBarge, J. M.; Shay, M. A.; Sharma, P.; Juno, J.; Haggerty, C. C.; Drake, J. F.; Bhattacharjee, A.; Hakim, A.
2017-12-01
Turbulence and magnetic reconnection are the primary mechanisms responsible for the conversion of stored magnetic energy into particle energy in many space and astrophysical plasmas. The Magnetospheric Multiscale mission (MMS) has given us unprecedented access to high-cadence particle and field data of turbulence and magnetic reconnection at Earth's magnetopause. The observations include large guide field reconnection events generated within the turbulent magnetopause. Motivated by these observations, we present a study of large guide field reconnection using the fully kinetic Eulerian Vlasov-Maxwell component of the Gkeyll simulation framework, and we also employ and compare with gyrokinetics to explore the asymptotically large guide field limit. In addition to studying the configuration-space dynamics, we leverage the recently developed field-particle correlations to diagnose the dominant sources of dissipation and compare the results of the field-particle correlation to other energy dissipation measures.
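Field-particle correlations diagnose energy transfer by correlating the electric field with the velocity-space gradient of the distribution function at a fixed point. A schematic Python version of the commonly used parallel-field form, C(v) proportional to -q (v²/2) ∂f/∂v ⟨E⟩ averaged over a sliding time window; the array shapes and window length are illustrative assumptions:

```python
import numpy as np

def field_particle_correlation(f_vt, E_t, v, q=-1.0, window=64):
    """C(v, t) = -q * v**2/2 * <df/dv * E>, window-averaged in time.

    f_vt : (Nv, Nt) distribution function at one spatial point
    E_t  : (Nt,) parallel electric field at the same point
    v    : (Nv,) velocity grid
    """
    dfdv = np.gradient(f_vt, v, axis=0)          # velocity-space gradient
    kernel = np.ones(window) / window            # sliding-window average
    corr = np.array([np.convolve(dfdv[i] * E_t, kernel, mode='same')
                     for i in range(len(v))])
    return -q * 0.5 * v[:, None] ** 2 * corr

v = np.linspace(-4, 4, 81)
t = np.linspace(0, 100, 512)
f_vt = np.exp(-v[:, None] ** 2 / 2) * (1 + 0.01 * np.sin(t)[None, :])
E_t = 0.1 * np.sin(t + 0.3)
print(field_particle_correlation(f_vt, E_t, v).shape)   # (81, 512)
```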
The Use of Decentralized Control in the Design of a Large Segmented Space Reflector
NASA Technical Reports Server (NTRS)
Ryaciotaki-Boussalis, Helen; Mirmirani, Maj; Rad, Khosrow; Morales, Mauricio; Velazquez, Efrain; Chassiakos, Anastasios; Luzardo, Jose-Alberto
1997-01-01
The 3-dimensional model for a segmented reflector telescope is developed using finite element techniques. The structure is decomposed into six subsystems. System control design using neural networks is performed. Performance evaluation is demonstrated via simulation using PRO-MATLAB and SIMULINK.
1987-09-01
Computation can be reduced substantially compared to using numerical methods to model interconnect parasitics, although some accuracy might be lost. Structures with the conductor widths and spacings listed in Table 2.1 have been employed for simulation. In the first set of simulations, planar dielectric interconnect structures were used. In the model, there are no restrictions on the number of dielectrics and conductors, nor on the shapes of the conductors and the dielectric interfaces.
Regional Simulations of Stratospheric Lofting of Smoke Plumes
NASA Astrophysics Data System (ADS)
Stenchikov, G. L.; Fromm, M.; Robock, A.
2006-12-01
The lifetime and spatial distribution of sooty aerosols from multiple fires that would cause major climate impact were debated in studies of climatic and environmental consequences of a nuclear war in the 1980s. The Kuwait oil fires in 1991 did not show a cumulative effect of multiple smoke plumes on large-scale circulation systems and smoke was mainly dispersed in the middle troposphere. However, recent observations show that smoke from large forest fires can be directly injected into the lower stratosphere by strong pyro-convective storms. Smoke plumes in the upper troposphere can be partially mixed into the lower stratosphere because of the same heating and lofting effect that was simulated in large-scale nuclear winter simulations with interactive aerosols. However nuclear winter simulations were conducted using climate models with grid spacing of more than 100 km, which do not account for the fine-scale dynamic processes. Therefore in this study we conduct fine-scale regional simulations of the aerosol plume using the Regional Atmospheric Modeling System (RAMS) mesoscale model which was modified to account for radiatively interactive tracers. To resolve fine-scale dynamic processes we use horizontal grid spacing of 25 km and 60 vertical layers, and initiate simulations with the NCEP reanalysis fields. We find that dense aerosol layers could be lofted from 1 to a few km per day, but this critically depends on the optical depth of aerosol layer, single scatter albedo, and how fast the plume is being diluted. Kuwaiti plumes from different small-area fires reached only 5-6 km altitude and were probably diffused and diluted in the lower and middle troposphere. A plume of 100 km spatial scale initially developed in the upper troposphere tends to penetrate into the stratosphere. Short-term cloud resolving simulations of such a plume show that aerosol heating intensifies small-scale motions that tend to mix smoke polluted air into the lower stratosphere. Regional simulations allow us to more accurately estimate the rate of lifting and spreading of aerosol clouds. But they do not reveal any dynamic processes that could prevent heating and lofting of absorbing aerosols.
Simulation of Combustion Systems with Realistic g-jitter
NASA Technical Reports Server (NTRS)
Mell, William E.; McGrattan, Kevin B.; Baum, Howard R.
2003-01-01
In this project a transient, fully three-dimensional computer simulation code was developed to simulate the effects of realistic g-jitter on a number of combustion systems. The code is capable of simulating flame spread on a solid and nonpremixed or premixed gaseous combustion in nonturbulent flow with simple combustion models. Simple combustion models were used to preserve computational efficiency, since this is meant to be an engineering code. The use of sophisticated turbulence models was also not pursued (a simple Smagorinsky-type model can be implemented if deemed appropriate) because, if flow velocities are large enough for turbulence to develop in a reduced-gravity combustion scenario, it is unlikely that g-jitter disturbances (in NASA's reduced-gravity facilities) will play an important role in the flame dynamics. Acceleration disturbances of realistic orientation, magnitude, and time dependence can easily be included in the simulation. The simulation algorithm was based on techniques used in an existing large eddy simulation code that has successfully simulated fire dynamics in complex domains. A series of simulations with measured and predicted acceleration disturbances on the International Space Station (ISS) is presented. The results of this series of simulations suggested that a passive isolation system and appropriate scheduling of crew activity would provide a sufficiently "quiet" acceleration environment for spherical diffusion flames.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, K.; Wilson, R.J.; Hemler, R.S.
1999-11-15
The large-scale circulation in the Geophysical Fluid Dynamics Laboratory SKYHI troposphere-stratosphere-mesosphere finite-difference general circulation model is examined as a function of vertical and horizontal resolution. The experiments examined include one with horizontal grid spacing of ~35 km and another with ~100 km horizontal grid spacing but very high vertical resolution (160 levels between the ground and about 85 km). The simulation of the middle-atmospheric zonal-mean winds and temperatures in the extratropics is found to be very sensitive to horizontal resolution. For example, in the early Southern Hemisphere winter the South Pole near 1 mb in the model is colder than observed, but the bias is reduced with improved horizontal resolution (from ~70 C in a version with ~300 km grid spacing to less than 10 C in the ~35 km version). The extratropical simulation is found to be only slightly affected by enhancements of the vertical resolution. By contrast, the tropical middle-atmospheric simulation is extremely dependent on the vertical resolution employed. With level spacing in the lower stratosphere of ~1.5 km, the lower-stratospheric zonal-mean zonal winds in the equatorial region are nearly constant in time. When the vertical resolution is doubled, the simulated stratospheric zonal winds exhibit a strong equatorially centered oscillation with downward propagation of the wind reversals and formation of strong vertical shear layers. This appears to be a spontaneous, internally generated oscillation and closely resembles the observed QBO in many respects, although the simulated oscillation has a period less than half that of the real QBO.
Hubble space telescope six-battery test bed
NASA Technical Reports Server (NTRS)
Pajak, J. A.; Bush, J. R., Jr.; Lanier, J. R., Jr.
1990-01-01
A test bed for a large space power system breadboard for the Hubble Space Telescope (HST) was designed and built to test the system under simulated orbital conditions. A discussion of the data acquisition and control subsystems designed to provide for continuous 24 hr per day operation and a general overview of the test bed is presented. The data acquisition and control subsystems provided the necessary monitoring and protection to assure safe shutdown with protection of test articles in case of loss of power or equipment failure over the life of the test (up to 5 years).
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost of estimating thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. Regions of poor configuration space overlap are detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water. We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states, and we use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
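The basis-function decomposition pairs naturally with MBAR as implemented in the pymbar package. The sketch below uses synthetic placeholder energies, and method names follow recent pymbar releases (older versions use camelCase equivalents); it shows how storing per-sample basis energies h_i(x_n) lets the reduced energy at any parameter vector be assembled by a dot product and handed to the estimator:

    import numpy as np
    from pymbar import MBAR  # multistate Bennett acceptance ratio estimator

    # U(x; lam) = sum_i lam_i * h_i(x): store h_i(x_n) once per sample, then
    # evaluate U at ANY parameter vector without further simulation.
    # All numerical values here are synthetic placeholders.
    rng = np.random.default_rng(1)
    n_basis, n_states, n_per_state = 3, 5, 200

    lam_sampled = rng.uniform(0.0, 1.0, size=(n_states, n_basis))  # sampled states
    h = rng.standard_normal((n_basis, n_states * n_per_state))     # h_i(x_n) table

    beta = 1.0
    N_k = np.full(n_states, n_per_state)
    u_kn = beta * lam_sampled @ h   # reduced energy of every sample in every state

    mbar = MBAR(u_kn, N_k)

    # Poor phase-space overlap shows up in the overlap matrix eigenvalues:
    overlap = mbar.compute_overlap()
    print(overlap["eigenvalues"][:3])

    # Free energy of an arbitrary unsampled parameter combination, relative
    # to sampled state 0, via perturbative reweighting:
    lam_new = np.array([0.3, 0.9, 0.1])
    u_ln = np.vstack([u_kn[0], beta * lam_new @ h])
    result = mbar.compute_perturbed_free_energies(u_ln)
    print(result["Delta_f"][0, 1])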
NASA Astrophysics Data System (ADS)
Zhao, Feng; Frietman, Edward E. E.; Han, Zhong; Chen, Ray T.
1999-04-01
A characteristic feature of a conventional von Neumann computer is that computing power is delivered by a single processing unit. Although increasing the clock frequency improves the performance of the computer, the switching speed of the semiconductor devices and the finite speed at which electrical signals propagate along the bus set the boundaries. Architectures containing large numbers of nodes can solve this performance dilemma, although the main obstacles in designing such systems are the difficulties of guaranteeing efficient communication among the nodes. Exchanging data becomes a real bottleneck should all nodes be connected by a shared resource. Only optics, due to its inherent parallelism, could remove that bottleneck. Here, we explore a multi-faceted free-space image distributor to be used in optical interconnects for massively parallel processing. In this paper, physical and optical models of the image distributor are developed, from the diffraction theory of light waves to optical simulations. The general features and performance of the image distributor are described, and a new structure for the image distributor and simulations of it are discussed. From digital simulation and experiment, it is found that the multi-faceted free-space image distributing technique is well suited to free-space optical interconnection in massively parallel processing and that the new multi-faceted free-space image distributor structure would perform better.
Gao, Yi Qin
2008-04-07
Here, we introduce a simple self-adaptive computational method to enhance the sampling in energy, configuration, and trajectory spaces. The method makes use of two strategies. It first uses a non-Boltzmann distribution method to enhance the sampling in the phase space, in particular, in the configuration space. The application of this method leads to a broad energy distribution in a large energy range and a quickly converged sampling of molecular configurations. In the second stage of simulations, the configuration space of the system is divided into a number of small regions according to preselected collective coordinates. An enhanced sampling of reactive transition paths is then performed in a self-adaptive fashion to accelerate kinetics calculations.
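One common way to realize such a non-Boltzmann first stage is an integrated-tempering-style effective potential that sums Boltzmann factors over a temperature ladder; whether this matches the paper's exact prescription is not guaranteed, so treat the one-dimensional Metropolis sketch below (double-well potential, uniform mixing weights) purely as an illustration of how a modified distribution broadens the sampled energy range:

    import numpy as np

    # Stage-one sketch: sample a non-Boltzmann distribution built from a sum
    # of Boltzmann factors over an inverse-temperature ladder. The double
    # well, the ladder, and the weights n_k are all illustrative.
    def U(x):
        return (x**2 - 1.0)**2            # double well, minima at x = -1, +1

    betas = np.linspace(0.3, 3.0, 8)      # ladder of inverse temperatures
    n_k = np.ones_like(betas)             # mixing weights (adaptive in practice)
    beta0 = 1.0                           # inverse temperature of interest

    def U_eff(x):
        # U_eff(x) = -(1/beta0) * ln( sum_k n_k * exp(-beta_k * U(x)) )
        return -np.log(np.sum(n_k * np.exp(-betas * U(x)))) / beta0

    rng = np.random.default_rng(2)
    x, samples = -1.0, []
    for _ in range(50000):                # plain Metropolis on U_eff
        x_new = x + 0.3 * rng.standard_normal()
        if rng.random() < np.exp(-beta0 * (U_eff(x_new) - U_eff(x))):
            x = x_new
        samples.append(x)

    # Both wells are now visited; canonical averages at beta0 are recovered
    # by reweighting each sample with exp(-beta0 * (U - U_eff)).
    s = np.array(samples)
    u_eff = np.array([U_eff(v) for v in s])
    wts = np.exp(-beta0 * (U(s) - u_eff))
    print("canonical <x^2> estimate:", np.sum(wts * s**2) / np.sum(wts))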
Research into command, control, and communications in space construction
NASA Technical Reports Server (NTRS)
Davis, Randal
1990-01-01
Coordinating and controlling large numbers of autonomous or semi-autonomous robot elements in a space construction activity will present problems that are very different from most command and control problems encountered in the space business. As part of our research into the feasibility of robot constructors in space, the CSC Operations Group is examining a variety of command, control, and communications (C3) issues. Two major questions being asked are: can we apply C3 techniques and technologies already developed for use in space; and are there suitable terrestrial solutions for extraterrestrial C3 problems? An overview of the control architectures, command strategies, and communications technologies that we are examining is provided and plans for simulations and demonstrations of our concepts are described.
Di Girolamo, Paolo; Behrendt, Andreas; Wulfmeyer, Volker
2018-04-02
The performance of a space-borne water vapour and temperature lidar exploiting the vibrational and pure rotational Raman techniques in the ultraviolet is simulated. This paper discusses simulations under a variety of environmental and climate scenarios. The simulations demonstrate the capability of Raman lidars deployed on board low-Earth-orbit satellites to provide global-scale water vapour mixing ratio and temperature measurements in the lower to middle troposphere, with accuracies exceeding most observational requirements for numerical weather prediction (NWP) and climate research applications. These performances are especially attractive for measurements in the lower troposphere, closing the most critical gaps in the current Earth observation system. In all climate zones, considering vertical and horizontal resolutions of 200 m and 50 km, respectively, the mean water vapour mixing ratio profiling precision from the surface up to an altitude of 4 km is simulated to be 10%, while the temperature profiling precision is simulated to be 0.40-0.75 K in the altitude interval up to 15 km. Performance in the presence of clouds is also simulated; measurements are found to be possible above and below cirrus clouds with an optical thickness of 0.3. This combination of accuracy and vertical resolution cannot be achieved with any other space-borne remote sensing technique and will provide a breakthrough in our knowledge of global and regional water and energy cycles, as well as in the quality of short- to medium-range weather forecasts. Besides providing a comprehensive set of simulations, this paper also provides insight into specific technological solutions proposed for the implementation of a space-borne Raman lidar system. These solutions draw on technological breakthroughs of the last decade in the design and development of specific lidar devices and sub-systems, primarily high-power, high-efficiency solid-state laser sources, low-weight large-aperture telescopes, and high-gain, high-quantum-efficiency detectors.
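The scaling behind such precision figures can be sanity-checked with a shot-noise estimate: for a photon-counting Raman channel, the relative error of a bin with N accumulated photons is roughly 1/sqrt(N), so horizontal averaging multiplies the photon budget. The numbers below are illustrative assumptions, not the instrument parameters of the paper:

    import numpy as np

    # Back-of-envelope shot-noise precision for one range bin of a Raman
    # channel. Every value here is an illustrative assumption.
    photons_per_shot_per_bin = 0.05   # detected Raman photons per 200 m bin
    shot_rate_hz = 100.0              # laser repetition rate
    ground_speed_m_s = 7000.0         # LEO ground-track speed
    horizontal_res_m = 50e3           # 50 km horizontal averaging

    shots = shot_rate_hz * horizontal_res_m / ground_speed_m_s
    N = photons_per_shot_per_bin * shots
    precision = 1.0 / np.sqrt(N)      # shot-noise-limited relative error
    print(f"accumulated photons: {N:.0f}, relative precision: {100 * precision:.1f}%")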
Large Survey of Neutron Spectrum Moments Due to ICF Drive Asymmetry
NASA Astrophysics Data System (ADS)
Field, J. E.; Munro, D.; Spears, B.; Peterson, J. L.; Brandon, S.; Gaffney, J. A.; Hammer, J.; Langer, S.; Nora, R. C.; Springer, P.; ICF Workflow Collaboration Collaboration
2016-10-01
We have recently completed the largest HYDRA simulation survey to date (~60,000 runs) of drive asymmetry on the new Trinity computer at LANL. The 2D simulations covered a large space of credible perturbations to the drive of ICF implosions on the NIF. Cumulants of the produced birth energy spectrum for DD and DT reaction neutrons were tallied using new methods. Comparison of the experimental spectra with our map of predicted spectra from simulation should provide a wealth of information about the burning plasma region. We report on our results, highlighting areas of agreement (and disagreement) with experimental spectra. We also identify features in the predicted spectra that might be amenable to measurement with improved diagnostics. Prepared by LLNL under Contract DE-AC52-07NA27344. IM release #: LLNL-PROC-697321.
Learning in the model space for cognitive fault diagnosis.
Chen, Huanhuan; Tino, Peter; Rodan, Ali; Yao, Xin
2014-01-01
The emergence of large sensor networks has facilitated the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or unformulated. In this paper, we develop an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a sliding window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using a one-class learning algorithm. The framework enables us to construct a fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network confirm the effectiveness of the proposed framework.
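A minimal version of the model-space idea can be sketched directly: fit a small dynamical model to each sliding window, then run a one-class learner on the fitted parameter vectors. The paper itself fits reservoir (echo state) models and a purpose-built model distance; the sketch below substitutes ordinary autoregressive coefficients and scikit-learn's OneClassSVM for illustration:

    import numpy as np
    from sklearn.svm import OneClassSVM

    def fit_ar(window, order=4):
        # Least-squares AR coefficients: x[t] ~ sum_i a_i * x[t-order+i]
        X = np.column_stack([window[i:len(window) - order + i] for i in range(order)])
        y = window[order:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    rng = np.random.default_rng(3)
    healthy = np.sin(0.2 * np.arange(4000)) + 0.1 * rng.standard_normal(4000)

    # Fit one small model per sliding window: these live in "model space".
    win, step = 200, 100
    models = np.array([fit_ar(healthy[s:s + win])
                       for s in range(0, len(healthy) - win, step)])

    # One-class learning on the healthy model cloud.
    detector = OneClassSVM(nu=0.05, gamma='scale').fit(models)

    # A "faulty" signal yields model parameters far from the healthy cloud:
    faulty = np.sin(0.35 * np.arange(400)) + 0.3 * rng.standard_normal(400)
    print(detector.predict(fit_ar(faulty[:win]).reshape(1, -1)))  # -1 = outlier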
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator, and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for much longer delay and causes smaller gain error at low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in prediction. Though, theoretically, the Sobiski/Cardullo predictor, a state space filter, can compensate for the longest delay with the least gain distortion of the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner, allowing it to accurately provide the desired amount of prediction while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Theoretical analyses of data from offline simulations with time delay compensation show that both novel predictors effectively suppress the large spikes caused by the McFarland compensator. The phase errors of the three predictors are not significant. The adaptive predictor yields greater gain errors than the McFarland predictor for short delays (96 and 138 ms), but shows smaller errors for long delays (186 and 282 ms); its advantage becomes more obvious for longer time delays. Conversely, the state space predictor results in substantially smaller gain error than the other two predictors for all four delay cases.
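The lead/lag trade-off described above is easy to reproduce numerically. This sketch (illustrative filter parameters, not those of the report) compares the phase lead a first-order lead filter buys against a 100 ms transport delay with the gain distortion it introduces:

    import numpy as np
    from scipy import signal

    # First-order lead filter H(s) = (1 + a*T*s) / (1 + T*s), a > 1: adds
    # phase lead (offsetting transport delay) at the cost of gain error.
    a, T = 3.0, 0.05                             # illustrative parameters
    lead = signal.TransferFunction([a * T, 1.0], [T, 1.0])

    w = np.logspace(-1, 2, 200)                  # rad/s
    w, mag, phase = signal.bode(lead, w)         # gain (dB), phase (deg)

    delay = 0.1                                  # 100 ms transport delay
    phase_delay = -np.degrees(w * delay)         # phase loss of a pure delay

    # Net phase after compensation; the residual gain error is the cost:
    for i in (50, 100, 150):
        print(f"w={w[i]:7.2f} rad/s  gain={mag[i]:5.1f} dB  "
              f"net phase={phase[i] + phase_delay[i]:7.1f} deg")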
Monte Carlo simulations for the space radiation superconducting shield project (SR2S).
Vuolo, M; Giraudo, M; Musenich, R; Calvelli, V; Ambroglini, F; Burger, W J; Battiston, R
2016-02-01
Astronauts on deep-space long-duration missions will be exposed for a long time to galactic cosmic rays (GCR) and solar particle events (SPE). The exposure to space radiation could lead to both acute and late effects in the crew members, and well-defined countermeasures do not exist today. The simplest solution, optimized passive shielding, is not able to reduce the dose deposited by GCRs below the current dose limits, so other solutions, such as active shielding employing superconducting magnetic fields, are under study. In the framework of the EU FP7 SR2S Project (Space Radiation Superconducting Shield), a toroidal magnetic system based on MgB2 superconductors has been analyzed through detailed Monte Carlo simulations using the Geant4 interface GRAS. Spacecraft and magnets were modeled together with a simplified mechanical structure supporting the coils. Radiation transport through magnetic fields and materials was simulated for a deep-space mission scenario, considering for the first time the effect of secondary particles produced in the passage of space radiation through the active shielding and spacecraft structures. When the structures supporting the active shielding systems and the habitat are modeled, the radiation protection efficiency of the magnetic field decreases severely compared to that reported in previous studies, in which only the magnetic field around the crew was modeled. This is due to the large production of secondary radiation taking place in the material surrounding the habitat.
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
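The detection model at the heart of such simulations is compact enough to sketch directly. Below, a half-normal detection function p(d) = p0 * exp(-d^2 / (2 sigma^2)) generates capture histories for simulated activity centers over a regular trap grid spaced at twice the spatial scale parameter; all numbers are illustrative rather than taken from the study:

    import numpy as np

    rng = np.random.default_rng(4)
    p0, sigma = 0.1, 2.0          # baseline detection, spatial scale (km)
    trap_spacing = 2 * sigma      # heuristic from the study: <= 2*sigma

    # 7x7 regular trap grid
    g = np.arange(7) * trap_spacing
    traps = np.array([(x, y) for x in g for y in g])

    # N activity centers scattered over a buffered study area
    N = 60
    lo, hi = -3 * sigma, g.max() + 3 * sigma
    centers = rng.uniform(lo, hi, size=(N, 2))

    # Capture histories over K occasions via the half-normal detection model
    K = 5
    d = np.linalg.norm(centers[:, None, :] - traps[None, :, :], axis=2)
    p = p0 * np.exp(-d**2 / (2 * sigma**2))
    captures = rng.random((N, traps.shape[0], K)) < p[:, :, None]

    n_detected = np.sum(captures.any(axis=(1, 2)))
    print(f"{n_detected} of {N} individuals detected at least once")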
Improvement of Automated POST Case Success Rate Using Support Vector Machines
NASA Technical Reports Server (NTRS)
Zwack, Mathew R.; Dees, Patrick D.
2017-01-01
During early conceptual design of complex systems, concept down-selection can have a large impact upon program life-cycle cost. Therefore, any concepts selected during early design will inherently commit program costs and affect the overall probability of program success. For this reason it is important to consider as large a design space as possible in order to better inform the down-selection process. For conceptual design of launch vehicles, trajectory analysis and optimization often presents the largest obstacle to evaluating large trade spaces. This is due to the sensitivity of the trajectory discipline to changes in all other aspects of the vehicle design. Small deltas in the performance of other subsystems can result in relatively large fluctuations in the ascent trajectory because the solution space is non-linear and multi-modal. In order to help capture large design spaces for new launch vehicles, the authors have performed previous work seeking to automate the execution of the industry-standard tool, Program to Optimize Simulated Trajectories (POST). This work initially focused on implementation of analyst heuristics to enable closure of cases in an automated fashion, with the goal of applying the concepts of design of experiments (DOE) and surrogate modeling to enable near-instantaneous throughput of vehicle cases [3]. As noted in [4], work was then completed to improve the DOE process by utilizing a graph-theory-based approach to connect similar design points.
The Design of an Adaptive Attitude Control System
1992-09-01
…spacecraft to reorient itself by rotating about the eigenaxis will be executing an optimal maneuver. [Ref. 9: pp. 375-376] … The program below will simulate the CER Control System for Large Angle (Slewing) Motion; the control law is a Quaternion Feedback Regulator. … Equipment/Retriever (CER) during autonomous attitude hold and large-angle (slewing) maneuvers. The CER is a proposed space robot that deploys from…
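Since the excerpt above is OCR-damaged, here is a self-contained sketch of the quaternion feedback regulator it refers to, in its standard form u = -Kp*q_e - Kd*w applied to a rigid-body slew; the inertia, gains, and 90-degree target are invented for illustration and are not the CER's parameters:

    import numpy as np

    # Minimal quaternion feedback regulator for a large-angle (slewing)
    # maneuver about the eigenaxis. All numerical values are illustrative.
    I = np.diag([1200.0, 900.0, 600.0])   # spacecraft inertia (kg m^2)
    Kp, Kd = 50.0, 400.0                  # attitude and rate gains

    def quat_mult(a, b):
        # Hamilton product, scalar-last convention
        av, aw = a[:3], a[3]
        bv, bw = b[:3], b[3]
        return np.append(aw * bv + bw * av + np.cross(av, bv), aw * bw - av @ bv)

    q = np.array([0.0, 0.0, 0.0, 1.0])    # current attitude quaternion
    w = np.zeros(3)                        # body rates (rad/s)
    q_t = np.array([np.sin(np.pi / 4), 0.0, 0.0, np.cos(np.pi / 4)])  # 90 deg about x
    q_t_conj = np.append(-q_t[:3], q_t[3])

    dt = 0.1
    for _ in range(3000):                  # 300 s slew, explicit Euler steps
        q_e = quat_mult(q, q_t_conj)       # error quaternion (target -> current)
        u = -Kp * np.sign(q_e[3]) * q_e[:3] - Kd * w        # control torque
        w_dot = np.linalg.solve(I, u - np.cross(w, I @ w))  # Euler's equations
        q_dot = 0.5 * quat_mult(q, np.append(w, 0.0))       # attitude kinematics
        w = w + dt * w_dot
        q = (q + dt * q_dot) / np.linalg.norm(q + dt * q_dot)

    q_e = quat_mult(q, q_t_conj)
    print(f"residual error: {2 * np.degrees(np.arccos(min(abs(q_e[3]), 1.0))):.3f} deg")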
Computational study of 3-D hot-spot initiation in shocked insensitive high-explosive
NASA Astrophysics Data System (ADS)
Najjar, F. M.; Howard, W. M.; Fried, L. E.; Manaa, M. R.; Nichols, A., III; Levesque, G.
2012-03-01
High-explosive (HE) material consists of large grains with micron-sized embedded impurities and pores. Under various mechanical or thermal insults, these pores collapse, generating high-temperature regions that lead to ignition. A hydrodynamic study has been performed to investigate the mechanisms of pore collapse and hot-spot initiation in TATB crystals, employing a multiphysics code, ALE3D, coupled to the chemistry module Cheetah. This computational study includes reactive dynamics. Two-dimensional, high-resolution, large-scale meso-scale simulations have been performed, and the parameter space is systematically studied by considering various shock strengths, pore diameters, and multiple pore configurations. Preliminary 3-D simulations are undertaken to quantify the 3-D dynamics.
Dynamics of flexible bodies in tree topology - A computer oriented approach
NASA Technical Reports Server (NTRS)
Singh, R. P.; Vandervoort, R. J.; Likins, P. W.
1984-01-01
An approach suited for automatic generation of the equations of motion for large mechanical systems (i.e., large space structures, mechanisms, robots, etc.) is presented. The system topology is restricted to a tree configuration. The tree is defined as an arbitrary set of rigid and flexible bodies connected by hinges characterizing relative translations and rotations of two adjoining bodies. The equations of motion are derived via Kane's method, and the resulting equation set is of minimum dimension. The dynamical equations are embedded in a computer program called TREETOPS, into which extensive control simulation capability is built. The simulation is driven by an interactive set-up program, resulting in an easy-to-use analysis tool.
Desktop Simulation: Towards a New Strategy for Arts Technology Education
ERIC Educational Resources Information Center
Eidsheim, Nina Sun
2009-01-01
For arts departments in many institutions, technology education entails prohibitive equipment costs, maintenance requirements and administrative demands. There are also inherent pedagogical challenges: for example, recording studio classes where, due to space and time constraints, only a few students in what might be a large class can properly…
Inviscid and Viscous CFD Analysis of Booster Separation for the Space Launch System Vehicle
NASA Technical Reports Server (NTRS)
Dalle, Derek J.; Rogers, Stuart E.; Chan, William M.; Lee, Henry C.
2016-01-01
This paper presents details of Computational Fluid Dynamics (CFD) simulations of the Space Launch System during solid-rocket booster separation using the Cart3D inviscid and Overflow viscous CFD codes. The discussion addresses the use of multiple data sources of computational aerodynamics, experimental aerodynamics, and trajectory simulations for this critical phase of flight. Comparisons are shown between Cart3D simulations and a wind tunnel test performed at NASA Langley Research Center's Unitary Plan Wind Tunnel, and further comparisons are shown between Cart3D and viscous Overflow solutions for the flight vehicle. The Space Launch System (SLS) is a new exploration-class launch vehicle currently in development that includes two Solid Rocket Boosters (SRBs) modified from Space Shuttle hardware. These SRBs must separate from the SLS core during a phase of flight where aerodynamic loads are nontrivial. The main challenges in creating a separation aerodynamic database are the large number of independent variables (including orientation of the core, relative position and orientation of the boosters, and rocket thrust levels) and the complex flow caused by exhaust plumes of the booster separation motors (BSMs), which are small rockets designed to push the boosters away from the core by firing partially in the direction opposite to the motion of the vehicle.
James Webb Space Telescope Optical Simulation Testbed: Segmented Mirror Phase Retrieval Testing
NASA Astrophysics Data System (ADS)
Laginja, Iva; Egron, Sylvain; Brady, Greg; Soummer, Remi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Mazoyer, Johan; N’Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand
2018-01-01
The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a hardware simulator designed to produce JWST-like images. A model of the JWST three-mirror anastigmat is realized with three lenses in the form of a Cooke triplet, which provides JWST-like optical quality over a field equivalent to a NIRCam module, and an Iris AO segmented mirror with hexagonal elements stands in for the JWST segmented primary. This setup successfully produces images extremely similar to NIRCam images from cryotesting in terms of PSF morphology and sampling relative to the diffraction limit. The testbed is used for staff training of the wavefront sensing and control (WFS&C) team and for independent analysis of WFS&C scenarios of the JWST. Algorithms like geometric phase retrieval (GPR) that may be used in flight, as well as potential upgrades to JWST WFS&C, will be explored. We report on the current status of the testbed after alignment, implementation of the segmented mirror, and testing of phase retrieval techniques. This optical bench complements other work at the Makidon laboratory at the Space Telescope Science Institute, including the investigation of coronagraphy for segmented-aperture telescopes. Beyond JWST, we intend to use JOST for WFS&C studies for future large segmented space telescopes such as LUVOIR.
NASA Technical Reports Server (NTRS)
Webster, W., Jr.; Frawley, J. J.; Stefanik, M.
1984-01-01
Simulation studies established that the main (core), crustal, and electrojet components of the Earth's magnetic field can be observed with greater resolution, or over a longer time base, than is presently possible by using the capabilities provided by the space station. Two systems are studied. The first, a long-lifetime magnetic monitor, would observe the main field and its time variation. The second, a remotely piloted magnetic probe, would observe the crustal field at low altitude and the electrojet field in situ. The system design and the scientific performance of these systems are assessed, and the advantages of the space station are reviewed.
Orbital construction support equipment - Manned remote work station
NASA Technical Reports Server (NTRS)
Nassiff, S. H.
1978-01-01
The Manned Remote Work Station (MRWS) is a versatile piece of orbital construction support equipment which can support in-space construction in various modes of operation. Proposed near-term Space Shuttle mission support and future large orbiting systems support, along with the various construction modes of MRWS operation, are discussed. Preliminary flight subsystems requirements and configuration design are presented. Integration of the MRWS development test article with the JSC Mockup and Integration Facility, including ground-test objectives and techniques for zero-g simulations, is also presented.
A radiant heating test facility for space shuttle orbiter thermal protection system certification
NASA Technical Reports Server (NTRS)
Sherborne, W. D.; Milhoan, J. D.
1980-01-01
A large-scale radiant heating test facility was constructed so that thermal certification tests can be performed on the new generation of thermal protection systems developed for the space shuttle orbiter. This facility simulates surface thermal gradients, on-orbit cold-soak temperatures down to 200 K, entry heating temperatures up to 1710 K in an oxidizing environment, and the dynamic entry pressure environment. The capabilities of the facility and the development of new test equipment are presented.
Large eddy simulation study of spanwise spacing effects on secondary flows in turbulent channel flow
NASA Astrophysics Data System (ADS)
Aliakbarimiyanmahaleh, Mohammad; Anderson, William
2015-11-01
The structure of turbulent flow over a complex topography composed of streamwise-aligned rows of cones with varying spanwise spacing s is studied with large-eddy simulation (LES). Similar to the experimental study of Vanderwel and Ganapathisubramani (2015: J. Fluid Mech.), we investigate the relationship between secondary flow and s for 0.25 <= s/δ <= 5. For cases with s/δ > 2, domain-scale rollers freely exist. These had previously been called "turbulent secondary flows" (Willingham et al., 2014: Phys. Fluids; Barros and Christensen, 2014: J. Fluid Mech.; Anderson et al., 2015: J. Fluid Mech.), but closer inspection of the statistics indicates they are a turbulent tertiary flow: they remain "anchored" to the conical roughness elements only for s/δ > 2. For s/δ < 2, turbulent tertiary flows are prevented from occupying the domain by proximity to adjacent, counter-rotating tertiary flows. Turbulent secondary flows, by contrast, are associated with the conical roughness elements themselves: they emanate from individual elements, set the roughness sublayer depth, and remain intact for both large and small spacing. For s/δ < 1, a mean tertiary flow is not present. This work was supported by the Air Force Office of Scientific Research, Young Investigator Program (PM: Dr. R. Ponnoppan and Ms. E. Montomery) under Grant # FA9550-14-1-0394. Computational resources were provided by the Texas Advanced Computing Center at the Univ. of Texas.
NASA Astrophysics Data System (ADS)
Huang, Z.; Jia, X.; Rubin, M.; Fougere, N.; Gombosi, T. I.; Tenishev, V.; Combi, M. R.; Bieler, A. M.; Toth, G.; Hansen, K. C.; Shou, Y.
2014-12-01
We study the plasma environment of the comet Churyumov-Gerasimenko, which is the target of the Rosetta mission, by performing large scale numerical simulations. Our model is based on BATS-R-US within the Space Weather Modeling Framework that solves the governing multifluid MHD equations, which describe the behavior of the cometary heavy ions, the solar wind protons, and electrons. The model includes various mass loading processes, including ionization, charge exchange, dissociative ion-electron recombination, as well as collisional interactions between different fluids. The neutral background used in our MHD simulations is provided by a kinetic Direct Simulation Monte Carlo (DSMC) model. We will simulate how the cometary plasma environment changes at different heliocentric distances.
Automated Knowledge Discovery from Simulators
NASA Technical Reports Server (NTRS)
Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.
2006-01-01
In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.
NASA Astrophysics Data System (ADS)
Carlotti, Marco; Kovalchuk, Andrii; Wächter, Tobias; Qiu, Xinkai; Zharnikov, Michael; Chiechi, Ryan C.
2016-12-01
Tunnelling currents through tunnelling junctions comprising molecules with cross-conjugation are markedly lower than for their linearly conjugated analogues. This effect has been shown experimentally and theoretically to arise from destructive quantum interference, which is understood to be an intrinsic, electronic property of molecules. Here we show experimental evidence of conformation-driven interference effects by examining through-space conjugation in which π-conjugated fragments are arranged face-on or edge-on in sufficiently close proximity to interact through space. Observing these effects in the latter requires trapping molecules in a non-equilibrium conformation closely resembling the X-ray crystal structure, which we accomplish using self-assembled monolayers to construct bottom-up, large-area tunnelling junctions. In contrast, interference effects are completely absent in zero-bias simulations on the equilibrium, gas-phase conformation, establishing through-space conjugation as both of fundamental interest and as a potential tool for tuning tunnelling charge-transport in large-area, solid-state molecular-electronic devices.
Space Propulsion Research Facility (B-2): An Innovative, Multi-Purpose Test Facility
NASA Technical Reports Server (NTRS)
Hill, Gerald M.; Weaver, Harold F.; Kudlac, Maureen T.; Maloney, Christian T.; Evans, Richard K.
2011-01-01
The Space Propulsion Research Facility, commonly referred to as B-2, is designed to hot-fire rocket engines or upper-stage launch vehicles with up to 890,000 N (200,000 lbf) of force, after environmental conditioning of the test article in a simulated thermal-vacuum space environment. As NASA's third-largest thermal vacuum facility, and the largest designed to store and transfer large quantities of propellant, it is uniquely suited to support developmental testing associated with large lightweight structures and Cryogenic Fluid Management (CFM) systems, as well as non-traditional propulsion test programs such as Electric and In-Space propulsion. B-2 has undergone refurbishment of key subsystems to support NASA's future test needs, including data acquisition and controls, vacuum, and propellant systems. This paper details the modernization efforts at B-2 to support the Nation's thermal-vacuum/propellant test capabilities and the unique design considerations implemented for efficient operations and maintenance and, ultimately, reduced test costs.
Magnetosphere Modeling: From Cartoons to Simulations
NASA Astrophysics Data System (ADS)
Gombosi, T. I.
2017-12-01
Over the last half century, physics-based global computer simulations have become a bridge between experiment and basic theory, and they now represent the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age, current system models were developed, culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to be possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations, and this technique is expected to lead to some important advances in our understanding of magnetospheric physics. This talk will review the evolution of magnetosphere modeling from cartoons to current systems, to global MHD, to MHD-PIC, and discuss the role of state-of-the-art models in forecasting space weather.
Using parallel computing for the display and simulation of the space debris environment
NASA Astrophysics Data System (ADS)
Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.
2011-07-01
Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
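The per-object independence that makes this problem GPU-friendly can be imitated on a CPU with numpy's data parallelism. The sketch below (a simple two-body Kepler propagator with invented orbital elements, standing in for the analytical propagator and tracked-object catalog described above) advances an entire population simultaneously with vectorized operations:

    import numpy as np

    MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

    # Invented population of near-circular LEO objects; a real tool would
    # ingest a tracked-object catalog and use an SGP4-class propagator.
    rng = np.random.default_rng(5)
    n_obj = 100_000
    a = rng.uniform(6800.0, 7800.0, n_obj)    # semi-major axes (km)
    e = rng.uniform(0.0, 0.02, n_obj)          # eccentricities
    M = rng.uniform(0.0, 2 * np.pi, n_obj)     # mean anomalies (rad)

    n_motion = np.sqrt(MU / a**3)              # mean motions (rad/s)

    def kepler_E(M, e, iters=8):
        # Solve Kepler's equation E - e*sin(E) = M by Newton iteration,
        # vectorized over every object simultaneously.
        E = M.copy()
        for _ in range(iters):
            E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        return E

    dt = 60.0                                  # one-minute step
    for _ in range(10):                        # propagate 10 minutes
        M = (M + n_motion * dt) % (2 * np.pi)
        E = kepler_E(M, e)
        r = a * (1.0 - e * np.cos(E))          # radius of every object (km)

    print(r[:3])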
Development of a verification program for deployable truss advanced technology
NASA Technical Reports Server (NTRS)
Dyer, Jack E.
1988-01-01
Use of large deployable space structures to satisfy the growth demands of space systems is contingent upon reducing the associated risks that pervade many related technical disciplines. The overall objectives of this program were to develop a detailed plan to verify deployable truss advanced technology applicable to future large space structures and to develop a preliminary design of a deployable truss reflector/beam structure for use as a technology demonstration test article. The planning is based on a Shuttle flight experiment program using deployable 5- and 15-meter-aperture tetrahedral truss reflectors and a 20-m-long deployable truss beam structure. The plan addresses validation of analytical methods, the degree to which ground testing adequately simulates flight, and in-space testing requirements for large precision antenna designs. Based on an assessment of future NASA and DOD space system requirements, the program was developed to verify four critical technology areas: deployment, shape accuracy and control, pointing and alignment, and articulation and maneuvers. The flight experiment technology verification objectives can be met using two Shuttle flights, with the total experiment integrated on a single Shuttle Test Experiment Platform (STEP) and a Mission Peculiar Experiment Support Structure (MPESS). First flight of the experiment can be achieved 60 months after go-ahead, with a total program duration of 90 months.
Neutral Buoyancy Test NB-14 Large Space Structure Assembly
NASA Technical Reports Server (NTRS)
1977-01-01
Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut, and construction methods had to be efficient due to the limited time astronauts could remain outside their controlled environment. To meet the specific needs of this project, an environment on Earth had to be developed that could simulate a low-gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and to gather information about the kinds of structures that can be built there. Pictured is an experiment in which the astronaut was required to move a large object weighing 19,000 pounds. It was moved with relative ease once the astronaut became familiar with his environment and his near-weightless condition. Experiments of this nature provided scientists with the information needed regarding the weight and mass allowances astronauts could manage, in preparation for building a permanent space station in the future.
Koski, Jason P; Riggleman, Robert A
2017-04-28
Block copolymers, due to their ability to self-assemble into periodic structures with long range order, are appealing candidates to control the ordering of functionalized nanoparticles where it is well-accepted that the spatial distribution of nanoparticles in a polymer matrix dictates the resulting material properties. The large parameter space associated with block copolymer nanocomposites makes theory and simulation tools appealing to guide experiments and effectively isolate parameters of interest. We demonstrate a method for performing field-theoretic simulations in a constant volume-constant interfacial tension ensemble (nVγT) that enables the determination of the equilibrium properties of block copolymer nanocomposites, including when the composites are placed under tensile or compressive loads. Our approach is compatible with the complex Langevin simulation framework, which allows us to go beyond the mean-field approximation. We validate our approach by comparing our nVγT approach with free energy calculations to determine the ideal domain spacing and modulus of a symmetric block copolymer melt. We analyze the effect of numerical and thermodynamic parameters on the efficiency of the nVγT ensemble and subsequently use our method to investigate the ideal domain spacing, modulus, and nanoparticle distribution of a lamellar forming block copolymer nanocomposite. We find that the nanoparticle distribution is directly linked to the resultant domain spacing and is dependent on polymer chain density, nanoparticle size, and nanoparticle chemistry. Furthermore, placing the system under tension or compression can qualitatively alter the nanoparticle distribution within the block copolymer.
System dynamics and simulation of LSS
NASA Technical Reports Server (NTRS)
Ryan, R. F.
1978-01-01
Large Space Structures have many unique problems arising from mission objectives and the resulting configuration. Inherent in these configurations is a strong coupling among several of the designing disciplines. In particular, the coupling between structural dynamics and control is a key design consideration. The solution to these interactive problems requires efficient and accurate analysis, simulation and test techniques, and properly planned and conducted design trade studies. The discussion presented deals with these subjects and concludes with a brief look at some NASA capabilities which can support these technology studies.
Structural Analysis of a Magnetically Actuated Silicon Nitride Micro-Shutter for Space Applications
NASA Technical Reports Server (NTRS)
Loughlin, James P.; Fettig, Rainer K.; Moseley, S. Harvey; Kutyrev, Alexander S.; Mott, D. Brent; Obenschain, Arthur F. (Technical Monitor)
2002-01-01
Finite element models have been created to simulate the electrostatic and electromagnetic actuation of a 0.5-micrometer silicon nitride micro-shutter for use in a space-based multi-object spectrometer (MOS). The micro-shutter uses a torsion hinge to rotate from the closed (0 degree) position to the open (90 degree) position. Stresses in the torsion hinge are determined with a large-deformation nonlinear finite element model. The simulation results are compared to experimental measurements of fabricated micro-shutter devices.
Minimizing distortion and internal forces in truss structures by simulated annealing
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Padula, Sharon L.
1990-01-01
Inaccuracies in the length of members and the diameters of joints of large space structures may produce unacceptable levels of surface distortion and internal forces. Here, two discrete optimization problems are formulated, one to minimize surface distortion (DSQRMS) and the other to minimize internal forces (FSQRMS). Both of these problems are based on the influence matrices generated by a small-deformation linear analysis. Good solutions are obtained for DSQRMS and FSQRMS through the use of a simulated annealing heuristic.
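To make the heuristic concrete, the sketch below applies simulated annealing to a DSQRMS-like toy problem: given fabricated members with known length errors, find the placement permutation that minimizes RMS surface distortion through an influence matrix. The influence matrix and error values are random stand-ins for the output of the small-deformation linear analysis:

    import numpy as np

    rng = np.random.default_rng(6)
    m, n_nodes = 40, 25
    B = rng.standard_normal((n_nodes, m))   # node distortion per unit member error
    errors = rng.normal(0.0, 1e-4, m)       # measured member length errors

    def rms(perm):
        # RMS surface distortion for a given member placement
        return np.sqrt(np.mean((B @ errors[perm])**2))

    perm = rng.permutation(m)
    cost = rms(perm)
    best_cost, T = cost, 1e-4               # initial temperature (cost scale)
    for _ in range(20000):
        i, j = rng.integers(m, size=2)
        cand = perm.copy()
        cand[i], cand[j] = cand[j], cand[i]  # swap two member placements
        d = rms(cand) - cost
        # Accept improvements always, uphill moves with Boltzmann probability
        if d < 0 or rng.random() < np.exp(-d / T):
            perm, cost = cand, rms(cand)
            best_cost = min(best_cost, cost)
        T *= 0.9997                          # geometric cooling schedule
    print(f"best RMS distortion found: {best_cost:.3e}")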
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgansen, K.A.; Pin, F.G.
A new method for mitigating unexpected impact of a redundant manipulator with an object in its environment is presented. Kinematic constraints are utilized with the recently developed method known as Full Space Parameterization (FSP). System performance criterion and constraints are changed at impact to return the end effector to the point of impact and halt the arm. Since large joint accelerations could occur as the manipulator is halted, joint acceleration bounds are imposed to simulate physical actuator limitations. Simulation results are presented for the case of a simple redundant planar manipulator.
IRIS thermal balance test within ESTEC LSS
NASA Technical Reports Server (NTRS)
Messidoro, Piero; Ballesio, Marino; Vessaz, J. P.
1988-01-01
The Italian Research Interim Stage (IRIS) thermal balance test was successfully performed in the ESTEC Large Space Simulator (LSS) to qualify the thermal design and to validate the thermal mathematical model. Characteristic of the test were the complexity of the set-up required to simulate the Shuttle cargo bay and the actuation and operation of the IRIS mechanism for the first time in the new LSS facility. Details of the test are presented, and test results for IRIS and the LSS facility are described.
Dewetting and Hydrophobic Interaction in Physical and Biological Systems
Berne, Bruce J.; Weeks, John D.; Zhou, Ruhong
2013-01-01
Hydrophobicity manifests itself differently on large and small length scales. This review focuses on large-length-scale hydrophobicity, particularly on dewetting at single hydrophobic surfaces and drying in regions bounded on two or more sides by hydrophobic surfaces. We review applicable theories, simulations, and experiments pertaining to large-scale hydrophobicity in physical and biomolecular systems and clarify some of the critical issues pertaining to this subject. Given space constraints, we could not review all of the significant and interesting work in this very active field. PMID:18928403
Supporting observation campaigns with high resolution modeling
NASA Astrophysics Data System (ADS)
Klocke, Daniel; Brueck, Matthias; Voigt, Aiko
2017-04-01
High-resolution simulations in support of measurement campaigns offer a promising and emerging way to create large-scale context for small-scale observations of clouds and precipitation processes. Because these simulations couple measured small-scale processes with the circulation, they also help to integrate the modeling and observational research communities and allow for detailed model evaluation against dedicated observations. In connection with the NARVAL measurement campaigns (December 2013 and August 2016), simulations with a grid spacing of 2.5 km for the tropical Atlantic region (9000 x 3300 km), with local refinement to 1.2 km for the western part of the domain, were performed using the icosahedral non-hydrostatic (ICON) general circulation model. These simulations are in turn used to drive large-eddy-resolving simulations with the same model for selected days within the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project. The simulations are presented with a focus on selected results showing the benefit to the scientific communities doing atmospheric measurements and numerical modeling of climate and weather. Additionally, an outlook is given on how similar simulations will support the NAWDEX measurement campaign in the North Atlantic and the AC3 measurement campaign in the Arctic.
Implementation of an open-scenario, long-term space debris simulation approach
NASA Astrophysics Data System (ADS)
Stupl, J.; Nelson, B.; Faber, N.; Perez, A.; Carlino, R.; Yang, F.; Henze, C.; Karacalioglu, A.; O'Toole, C.; Swenson, J.
This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris remediation, including the LightForce space debris collision avoidance scheme. State-of-the-art simulation approaches that assess the long-term development of the debris environment either are completely statistical or, if they simulate the positions of single objects over time, rely on large time steps on the order of several (5-15) days. They cannot easily be adapted to investigate the impact of specific collision avoidance or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions, space object parameters, and the orbital parameters of the conjunction, and because such maneuvers take place on much shorter timescales than 5-15 days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision); it does not remove entire objects or groups of objects. In the same sense, it is not straightforward to compare specific de-orbit methods with regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in LEO, propagates all objects with high precision, and advances with variable-sized time steps as small as one second. It allows the assessment of the (potential) impact of changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space-track object catalog in LEO. We then use a high-precision propagator to propagate all objects over the entire simulation duration. If collisions are detected, the appropriate number of debris objects are created and inserted into the simulation framework. Depending on the scenario, further objects, e.g. due to new launches, can be added. At the end of the simulation, the total number of objects above a cut-off size and the number of detected collisions provide benchmark parameters for the comparison between scenarios. The simulation approach is computationally intensive, as it involves tens of thousands of objects; hence we use a highly parallel approach employing up to a thousand cores on the NASA Pleiades supercomputer for a single run. This paper describes our simulation approach, the status of its implementation, the approach to developing scenarios, and examples of first test runs.
Modeling the Lyα Forest in Collisionless Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorini, Daniele; Oñorbe, José; Lukić, Zarija
2016-08-11
Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys, like DESI and 4MOST. To overcome this limitation, we present in this paper "Iteratively Matched Statistics" (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line-of-sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%-80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. Finally, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic "mock" skies for Lyα forest surveys.
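The iterative mapping at the heart of IMS can be sketched as alternating two constraints: a rank-order mapping imposes the target flux PDF, and a Fourier-amplitude replacement imposes the target power spectrum. The sketch below assumes both fields live on the same grid (in practice the target statistics from the small hydrodynamic box must be rescaled to the N-body grid) and is not the authors' code:

```python
import numpy as np

def match_pdf(field, target_sorted):
    """Rank-order map: impose the target one-point PDF on the field."""
    ranks = np.argsort(np.argsort(field.ravel()))
    return target_sorted[ranks].reshape(field.shape)

def match_power(field, target_amp):
    """Impose target Fourier amplitudes while keeping the field's phases."""
    fk = np.fft.rfftn(field)
    phases = fk / np.maximum(np.abs(fk), 1e-30)
    return np.fft.irfftn(phases * target_amp, s=field.shape)

def ims(pseudo_flux, target_flux, n_iter=10):
    """Alternate PDF and power-spectrum matching until both roughly hold."""
    target_sorted = np.sort(target_flux.ravel())
    target_amp = np.abs(np.fft.rfftn(target_flux))
    f = pseudo_flux.copy()
    for _ in range(n_iter):
        f = match_power(match_pdf(f, target_sorted), target_amp)
    return match_pdf(f, target_sorted)   # end on the PDF constraint

rng = np.random.default_rng(0)
target = rng.lognormal(size=(32, 32, 32))     # stands in for hydro flux
pseudo = rng.standard_normal((32, 32, 32))    # stands in for N-body field
matched = ims(pseudo, target)
```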
NASA Astrophysics Data System (ADS)
Hossain, U. H.; Ensinger, W.
2015-12-01
Devices operating in space, e.g. in satellites, are being hit by cosmic rays. These include so-called HZE ions, with high mass (Z) and energy (E). These highly energetic heavy ions penetrate deeply into materials and deposit a large amount of energy, typically several keV per nm along their track, creating serious damage. Polymers used in space vehicles degrade under this ion bombardment. HZE ion irradiation can be simulated experimentally in large-scale accelerators. In the present study, the radiation damage of aliphatic vinyl- and fluoro-polymers by heavy ions with energies in the GeV range is described. The ions cause bond scission and create volatile small molecular species, leading to considerable mass loss of the polymers. Since hydrogen-, oxygen- and fluorine-containing molecules are created and these elements are depleted, the remaining material is richer in carbon than the original polymers and contains conjugated CC double bonds. This process is investigated by measuring the optical band gap with UV-Vis absorption spectrometry as a function of ion fluence. The results show how the optical band gaps shift from the UV into the Vis region upon ion irradiation for the different polymers.
NASA Astrophysics Data System (ADS)
Zheng, Lianqing; Yang, Wei
2008-07-01
Recently, the accelerated molecular dynamics (AMD) technique was generalized to realize random walks in the essential energy space, so that further sampling enhancement and effective localized enhanced sampling could be achieved. This method is especially meaningful when the essential coordinates of the target events are not known a priori; moreover, the energy space metadynamics method was also introduced so that biasing free energy functions can be robustly generated. Despite the promising features of this method, due to the nonequilibrium nature of the metadynamics recursion, it is challenging to rigorously use the data obtained at the recursion stage for equilibrium analysis, such as free energy surface mapping; a large amount of data would therefore be wasted. To resolve this problem and further improve simulation convergence, as promised in our original paper, we report an alternate approach: the adaptive-length self-healing (ALSH) strategy for AMD simulations; this development is based on a recent self-healing umbrella sampling method. Here, the unit simulation length for each self-healing recursion is increased based on the Wang-Landau flattening judgment. When the unit simulation length for each update is long enough, all the following unit simulations naturally run into the equilibrium regime. Thereafter, these unit simulations can serve the dual purposes of recursion and equilibrium analysis. As demonstrated in our model studies, applying ALSH strikes a compromise between fast recursion and minimal waste of nonequilibrium data. As a result, combining all the data obtained from all the unit simulations that are in the equilibrium regime via the weighted histogram analysis method, efficient convergence can be robustly ensured, especially for the purpose of free energy surface mapping.
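A minimal sketch of the adaptive-length idea: lengthen the unit simulation whenever the visit histogram fails a Wang-Landau-style flatness test, and keep units once they pass (those are the ones usable for WHAM-type analysis). The toy_run_md driver and all parameters are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_run_md(n_steps, n_bins=50):
    """Toy stand-in for a unit AMD simulation: returns the energy-bin
    visit histogram; relative fluctuations shrink as runs get longer."""
    return rng.poisson(n_steps / n_bins, size=n_bins)

def flat_enough(hist, tol=0.8):
    """Wang-Landau-style flatness: every bin within tol of the mean count."""
    return hist.min() >= tol * hist.mean()

def alsh(run_md, L0=1000, n_cycles=20):
    """Adaptive-length self-healing driver: double the unit length until
    the histogram flattens; flat units are kept for equilibrium analysis."""
    L, equilibrium_units = L0, []
    for _ in range(n_cycles):
        hist = run_md(L)
        if flat_enough(hist):
            equilibrium_units.append(hist)
        else:
            L *= 2
    return L, len(equilibrium_units)

print(alsh(toy_run_md))   # (final unit length, number of equilibrium units)
```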
NASA Technical Reports Server (NTRS)
Glaese, John R.; McDonald, Emmett J.
2000-01-01
Orbiting space solar power systems are currently being investigated for possible flight in the time frame of 2015-2020 and later. Such space solar power (SSP) satellites are required to be extremely large in order to make practical the process of collection, conversion to microwave radiation, and reconversion to electrical power at earth stations or at remote locations in space. These large structures are expected to be very flexible, presenting unique problems associated with their dynamics and control. The purpose of this project is to apply the expanded TREETOPS multi-body dynamics analysis computer simulation program (with expanded capabilities developed in the previous activity) to investigate the control problems associated with the integrated symmetrical concentrator (ISC) conceptual SSP system. SSP satellites are, as noted, large orbital systems having many bodies (perhaps hundreds) with flexible arrays, operating in an orbital environment where non-uniform gravitational forces may be the major load producers on the structure, so a high-fidelity gravity model is required. The current activity arises from our NRA8-23 SERT proposal. Funding, as a supplemental selection, has been provided by NASA with reduced scope from that originally proposed.
Tada, Shigeru
2015-01-01
The analysis of cell separation has many important biological and medical applications. Dielectrophoresis (DEP) is one of the most effective and widely used techniques for separating and identifying biological species. In the present study, a DEP flow channel, a device that exploits the differences in the dielectric properties of cells for cell separation, was numerically simulated and its cell-separation performance examined. The cell samples used in the simulation were modeled as live and dead human leukocytes (B cells). The cell-separation analysis was carried out for a flow channel equipped with a planar electrode on the channel's top face and a pair of interdigitated counter electrodes on the bottom. This yielded a three-dimensional (3D) nonuniform AC electric field in the entire space of the flow channel. To investigate the optimal separation conditions for mixtures of live and dead cells, the strength of the applied electric field was varied. With appropriately selected conditions, the device was predicted to be very effective at separating dead cells from live cells. The major advantage of the proposed method is that a large volume of sample can be processed rapidly because of the large channel height.
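The dielectric contrast that such a device exploits enters through the Clausius-Mossotti factor: its sign determines whether a cell is pulled toward (positive DEP) or pushed away from (negative DEP) field maxima. Below is a minimal sketch of the standard textbook formulas; the live/dead property values are illustrative, not the study's parameters:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor(eps_p, sig_p, eps_m, sig_m, freq):
    """Clausius-Mossotti factor for a homogeneous sphere at frequency freq,
    with complex permittivity eps* = eps - j*sigma/omega."""
    w = 2 * 3.141592653589793 * freq
    ep = eps_p * EPS0 - 1j * sig_p / w   # particle
    em = eps_m * EPS0 - 1j * sig_m / w   # medium
    return (ep - em) / (ep + 2 * em)

def dep_force(radius, eps_m_rel, k_re, grad_E2):
    """Time-averaged DEP force: F = 2*pi*eps_m*r^3*Re[K]*grad|E_rms|^2."""
    return 2 * 3.141592653589793 * eps_m_rel * EPS0 * radius**3 * k_re * grad_E2

r, grad_E2 = 5e-6, 1e13   # cell radius (m) and field gradient (V^2/m^3), illustrative
for label, sig_p in [("live", 0.5), ("dead", 0.05)]:   # conductivities, S/m (illustrative)
    k = cm_factor(60.0, sig_p, 78.0, 0.1, freq=1e6)
    print(label, "Re[K] =", round(k.real, 3),
          " F_DEP =", dep_force(r, 78.0, k.real, grad_E2), "N")
```

With these (illustrative) values the live cells experience positive DEP and the dead cells negative DEP, which is the contrast a DEP separator uses.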
Large-eddy simulation of the passage of a shock wave through homogeneous turbulence
NASA Astrophysics Data System (ADS)
Braun, N. O.; Pullin, D. I.; Meiron, D. I.
2017-11-01
The passage of a nominally plane shockwave through homogeneous, compressible turbulence is a canonical problem representative of flows seen in supernovae, supersonic combustion engines, and inertial confinement fusion. The interaction of isotropic turbulence with a stationary normal shockwave is considered at inertial range Taylor Reynolds numbers, Reλ = 100 - 2500 , using Large Eddy Simulation (LES). The unresolved, subgrid terms are approximated by the stretched-vortex model (Kosovic et al., 2002), which allows self-consistent reconstruction of the subgrid contributions to the turbulent statistics of interest. The mesh is adaptively refined in the vicinity of the shock to resolve small amplitude shock oscillations, and the implications of mesh refinement on the subgrid modeling are considered. Simulations are performed at a range of shock Mach numbers, Ms = 1.2 - 3.0 , and turbulent Mach numbers, Mt = 0.06 - 0.18 , to explore the parameter space of the interaction at high Reynolds number. The LES shows reasonable agreement with linear analysis and lower Reynolds number direct numerical simulations. LANL Subcontract 305963.
Ng, Jonathan; Huang, Yi-Min; Hakim, Ammar; ...
2015-11-05
As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large-scale particle-in-cell simulations of island coalescence have shown that the time-averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.
Laboratory Spectroscopy of Large Carbon Molecules and Ions in Support of Space Missions
NASA Technical Reports Server (NTRS)
Salama, Farid; Tan, X.; Cami, J.; Remy, J.
2006-01-01
One of the major objectives of Laboratory Astrophysics is the optimization of data return from space missions by measuring spectra of atomic and molecular species in laboratory environments that mimic interstellar conditions (White Paper, 2002, 2006). Among interstellar species, polycyclic aromatic hydrocarbons (PAHs) are an important and ubiquitous component of carbon-bearing materials that represent a particularly difficult challenge for gas-phase laboratory studies. We present the absorption spectra of jet-cooled neutral and ionized PAHs and discuss the implications for astrophysics. The harsh physical conditions of the interstellar medium have been simulated in the laboratory. We are now, for the first time, in a position to directly compare laboratory spectra of PAHs and carbon nanoparticles with astronomical observations. This new phase offers tremendous opportunities for the data analysis of current and upcoming space missions geared toward the detection of large aromatic systems (HST/COS, FUSE, JWST, Spitzer).
The NASA-Langley Wake Vortex Modelling Effort in Support of an Operational Aircraft Spacing System
NASA Technical Reports Server (NTRS)
Proctor, Fred H.
1998-01-01
Two numerical modelling efforts, one using a large eddy simulation model and the other a numerical weather prediction model, are underway in support of NASA's Terminal Area Productivity program. The large-eddy simulation model (LES) has a meteorological framework and permits the interaction of wake vortices with environments characterized by crosswind shear, stratification, humidity, and atmospheric turbulence. Results from the numerical simulations are being used to assist in the development of algorithms for an operational wake-vortex aircraft spacing system. A mesoscale weather forecast model is being adapted to provide operational forecasts of winds, temperature, and turbulence parameters in the terminal area. This paper describes the goals and modelling approach, as well as achievements obtained to date. Simulation results are presented from the LES model in both two and three dimensions. The 2-D model is found to be generally valid for studying wake vortex transport, while the 3-D approach is necessary for realistic treatment of decay via interaction of wake vortices with atmospheric boundary layer turbulence. Meteorology is shown to have an important effect on vortex transport and decay. Results are presented showing that wake vortex transport is unaffected by uniform fog or rain, but can be strongly affected by nonlinear vertical change in the ambient crosswind. Both simulations and observations show that atmospheric vortices decay from the outside with minimal expansion of the core. Vortex decay and the onset of three-dimensional instabilities are found to be enhanced by the presence of ambient turbulence.
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
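As a toy illustration of why contention modeling matters for simulation accuracy, a queueing-style approximation captures the qualitative behavior measured on real hardware: latency stays near the base value at low load and diverges near saturation. This is a generic sketch with made-up parameters, not one of the three network models compared in the paper:

```python
def memory_latency(base_ns, load, alpha=1.0):
    """Toy contention model: effective memory latency grows as offered
    network load approaches saturation (M/M/1-like shape)."""
    assert 0.0 <= load < 1.0
    return base_ns * (1.0 + alpha * load / (1.0 - load))

# Example: nominal 600 ns remote reference at 60% network utilization
print(memory_latency(600.0, 0.60))   # ~1500 ns with alpha = 1
```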
A multimodel intercomparison of resolution effects on precipitation: simulations and theory
NASA Astrophysics Data System (ADS)
Rauscher, Sara A.; O'Brien, Travis A.; Piani, Claudio; Coppola, Erika; Giorgi, Filippo; Collins, William D.; Lawston, Patricia M.
2016-10-01
An ensemble of six pairs of RCM experiments performed at 25 and 50 km for the period 1961-2000 over a large European domain is examined in order to evaluate the effects of resolution on the simulation of daily precipitation statistics. Application of the non-parametric two-sample Kolmogorov-Smirnov test, which tests for differences in the location and shape of the probability distributions of two samples, shows that the distribution of daily precipitation differs between the pairs of simulations over most land areas in both summer and winter, with the strongest signal over southern Europe. Two-dimensional histograms reveal that precipitation intensity increases with resolution over almost the entire domain in both winter and summer. In addition, the 25 km simulations have more dry days than the 50 km simulations. The increase in dry days with resolution is indicative of an improvement in model performance at higher resolution, while the more intense precipitation exceeds observed values. The systematic increase in precipitation extremes with resolution across all models suggests that this response is fundamental to model formulation. Simple theoretical arguments suggest that fluid continuity, combined with the emergent scaling properties of the horizontal wind field, results in an increase in resolved vertical transport as grid spacing decreases. This increase in resolution-dependent vertical mass flux then drives an intensification of convergence and resolvable-scale precipitation as grid spacing decreases. This theoretical result could help explain the increasingly, and often anomalously, large stratiform contribution to total rainfall observed with increasing resolution in many regional and global models.
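The distribution comparison described above relies on the two-sample Kolmogorov-Smirnov test, which is available off the shelf. The sketch below applies it to synthetic gamma-distributed stand-ins for daily precipitation at the two resolutions; the shapes are illustrative, not the RCM output:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Stand-ins for daily precipitation at two resolutions (mm/day)
p50 = rng.gamma(shape=0.6, scale=4.0, size=4000)   # "50 km" sample
p25 = rng.gamma(shape=0.5, scale=6.0, size=4000)   # "25 km": more intense, more dry days

stat, pval = ks_2samp(p25, p50)
print(f"KS statistic = {stat:.3f}, p-value = {pval:.2e}")
# A small p-value rejects the hypothesis that the two resolutions draw
# daily precipitation from the same distribution.
```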
Land surface modeling in convection permitting simulations
NASA Astrophysics Data System (ADS)
van Heerwaarden, Chiel; Benedict, Imme
2017-04-01
The next generation of weather and climate models permits convection, albeit at a grid spacing that is not sufficient to resolve all details of the clouds. Whereas much attention is being devoted to the correct simulation of convective clouds and associated precipitation, the role of the land surface has received far less interest. In our view, convection-permitting simulations pose a set of problems that need to be solved before accurate weather and climate prediction is possible. The heart of the problem lies in direct runoff and in the nonlinearity of the surface stress as a function of soil moisture. In coarse-resolution simulations, where convection is not permitted, precipitation that reaches the land surface is uniformly distributed over the grid cell. Subsequently, a fraction of this precipitation is intercepted by vegetation or leaves the grid cell via direct runoff, whereas the remainder infiltrates into the soil. As soon as we move to convection-permitting simulations, this precipitation often falls locally in large amounts. If the same land-surface model is used as in simulations with parameterized convection, this leads to an increase in direct runoff. Furthermore, spatially non-uniform infiltration leads to a very different surface stress when scaled up to the coarse resolution of simulations without convection. Based on large-eddy simulations of realistic convection events on a large domain, this study presents a quantification of the errors made at the land surface in convection-permitting simulations. It compares the magnitude of these errors to those made in the convection itself due to the coarse resolution of the simulation. We find that convection-permitting simulations have less evaporation than simulations with parameterized convection, resulting in an unrealistic drying of the atmosphere. We present solutions to resolve this problem.
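The runoff sensitivity described above can be made concrete with a toy infiltration-excess scheme: the same grid-mean rainfall produces no direct runoff when spread uniformly, but substantial runoff when concentrated in a few convective cores. All numbers are illustrative:

```python
import numpy as np

def runoff(precip_mm, capacity_mm=20.0):
    """Toy infiltration-excess scheme: rain beyond the soil's infiltration
    capacity in a timestep becomes direct runoff (capacity illustrative)."""
    return np.maximum(precip_mm - capacity_mm, 0.0)

n_cells = 100          # sub-columns inside one coarse grid cell
mean_rain = 5.0        # grid-mean precipitation, mm per timestep

uniform = np.full(n_cells, mean_rain)                 # parameterized convection
local = np.zeros(n_cells)
local[:5] = mean_rain * n_cells / 5                   # same mean, 5 convective cores

print("uniform rain -> runoff fraction:", runoff(uniform).mean() / mean_rain)   # 0.0
print("localized rain -> runoff fraction:", runoff(local).mean() / mean_rain)   # 0.8
```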
Intelligent Space Tube Optimization for speeding ground water remedial design.
Kalwij, Ineke M; Peralta, Richard C
2008-01-01
An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces the computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with the NSS. Stage 1 speeds the evaluation of assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include the space tube radius and the number of strategies used to train the NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable to many heuristic optimization settings in which the numerical simulator is computationally intensive and one would like to reduce that burden.
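The stage-1 loop (sample inside a moving tube around the incumbent, train a cheap surrogate on the few true simulator calls, then optimize on the surrogate) can be sketched as follows. The quadratic expensive_sim is a hypothetical stand-in for the flow and transport simulator, and plain random search stands in for the AGA:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_sim(x):
    """Hypothetical stand-in for the ground water flow/transport simulator."""
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(1)
dim, pop = 6, 40
center = rng.random(dim)                     # incumbent; the "tube" moves with it

for cycle in range(10):
    # Sample strategies inside the space tube around the incumbent
    X = center + 0.1 * rng.standard_normal((pop, dim))
    y = expensive_sim(X)                     # the only true simulator calls
    nss = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
    # Cheap surrogate-based search (random search stands in for the GA)
    cand = center + 0.1 * rng.standard_normal((5000, dim))
    center = cand[np.argmin(nss.predict(cand))]

print("stage-1 incumbent:", center.round(3))  # stage 2 would refine without NSS
```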
Spacing of Imbricated Thrust Faults and the Strength of Thrust-Belts and Accretionary Wedges
NASA Astrophysics Data System (ADS)
Ito, G.; Regensburger, P. V.; Moore, G. F.
2017-12-01
The pattern of imbricated thrust blocks is a prominent characteristic of the large-scale structure of thrust-belts and accretionary wedges around the world. Mechanical models of these systems have a rich history from laboratory analogs and, more recently, from computational simulations, most of which qualitatively reproduce the regular patterns of imbricated thrusts seen in nature. Despite the prevalence of these patterns in nature and in models, our knowledge of what controls the spacing of the thrusts remains immature at best. We tackle this problem using a finite difference, particle-in-cell method that simulates visco-elastic-plastic deformation with a Mohr-Coulomb brittle failure criterion. The model simulates a horizontal base that moves toward a rigid vertical backstop, carrying with it an overlying layer of crust. The crustal layer has a greater frictional strength than the base, is cohesive, and is initially uniform in thickness. As the layer contracts, a series of thrust blocks emerges sequentially and forms a wedge having a mean taper consistent with that predicted by a noncohesive, critical Coulomb wedge. The widths of the thrust blocks (or spacing between adjacent thrusts) are greatest at the front of the wedge, tend to decrease with continued contraction, and then tend toward a pseudo-steady, minimum width. Numerous experiments show that the characteristic spacing of thrusts increases with the brittle strength of the wedge material (cohesion + friction) and decreases with increasing basal friction for low (<8°) taper angles. These relations are consistent with predictions of the elastic stresses forward of the frontal thrust and of the distance at which the differential stress exceeds the brittle threshold to form a new frontal thrust. Hence the characteristic spacing of the thrusts across the whole wedge is largely inherited at the very front of the wedge. Our aim is to develop scaling laws that will illuminate the basic physical processes controlling these systems and allow researchers to use observations of thrust spacing as an independent constraint on the brittle strength of wedges and their bases.
Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.
2017-03-27
A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed-hydrate systems is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with large numbers of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.
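The fit-then-evaluate workflow described above can be sketched with a single parametric relationship. The polynomial form and the synthetic "dataset" below are hypothetical placeholders, not the paper's fitted coefficients; the point is that evaluating such a fit replaces a costly thermodynamics call at every grid element and timestep:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical parametric form: ln(P_eq) as a low-order polynomial in T,
# with one fit per CH4/CO2 composition.
def ln_peq(T, a, b, c):
    return a + b * T + c * T ** 2

T = np.linspace(273.0, 290.0, 40)                              # K
lnP = 1.2e-3 * (T - 273.0) ** 2 + 0.28 * (T - 273.0) + 0.5     # synthetic data
coef, _ = curve_fit(ln_peq, T, lnP)

# Inside a reservoir simulator, this cheap evaluation stands in for a
# full flash/equilibrium calculation.
print("P_eq(280 K) =", np.exp(ln_peq(280.0, *coef)), "(synthetic units)")
```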
Open control/display system for a telerobotics work station
NASA Technical Reports Server (NTRS)
Keslowitz, Saul
1987-01-01
A working Advanced Space Cockpit was developed that integrated advanced control and display devices into a state-of-the-art multimicroprocessor hardware configuration, using window graphics and running under an object-oriented, multitasking real-time operating system environment. This Open Control/Display System supports the idea that the operator should be able to interactively monitor, select, control, and display information about many payloads aboard the Space Station using sets of I/O devices with a single, software-reconfigurable workstation. This is done while maintaining system consistency, yet the system is completely open to accept new additions and advances in hardware and software. The Advanced Space Cockpit, linked to Grumman's Hybrid Computing Facility and Large Amplitude Space Simulator (LASS), was used to test the Open Control/Display System via full-scale simulation of the following tasks: telerobotic truss assembly, RCS and thermal bus servicing, CMG changeout, RMS constrained motion and space constructible radiator assembly, HPA coordinated control, and OMV docking and tumbling satellite retrieval. The proposed man-machine interface standard discussed here has evolved through many iterations of the tasks, and is based on feedback from NASA and Air Force personnel who performed those tasks in the LASS.
NASA Technical Reports Server (NTRS)
Banks, P. M.; Raitt, W. J.; Denig, W. F.
1982-01-01
In March 1981, electron beam experiments were conducted in a large space simulation chamber using equipment destined to be flown aboard NASA's Office of Space Science-1 (OSS-1) pallet. Two major flight experiments were involved: the Vehicle Charging and Potential (VCAP) experiment and the Plasma Diagnostics Package (PDP). Apparatus connected with VCAP included a Fast Pulse Electron Gun (FPEG) and a Charge and Current Probe (CCP). A preliminary view is provided of the results obtained when the electron emissions were held steady over relatively long periods of time, such that steady-state conditions could be obtained with respect to the electron beam interaction with the neutral gases and plasma of the vacuum chamber. Of particular interest was the plasma instability feature known as the Beam Plasma Discharge. For the present experiments the FPEG was used in a dc mode with a range of currents of 2 to 80 mA at a beam energy of 970 eV. Attention is given to the emissions of VLF and HF noise associated with the dc beam.
Baryon acoustic oscillations in 2D. II. Redshift-space halo clustering in N-body simulations
NASA Astrophysics Data System (ADS)
Nishimichi, Takahiro; Taruya, Atsushi
2011-08-01
We measure the halo power spectrum in redshift space from cosmological N-body simulations, and test the analytical models of redshift distortions particularly focusing on the scales of baryon acoustic oscillations. Remarkably, the measured halo power spectrum in redshift space exhibits a large-scale enhancement in amplitude relative to the real-space clustering, and the effect becomes significant for the massive or highly biased halo samples. These findings cannot be simply explained by the so-called streaming model frequently used in the literature. By contrast, a physically motivated perturbation theory model developed in the previous paper reproduces the halo power spectrum very well, and the model combining a simple linear scale-dependent bias can accurately characterize the clustering anisotropies of halos in two dimensions, i.e., line-of-sight and its perpendicular directions. The results highlight the significance of nonlinear coupling between density and velocity fields associated with two competing effects of redshift distortions, i.e., Kaiser and Finger-of-God effects, and a proper account of this effect would be important in accurately characterizing the baryon acoustic oscillations in two dimensions.
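At the linear level, the large-scale enhancement described above is the classic Kaiser effect. The sketch below implements the textbook Kaiser formula with a simple scale-dependent bias; it is not the paper's full perturbation-theory model, and the toy spectrum and bias coefficients are placeholders:

```python
import numpy as np

def halo_power_redshift_space(k, mu, P_lin, b0, b1, f):
    """Linear Kaiser model with a simple scale-dependent bias:
    P_s(k, mu) = (b(k) + f * mu^2)^2 * P_lin(k),
    where mu is the cosine of the angle to the line of sight and
    f is the linear growth rate. Textbook sketch only."""
    bk = b0 + b1 * k ** 2            # hypothetical scale dependence
    return (bk + f * mu ** 2) ** 2 * P_lin

k = np.logspace(-2, 0, 50)                 # wavenumber, h/Mpc
P_lin = 1e4 * k / (1 + (k / 0.1) ** 3)     # toy linear matter spectrum
print(halo_power_redshift_space(k, 1.0, P_lin, b0=2.0, b1=1.0, f=0.5)[:3])
```

Along the line of sight (mu = 1) the amplitude is boosted relative to real space, and the boost is stronger for highly biased samples, qualitatively as the simulations show.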
Gething, Peter W; Patil, Anand P; Hay, Simon I
2010-04-01
Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions, such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty (the fidelity of predictions at each mapped pixel) but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate, for the first time, the necessary joint simulation of prevalence values across the very large prediction spaces needed for global-scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum, allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
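Why joint (rather than per-pixel) simulation matters: an aggregate such as regional mean prevalence needs realizations that are spatially coherent across all pixels at once. The brute-force sketch below draws joint Gaussian-process realizations on the logit scale and aggregates each one; it only works for small pixel counts, which is exactly the constraint the paper's approximating algorithm removes. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 400                                  # pixels in one admin region
xy = rng.random((n_pix, 2)) * 5.0            # pixel coordinates (degrees)

# Exponential-covariance GP on the logit scale (illustrative parameters)
d = np.linalg.norm(xy[:, None] - xy[None], axis=-1)
cov = 0.5 * np.exp(-d / 1.0)
mean = -1.0                                  # logit of ~27% prevalence

# Joint simulation: each draw is one coherent realization over all pixels,
# so any spatial aggregate inherits a proper posterior distribution.
draws = rng.multivariate_normal(np.full(n_pix, mean), cov, size=1000)
prev = 1.0 / (1.0 + np.exp(-draws))
regional_mean = prev.mean(axis=1)            # one aggregate per realization
print("mean prevalence:", regional_mean.mean().round(3),
      "95% interval:", np.quantile(regional_mean, [0.025, 0.975]).round(3))
```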
NASA Astrophysics Data System (ADS)
Choudhury, Diptyajit; Angeloski, Aleksandar; Ziah, Haseeb; Buchholz, Hilmar; Landsman, Andre; Gupta, Amitava; Mitra, Tiyasa
Lunar explorations often involve use of a lunar lander, a rover [1],[2] and an orbiter which rotates around the moon with a fixed radius. The orbiters are usually lunar satellites orbiting along a polar orbit to ensure visibility with respect to the rover and the Earth Station, although with varying latency. Communication in such deep space missions is usually done using a specialized protocol like Proximity-1 [3]. MATLAB simulation of Proximity-1 has been attempted by some contemporary researchers [4] to simulate all features like transmission control, delay etc. In this paper it is attempted to simulate, in real time, the communication between a tracking station on earth (earth station), a lunar orbiter and a lunar rover using concepts of Distributed Real-time Simulation (DRTS). The objective of the simulation is to simulate, in real time, the time-varying communication delays associated with the communicating elements, with a facility to integrate specific simulation modules to study different aspects, e.g. the response due to a specific control command from the earth station to be executed by the rover. The hardware platform comprises four single-board computers operating as stand-alone real-time systems (developed with MATLAB xPC Target and inter-networked using the UDP-IP protocol). A time-triggered DRTS approach is adopted. The earth station, the orbiter and the rover are programmed as three standalone real-time processes representing the communicating elements in the system. Communication from one communicating element to another constitutes an event which passes a state message from one element to another, augmenting the state of the latter. These events are handled by an event scheduler, which is the fourth real-time process. The event scheduler simulates the delay in space communication taking into consideration the distance between the communicating elements. A unique time synchronization algorithm is developed which takes into account the large latencies in space communication. The DRTS setup thus developed serves as an important and inexpensive test bench for trying out remote-controlled applications on the rover, for example, from an earth station. The simulation is modular and the system is composable. Each of the processes can be augmented with relevant simulation modules that handle the events to simulate specific functionalities. With stringent energy saving requirements on most rovers, such a simulation setup, for example, can be used to design optimal rover movement control strategies from the orbiter in conjunction with autonomous systems on the rover itself. References: 1. Lunar and Planetary Department, Moscow University, Lunokhod 1, "http://selena.sai.msu.ru/Home/Spa 2. NASA History Office, Guidelines for Advanced Manned Space Vehicle Program, "http://history.nasa.gov 35ann/AMSVPguidelines/top.htm" 3. Consultative Committee for Space Data Systems, "Proximity-1 Space Link Protocol", CCSDS 211.0-B-1 Blue Book, October 2002. 4. Segui, J. and Jennings, E., "Delay Tolerant Networking - Bundle Protocol Simulation", in Proceedings of the 2nd IEEE International Conference on Space Mission Challenges for Information Technology, 2006.
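The event-scheduler role described above can be sketched in a few lines: each message is queued with a delivery time equal to its send time plus the light-time between elements, then released to its destination when due. The sketch below uses UDP on a single host with hypothetical addresses and a stand-in geometry model; it is not the paper's xPC Target implementation:

```python
import heapq
import socket
import time

C = 299_792.458  # speed of light, km/s

# Hypothetical element addresses; the described setup uses four
# single-board computers, here modeled as UDP endpoints on one host.
NODES = {"earth": ("127.0.0.1", 9001),
         "orbiter": ("127.0.0.1", 9002),
         "rover": ("127.0.0.1", 9003)}

def distance_km(src, dst, t):
    """Stand-in geometry; a real scheduler would use ephemerides."""
    return 384_400.0 if "earth" in (src, dst) else 1_800.0

def scheduler(messages):
    """Deliver each (t_send, src, dst, payload) after its light-time delay."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    queue = []
    for t_send, src, dst, payload in messages:
        delay = distance_km(src, dst, t_send) / C
        heapq.heappush(queue, (t_send + delay, dst, payload))
    t0 = time.monotonic()
    while queue:
        t_due, dst, payload = heapq.heappop(queue)
        time.sleep(max(0.0, t_due - (time.monotonic() - t0)))  # pace real time
        sock.sendto(payload, NODES[dst])

scheduler([(0.0, "earth", "rover", b"MOVE 1m"), (0.5, "rover", "earth", b"ACK")])
```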
Muhlbauer, A.; Ackerman, T. P.; Lawson, R. P.; ...
2015-07-14
Cirrus clouds are ubiquitous in the upper troposphere and still constitute one of the largest uncertainties in climate predictions. Our paper evaluates cloud-resolving model (CRM) and cloud-system-resolving model (CSRM) simulations of a midlatitude cirrus case against comprehensive observations collected under the auspices of the Atmospheric Radiation Measurement (ARM) program and spaceborne observations from the National Aeronautics and Space Administration A-train satellites. The CRM simulations are driven with periodic boundary conditions and ARM forcing data, whereas the CSRM simulations are driven by the ERA-Interim product. Vertical profiles of temperature, relative humidity, and wind speeds are reasonably well simulated by the CSRM and CRM, but there are remaining biases in the temperature, wind speeds, and relative humidity, which can be mitigated by nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in general circulation models and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles, especially toward the cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain, but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. Despite considerable progress in observations and microphysical parameterizations, simulating the microphysical, macrophysical, and radiative properties of cirrus remains challenging. Comparing model simulations with observations from multiple instruments and observational platforms is important for revealing model deficiencies and for providing rigorous benchmarks. But there is still considerable need to reduce observational uncertainties and to provide better observations, especially for relative humidity and for the size distribution and chemical composition of aerosols in the upper troposphere.
Space Science at Los Alamos National Laboratory
NASA Astrophysics Data System (ADS)
Smith, Karl
2017-09-01
The Space Science and Applications group (ISR-1) in the Intelligence and Space Research (ISR) division at Los Alamos National Laboratory leads a number of space science missions for civilian and defense-related programs. In support of these missions the group develops sensors capable of detecting nuclear emissions and measuring radiation in space, including γ-ray, X-ray, charged-particle, and neutron detection. The group is involved in a number of stages of the lifetime of these sensors, including mission concept and design, simulation and modeling, calibration, and data analysis. These missions support monitoring of the atmosphere and near-Earth space environment for nuclear detonations, as well as monitoring of the local space environment, including space-weather events. Expertise in this area has been established over a long history of involvement with cutting-edge projects dating back to the first space-based monitoring mission, Project Vela. The group's interests cut across a large range of topics including non-proliferation, space situational awareness, nuclear physics, material science, space physics, astrophysics, and planetary physics.
NASA Astrophysics Data System (ADS)
Stegen, Ronald; Gassmann, Matthias
2017-04-01
The use of a broad variety of agrochemicals is essential for modern industrialized agriculture. During the last decades, awareness of the side effects of their use has grown, and with it the requirement to reproduce, understand and predict the behaviour of these agrochemicals in the environment, in order to optimize their use and minimize the side effects. Modern modelling has made great progress in understanding and predicting these chemicals with digital methods. While the behaviour of the applied chemicals is often investigated and modelled, most studies simulate only the parent chemicals, assuming complete disappearance of the substance. However, due to a diversity of chemical, physical and biological processes, the substances are rather transformed into new chemicals, which are themselves transformed until, at the end of the chain, the substance is completely mineralized. During this process, the fate of each transformation product is determined by its own environmental characteristics, and the pathways and products of transformation can differ largely by substance and by environmental influences, which can vary between compartments of the same site. Simulating transformation products introduces additional model uncertainties. Thus, the calibration effort increases compared to simulations of the transport and degradation of the primary substance alone. The simulation of the necessary physical processes needs a lot of calculation time. Because of that, few physically-based models offer the possibility to simulate transformation products at all, and mostly at the field scale. The few models available for the catchment scale are not optimized for this duty, i.e. they are only able to simulate a single parent compound and up to two transformation products. Thus, for simulations over large physico-chemical parameter spaces, the enormous calculation time of the underlying hydrological model diminishes the overall performance. In this study, the structure of the model ZIN-AGRITRA is re-designed for the transport and transformation of an unlimited number of agrochemicals in the soil-water-plant system at catchment scale. The focus is, besides a good hydrological standard, on a flexible representation of transformation processes and on optimization for the use of large numbers of different substances. Due to the new design, a reduction of the calculation time per tested substance is achieved, allowing faster testing of parameter spaces. Additionally, the new concept allows for the consideration of different transformation processes and products in different environmental compartments. A first test of the calculation time improvements and the flexible transformation pathways was performed in a Mediterranean meso-scale catchment, using the insecticide chlorpyrifos and two of its transformation products, which emerge from different transformation processes, as test substances.
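The transformation-chain bookkeeping such a model performs can be illustrated with a minimal first-order decay chain (parent to two sequential transformation products). The rate constants and formation fractions below are illustrative, not calibrated values for chlorpyrifos:

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order chain: parent -> TP1 -> TP2 -> mineralized.
k = np.array([0.10, 0.05, 0.02])   # degradation rates (1/day), illustrative
ff = np.array([0.6, 0.4])          # fraction of each loss forming the next TP

def rhs(t, c):
    dc = -k * c                    # every species degrades first-order
    dc[1] += ff[0] * k[0] * c[0]   # TP1 formed from parent degradation
    dc[2] += ff[1] * k[1] * c[1]   # TP2 formed from TP1 degradation
    return dc

sol = solve_ivp(rhs, (0, 365), [100.0, 0.0, 0.0], dense_output=True)
print("day 60 concentrations:", sol.sol(60).round(2))  # parent, TP1, TP2
```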
Computer Science Techniques Applied to Parallel Atomistic Simulation
NASA Astrophysics Data System (ADS)
Nakano, Aiichiro
1998-03-01
Recent developments in parallel processing technology and multiresolution numerical algorithms have established large-scale molecular dynamics (MD) simulations as a new research mode for studying materials phenomena such as fracture. However, this requires large system sizes and long simulated times. We have developed: i) space-time multiresolution schemes; ii) a fuzzy-clustering approach to hierarchical dynamics; iii) wavelet-based adaptive curvilinear-coordinate load balancing; iv) a multilevel preconditioned conjugate gradient method; and v) spacefilling-curve-based data compression for parallel I/O. Using these techniques, million-atom parallel MD simulations are performed for the oxidation dynamics of nanocrystalline Al. The simulations take into account the effect of dynamic charge transfer between Al and O using the electronegativity equalization scheme. The resulting long-range Coulomb interaction is calculated efficiently with the fast multipole method. Results for temperature and charge distributions, residual stresses, bond lengths and bond angles, and diffusivities of Al and O will be presented. The oxidation of nanocrystalline Al is elucidated through immersive visualization in virtual environments. A unique dual-degree education program at Louisiana State University will also be discussed, in which students can obtain a Ph.D. in Physics & Astronomy and an M.S. from the Department of Computer Science in five years. This program fosters interdisciplinary research activities for interfacing High Performance Computing and Communications with large-scale atomistic simulations of advanced materials. This work was supported by NSF (CAREER Program), ARO, PRF, and Louisiana LEQSF.
Driving Chemical Reactions in Plasmonic Nanogaps with Electrohydrodynamic Flow.
Thrift, William J; Nguyen, Cuong Q; Darvishzadeh-Varcheie, Mahsa; Zare, Siavash; Sharac, Nicholas; Sanderson, Robert N; Dupper, Torin J; Hochbaum, Allon I; Capolino, Filippo; Abdolhosseini Qomi, Mohammad Javad; Ragan, Regina
2017-11-28
Nanoparticles from colloidal solution, with controlled composition, size, and shape, serve as excellent building blocks for plasmonic devices and metasurfaces. However, understanding the hierarchical driving forces affecting the geometry of oligomers and interparticle gap spacings is still needed to fabricate high-density architectures over large areas. Here, electrohydrodynamic (EHD) flow is used as a long-range driving force to enable carbodiimide cross-linking between nanospheres and produces oligomers exhibiting sub-nanometer gap spacing over mm² areas. Anhydride linkers between nanospheres are observed via surface-enhanced Raman scattering (SERS) spectroscopy. The anhydride linkers are cleavable via nucleophilic substitution and enable placement of nucleophilic molecules in electromagnetic hotspots. Atomistic simulations elucidate that the transient attractive force provided by EHD flow is needed to provide a sufficient residence time for anhydride cross-linking to overcome slow reaction kinetics. This synergistic analysis shows that assembly involves an interplay between long-range driving forces, which increase nanoparticle-nanoparticle interactions, and the probability that ligands are in proximity to overcome the activation energy barriers associated with short-range chemical reactions. Absorption spectroscopy and electromagnetic full-wave simulations show that variations in nanogap spacing have a greater influence on optical response than variations in close-packed oligomer geometry. The EHD flow-anhydride cross-linking assembly method enables close-packed oligomers with uniform gap spacings that produce uniform SERS enhancement factors. These results demonstrate the efficacy of colloidal driving forces to selectively enable chemical reactions, leading to future assembly platforms for large-area nanodevices.
NASA Astrophysics Data System (ADS)
Swinburne, Thomas D.; Perez, Danny
2018-05-01
A massively parallel method to build large transition rate matrices from temperature-accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher-scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.
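The Bayesian residence-time estimation mentioned above can be illustrated with the simplest version of the idea: under an exponential escape model with a conjugate Gamma prior on the escape rate, the posterior after n observed escapes in total time t is Gamma(a + n, b + t), and sampling it yields credible intervals on the residence time. The data and prior below are hypothetical, and this sketch omits the method's treatment of unobserved pathways and temperature acceleration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical accelerated-dynamics data for one state: observed escapes
# and total (temperature-corrected) time spent in the state.
n_escapes, t_total = 14, 3.2e-9        # counts, seconds

# Gamma(a, b) prior on the escape rate -> Gamma(a + n, b + t) posterior.
a, b = 1.0, 1e-9
k_post = rng.gamma(a + n_escapes, 1.0 / (b + t_total), size=100_000)
tau = 1.0 / k_post                     # residence-time samples

print("expected residence time: %.2e s" % tau.mean())
print("95%% credible interval: (%.2e, %.2e) s"
      % tuple(np.quantile(tau, [0.025, 0.975])))
```

This kind of uncertainty band is what lets a higher-scale scheme (e.g. kinetic Monte Carlo) decide whether the state is explored well enough or more sampling is needed.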
EDIN0613P weight estimating program. [for launch vehicles
NASA Technical Reports Server (NTRS)
Hirsch, G. N.
1976-01-01
The weight estimating relationships and program developed for space power system simulation are described. The program was developed to size a two-stage launch vehicle for the space power system. The program is part of an overall simulation technique called the EDIN (Engineering Design and Integration) system. The program sizes the overall vehicle, generates major component weights, and derives a large amount of overall vehicle geometry. The program is written in FORTRAN V and is designed for use on the Univac Exec 8 (1110). By utilizing the flexibility of this program, while remaining cognizant of the limits imposed on output depth and accuracy by the use of generalized input, this program concept can be a useful estimating tool at the conceptual design stage of a launch vehicle.
Impact of subgrid fluid turbulence on inertial particles subject to gravity
NASA Astrophysics Data System (ADS)
Rosa, Bogdan; Pozorski, Jacek
2017-07-01
Two-phase turbulent flows with the dispersed phase in the form of small, spherical particles are increasingly often computed with the large-eddy simulation (LES) of the carrier fluid phase, coupled to the Lagrangian tracking of particles. To enable further model development for LES with inertial particles subject to gravity, we consider direct numerical simulations of homogeneous isotropic turbulence with a large-scale forcing. Simulation results, both without filtering and in the a priori LES setting, are reported and discussed. A full (i.e. a posteriori) LES is also performed with the spectral eddy viscosity. Effects of gravity on the dispersed phase include changes in the average settling velocity due to preferential sweeping, impact on the radial distribution function and radial relative velocity, as well as direction-dependent modification of the particle velocity variance. The filtering of the fluid velocity, performed in spectral space, is shown to have a non-trivial impact on these quantities.
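The a priori comparison mentioned above rests on filtering the DNS velocity in spectral space. A minimal sketch of a sharp spectral cutoff filter applied to a periodic field is given below; it is generic, not the study's code, and the field and cutoff are placeholders:

```python
import numpy as np

def sharp_spectral_filter(u, k_cut):
    """A priori LES filter: zero all Fourier modes with |k| > k_cut."""
    n = u.shape[0]
    k1 = np.fft.fftfreq(n) * n                   # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    mask = np.sqrt(kx**2 + ky**2 + kz**2) <= k_cut
    return np.real(np.fft.ifftn(np.fft.fftn(u) * mask))

# Example: filter a random periodic field and check the resolved energy
rng = np.random.default_rng(2)
u = rng.standard_normal((64, 64, 64))            # stands in for one velocity component
u_f = sharp_spectral_filter(u, k_cut=8)
print("energy fraction retained:", (u_f**2).mean() / (u**2).mean())
```

Comparing particle statistics computed with u versus u_f isolates the effect of the unresolved scales, which is what the a priori setting quantifies.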
Running SW4 On New Commodity Technology Systems (CTS-1) Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Arthur J.; Petersson, N. Anders; Pitarka, Arben
We have recently been running earthquake ground motion simulations with SW4 on the new capacity computing systems, called the Commodity Technology Systems-1 (CTS-1), at Lawrence Livermore National Laboratory (LLNL). SW4 is a fourth-order time-domain finite difference code developed by LLNL and distributed by the Computational Infrastructure for Geodynamics (CIG). SW4 simulates seismic wave propagation in complex three-dimensional Earth models including anelasticity and surface topography. We are modeling near-fault earthquake strong ground motions for the purposes of evaluating the response of engineered structures, such as nuclear power plants and other critical infrastructure. Engineering analysis of structures requires the inclusion of high frequencies which can cause damage, but these are often difficult to include in simulations because of the need for large memory to model fine grid spacing on large domains.
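The memory pressure described above follows from a standard sizing rule: grid spacing is set by the slowest shear-wave speed, the maximum frequency, and the number of points per wavelength, and memory scales with the resulting point count. The sketch below is a generic back-of-envelope estimate, not SW4's actual memory model; the variables-per-point figure is an assumption:

```python
def grid_estimate(vs_min, f_max, ppw, extent_km, vars_per_pt=16, bytes_per=8):
    """Rule-of-thumb sizing for finite-difference seismic runs:
    h = vs_min / (ppw * f_max); memory ~ points * variables * bytes."""
    h = vs_min / (ppw * f_max)                      # grid spacing, m
    nx, ny, nz = (int(e * 1000.0 / h) for e in extent_km)
    n_pts = nx * ny * nz
    return h, n_pts, n_pts * vars_per_pt * bytes_per / 1e9   # GB

h, n, gb = grid_estimate(vs_min=500.0, f_max=5.0, ppw=8,
                         extent_km=(100, 100, 30))
print(f"h = {h:.1f} m, {n:.2e} grid points, ~{gb:.0f} GB")
```

Doubling the maximum frequency halves h and multiplies the point count by eight, which is why high-frequency engineering-band runs quickly exhaust memory.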
Predicting Instability Timescales in Closely-Packed Planetary Systems
NASA Astrophysics Data System (ADS)
Tamayo, Daniel; Hadden, Samuel; Hussain, Naireen; Silburt, Ari; Gilbertson, Christian; Rein, Hanno; Menou, Kristen
2018-04-01
Many of the multi-planet systems discovered around other stars are maximally packed. This implies that simulations with masses or orbital parameters too far from the actual values will destabilize on short timescales; thus, long-term dynamics allows one to constrain the orbital architectures of many closely packed multi-planet systems. A central challenge in such efforts is the large computational cost of N-body simulations, which precludes a full survey of the high-dimensional parameter space of orbital architectures allowed by observations. I will present our recent successes in training machine learning models capable of reliably predicting orbital stability a million times faster than N-body simulations. By engineering dynamically relevant features that we feed to a gradient-boosted decision tree algorithm (XGBoost), we are able to achieve a precision and recall of 90% on a holdout test set of N-body simulations. This opens a wide discovery space for characterizing new exoplanet discoveries and for elucidating how orbital architectures evolve through time as the next generation of spaceborne exoplanet surveys prepares for launch this year.
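The workflow (summarize each configuration with dynamically motivated features, then train a gradient-boosted tree classifier) can be sketched as below. The abstract names XGBoost; scikit-learn's GradientBoostingClassifier stands in here so the sketch is self-contained, and the features and labels are synthetic, not N-body output:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)

# Hypothetical feature table: each row summarizes a short integration
# (e.g. max eccentricity variation, min pairwise Hill separation, ...).
X = rng.random((2000, 4))
# Synthetic stability labels from a noisy rule, standing in for the
# outcome of long N-body integrations.
y = (X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(2000) > 0.2).astype(int)

clf = GradientBoostingClassifier().fit(X[:1500], y[:1500])
print("holdout accuracy:", clf.score(X[1500:], y[1500:]).round(3))
# Once trained, predictions cost microseconds, versus integrating each
# candidate system for billions of orbits.
```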
The development of composite materials for spacecraft precision reflector panels
NASA Technical Reports Server (NTRS)
Tompkins, Stephen S.; Bowles, David E.; Funk, Joan G.; Towell, Timothy W.; Lavoie, J. A.
1990-01-01
One of the critical technology needs for large precision reflectors required for future astrophysics and optical communications is in the area of structural materials. Therefore, a major area of the Precision Segmented Reflector Program at NASA is to develop lightweight composite reflector panels with durable, space environmentally stable materials which maintain both surface figure and required surface accuracy necessary for space telescope applications. Results from the materials research and development program at NASA Langley Research Center are discussed. Advanced materials that meet the reflector panel requirements are identified. Thermal, mechanical and durability properties of candidate materials after exposure to simulated space environments are compared to the baseline material.
Retrieving the aerosol lidar ratio profile by combining ground- and space-based elastic lidars.
Feiyue, Mao; Wei, Gong; Yingying, Ma
2012-02-15
The aerosol lidar ratio is a key parameter for the retrieval of aerosol optical properties from elastic lidar, and it varies widely among aerosols with different chemical and physical properties. We proposed a method for retrieving the aerosol lidar ratio profile by combining simultaneous ground- and space-based elastic lidars. The method was tested on a simulated case and a real case at 532 nm wavelength. The results demonstrated that our method is robust and can obtain accurate lidar ratio and extinction coefficient profiles. Our method can be useful for determining the local and global lidar ratio and for validating space-based lidar datasets.
Hyper-Parallel Tempering Monte Carlo Method and Its Applications
NASA Astrophysics Data System (ADS)
Yan, Qiliang; de Pablo, Juan
2000-03-01
A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for the study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel tempering, open ensemble simulations, configurational bias techniques, and histogram reweighting analysis of results. It is found to significantly accelerate the diffusion of a complex system through phase space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor effort.
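The core replica-exchange mechanism this family of methods builds on (though not the authors' hyper-parallel generalization) can be sketched as follows; the double-well energy and temperature ladder are toy assumptions.

```python
# Minimal parallel-tempering sketch: replicas at different temperatures make
# local Metropolis moves and occasionally attempt configuration swaps.
import math, random

def energy(x):                       # toy double-well landscape
    return (x * x - 1.0) ** 2

def metropolis(dE, beta):            # standard acceptance test
    return dE <= 0 or random.random() < math.exp(-beta * dE)

betas = [1.0 / t for t in (0.1, 0.5, 1.0, 2.0)]   # inverse temperatures
x = [random.uniform(-2, 2) for _ in betas]        # one walker per replica

for step in range(10000):
    for i, b in enumerate(betas):    # local move within each replica
        trial = x[i] + random.gauss(0, 0.3)
        if metropolis(energy(trial) - energy(x[i]), b):
            x[i] = trial
    i = random.randrange(len(betas) - 1)           # swap attempt
    d = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
    if random.random() < math.exp(min(0.0, d)):    # accept with min(1, e^d)
        x[i], x[i + 1] = x[i + 1], x[i]
```

Hot replicas cross barriers easily and hand well-mixed configurations down the temperature ladder, which is the acceleration mechanism the abstract describes.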
Comparing a discrete and continuum model of the intestinal crypt
Murray, Philip J.; Walter, Alex; Fletcher, Alex G.; Edwards, Carina M.; Tindall, Marcus J.; Maini, Philip K.
2011-01-01
The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalisations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts. PMID:21411869
GAP Noise Computation By The CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.
2001-01-01
A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.
Distance between configurations in Markov chain Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Fukuma, Masafumi; Matsumoto, Nobuyuki; Umeda, Naoya
2017-12-01
For a given Markov chain Monte Carlo algorithm we introduce a distance between two configurations that quantifies the difficulty of transition from one configuration to the other. We argue that the distance takes a universal form for the class of algorithms which generate local moves in the configuration space. We explicitly calculate the distance for the Langevin algorithm, and show that it indeed has the desired and expected properties of a distance. We further show that the distance for a multimodal distribution is dramatically reduced from a large value by the introduction of a tempering method. We also argue that, when the original distribution is highly multimodal with a large number of degenerate vacua, an anti-de Sitter-like geometry naturally emerges in the extended configuration space.
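For reference, the (unadjusted) Langevin update the paper analyzes takes the form x' = x - eps * dS/dx + sqrt(2 eps) * eta with eta ~ N(0, 1); a minimal sketch, assuming a toy double-well action:

```python
# Langevin MCMC sketch for a double-well action S(x) = (x^2 - 1)^2.
# Step size and action are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def grad_S(x):                 # gradient of the double-well action
    return 4.0 * x * (x * x - 1.0)

eps = 1e-3                     # Langevin step size
x = 0.0
samples = []
for _ in range(100_000):
    x = x - eps * grad_S(x) + np.sqrt(2.0 * eps) * rng.standard_normal()
    samples.append(x)
# With a tempering variable added, transitions between the two wells become
# far more frequent: in the paper's language, the distance between modes
# in the extended configuration space shrinks.
```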
Simulation of Quantum Many-Body Dynamics for Generic Strongly-Interacting Systems
NASA Astrophysics Data System (ADS)
Meyer, Gregory; Machado, Francisco; Yao, Norman
2017-04-01
Recent experimental advances have enabled the bottom-up assembly of complex, strongly interacting quantum many-body systems from individual atoms, ions, molecules and photons. These advances open the door to studying dynamics in isolated quantum systems as well as the possibility of realizing novel out-of-equilibrium phases of matter. Numerical studies provide insight into these systems; however, computational time and memory usage limit common numerical methods such as exact diagonalization to relatively small Hilbert spaces of dimension 2^15. Here we present progress toward a new software package for dynamical time evolution of large generic quantum systems on massively parallel computing architectures. By projecting large sparse Hamiltonians into a much smaller Krylov subspace, we are able to compute the evolution of strongly interacting systems with Hilbert space dimension nearing 2^30. We discuss and benchmark different design implementations, such as matrix-free methods and GPU based calculations, using both pre-thermal time crystals and the Sachdev-Ye-Kitaev model as examples. We also include a simple symbolic language to describe generic Hamiltonians, allowing simulation of diverse quantum systems without any modification of the underlying C and Fortran code.
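As a small-scale illustration of the Krylov-subspace idea (not the package described above), the sketch below evolves a transverse-field Ising chain with SciPy's Krylov-based `expm_multiply`; the Hamiltonian, system size, and evolution time are illustrative assumptions.

```python
# Krylov-subspace time evolution: psi(t) = exp(-i H t) psi(0) computed
# without forming the dense propagator. Toy transverse-field Ising chain.
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import expm_multiply

sx = csr_matrix([[0, 1], [1, 0]], dtype=complex)
sz = csr_matrix([[1, 0], [0, -1]], dtype=complex)

def op(site, o, n):            # embed a single-site operator at `site`
    out = identity(1, dtype=complex, format="csr")
    for i in range(n):
        out = kron(out, o if i == site else identity(2, format="csr"))
    return csr_matrix(out)

n = 12                          # Hilbert space dimension 2^12
H = sum(op(i, sz, n) @ op(i + 1, sz, n) for i in range(n - 1))
H = H + 0.5 * sum(op(i, sx, n) for i in range(n))

psi0 = np.zeros(2 ** n, dtype=complex)
psi0[0] = 1.0
psi_t = expm_multiply(-1j * 0.5 * H, psi0)     # evolve to t = 0.5
```

Because only sparse matrix-vector products are needed, the same pattern scales to far larger dimensions than dense exact diagonalization.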
A Biologically Inspired Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Tom; Craft, Mike; ONeil, Daniel; Howell, Joe T. (Technical Monitor)
2002-01-01
A prototype cooperative multi-robot control architecture suitable for the eventual construction of large space structures has been developed. In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. The prototype control architecture emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
A Stigmergic Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Thomas G.; O'Neil, Daniel; Craft, Michael A.
2004-01-01
In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. A prototype cooperative multi-robot control architecture which may be suitable for the eventual construction of large space structures has been developed which emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
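The stigmergic rule idea common to both entries above can be sketched in a few lines: agents read only the local state of the structure and deposit material when a local pattern rule fires, with no messages and no blueprint. The grid, rule table, and random walk below are illustrative assumptions, not the prototyped algorithm.

```python
# Toy stigmergic construction loop: each agent decides purely from the
# locally sensed neighborhood of the shared structure.
import random

GRID = 25
structure = {(GRID // 2, GRID // 2)}            # seed block

def neighbor_count(cell):
    x, y = cell
    return sum((x + dx, y + dy) in structure
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)) - ((x, y) in structure)

def rule(neighbors):             # build where exactly 1 or 2 neighbors exist
    return neighbors in (1, 2)

agents = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(10)]
for step in range(2000):
    moved = []
    for (x, y) in agents:
        x = (x + random.choice((-1, 0, 1))) % GRID   # random walk
        y = (y + random.choice((-1, 0, 1))) % GRID
        if (x, y) not in structure and rule(neighbor_count((x, y))):
            structure.add((x, y))                    # deposit a block
        moved.append((x, y))
    agents = moved
```

Coordination emerges because each deposit changes what later agents sense, which is exactly the indirect communication channel both abstracts describe.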
Space simulation facilities providing a stable thermal vacuum facility
NASA Technical Reports Server (NTRS)
Tellalian, Martin L.
1990-01-01
CBI has recently constructed the Intermediate Thermal Vacuum Facility. Built as a corporate facility, the installation will first be used on the Boost Surveillance and Tracking System (BSTS) program. It will also be used to develop and test other sensor systems. The horizontal chamber has a horseshoe-shaped cross section and is supported on pneumatic isolators for vibration isolation. The chamber structure was designed to meet stability and stiffness requirements. The design process included measurement of the ambient ground vibrations, analysis of various foundation test article support configurations, design and analysis of the chamber shell, and modal testing of the chamber shell. A detailed 3-D finite element analysis was made in the design stage to predict the lowest three natural frequencies and mode shapes and to identify local vibrating components. The design process is described, and the results of the finite element analysis are compared to those of the field modal testing and analysis for the three lowest natural frequencies and mode shapes. Concepts are also presented for stiffening large steel structures, along with methods to improve test article stability in large space simulation facilities.
Chen, Changjun
2016-03-31
The free energy landscape is the most important piece of information in the study of the reaction mechanisms of molecules. However, it is difficult to calculate. In a large collective variable space, a molecule requires a long simulation time to obtain sufficient sampling. To reduce the computational cost, it is necessary in practice to restrict the sampling region and construct a local free energy landscape. However, the restricted region in the collective variable space may have an irregular shape, and simply restraining one or more collective variables of the molecule cannot satisfy the requirement. In this paper, we propose a modified tomographic method to perform the simulation. First, it divides the restricted region by hyperplanes and connects the centers of the hyperplanes by a curve. Second, it forces the molecule to sample on the curve and the hyperplanes during the simulation and calculates the free energy data on them. Finally, all the free energy data are combined to form the local free energy landscape. Without consideration of the area outside the restricted region, this free energy calculation can be more efficient. By this method, one can further optimize the path quickly in the collective variable space.
NASA Astrophysics Data System (ADS)
Venkatachari, Balaji Shankar; Chang, Chau-Lyan
2016-11-01
The focus of this study is scale-resolving simulations of the canonical normal-shock/isotropic-turbulence interaction using unstructured tetrahedral meshes and the space-time conservation element solution element (CESE) method. Despite decades of development in unstructured mesh methods and its potential benefits of ease of mesh generation around complex geometries and mesh adaptation, direct numerical or large-eddy simulations of turbulent flows are predominantly carried out using structured hexahedral meshes. This is due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for unstructured meshes that can resolve multiple physical scales and flow discontinuities simultaneously. The CESE method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to accurately simulate turbulent flows using tetrahedral meshes. As part of the study, various regimes of the shock-turbulence interaction (wrinkled and broken shock regimes) will be investigated along with a study on how adaptive refinement of tetrahedral meshes benefits this problem. The research funding for this paper has been provided by the Revolutionary Computational Aerosciences (RCA) subproject under the NASA Transformative Aeronautics Concepts Program (TACP).
Strategies for global optimization in photonics design.
Vukovic, Ana; Sewell, Phillip; Benson, Trevor M
2010-10-01
This paper reports on two important issues that arise in the context of the global optimization of photonic components where large problem spaces must be investigated. The first is the implementation of a fast simulation method and associated matrix solver for assessing particular designs and the second, the strategies that a designer can adopt to control the size of the problem design space to reduce runtimes without compromising the convergence of the global optimization tool. For this study an analytical simulation method based on Mie scattering and a fast matrix solver exploiting the fast multipole method are combined with genetic algorithms (GAs). The impact of the approximations of the simulation method on the accuracy and runtime of individual design assessments and the consequent effects on the GA are also examined. An investigation of optimization strategies for controlling the design space size is conducted on two illustrative examples, namely, 60° and 90° waveguide bends based on photonic microstructures, and their effectiveness is analyzed in terms of a GA's ability to converge to the best solution within an acceptable timeframe. Finally, the paper describes some particular optimized solutions found in the course of this work.
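A minimal genetic-algorithm loop of the kind coupled to such solvers is sketched below; the bit-string encoding and toy fitness function are placeholders for the Mie-scattering/fast-multipole design assessment the paper uses.

```python
# GA skeleton for a discrete design space: selection, crossover, mutation.
import random

N_GENES, POP, GENS = 20, 40, 100          # design bits, population, generations
TARGET = [i % 2 for i in range(N_GENES)]  # placeholder "optimal" design

def fitness(bits):                         # stand-in for the field solver
    return sum(b == t for b, t in zip(bits, TARGET))

def crossover(a, b):
    cut = random.randrange(1, N_GENES)     # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(N_GENES)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 4]                # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]
best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```

Since every fitness call invokes a full electromagnetic simulation in the real workflow, the paper's two concerns (a fast solver and a small design space) directly bound the GA's runtime.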
Artificial Hip Simulator with Crystal Models
1966-06-21
Robert Johnson, top, sets the lubricant flow while Donald Buckley adjusts the bearing specimen on an artificial hip simulator at the National Aeronautics and Space Administration (NASA) Lewis Research Center. The simulator was supplemented by large crystal lattice models to demonstrate the composition of different bearing alloys. This image by NASA photographer Paul Riedel was used for the cover of the August 15, 1966 edition of McGraw-Hill Product Engineering. Johnson was chief of the Lubrication Branch and Buckley head of the Space Environment Lubrication Section in the Fluid System Components Division. In 1962 they began studying the molecular structure of metals. Their friction and wear testing revealed that the optimal structure for metal bearings was a hexagonal crystal structure with proper molecular spacing. Bearing manufacturers traditionally preferred cubic structures over hexagonal arrangements. Buckley and Johnson found that even though the hexagonal structure was not as inherently strong as its cubic counterpart, it was less likely to cause a catastrophic failure. The Lewis researchers concentrated their efforts on cobalt-molybdenum and titanium alloys for high-temperature applications. The alloys had a number of possible uses, including prosthetics. The alloys were similar in composition to the commercial alloys used for prosthetics, but employed the longer-lasting hexagonal structure.
Early Results from Solar Dynamic Space Power System Testing
NASA Technical Reports Server (NTRS)
Shaltens, Richard K.; Mason, Lee S.
1996-01-01
A government/industry team designed, built and tested a 2-kWe solar dynamic space power system in a large thermal vacuum facility with a simulated Sun at the NASA Lewis Research Center. The Lewis facility provides an accurate simulation of temperatures, high vacuum and solar flux as encountered in low-Earth orbit. The solar dynamic system includes a Brayton power conversion unit integrated with a solar receiver which is designed to store energy for continuous power operation during the eclipse phase of the orbit. This paper reviews the goals and status of the Solar Dynamic Ground Test Demonstration project and describes the initial testing, including both operational and performance data. System testing to date has accumulated over 365 hours of power operation (ranging from 400 watts to 2.0 kWe), including 187 simulated orbits, 16 ambient starts and 2 hot restarts. Data are shown for an orbital startup, transient and steady-state orbital operation and shutdown. System testing with varying insolation levels and operating speeds is discussed. The solar dynamic ground test demonstration is providing the experience and confidence toward a successful flight demonstration of the solar dynamic technologies on the Space Station Mir in 1997.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.
Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM Madden-Julian Oscillation Investigation Experiment / Dynamics of the Madden-Julian Oscillation (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and parameterization of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates the rain rate from large and deep convective cores. Sensitivity runs involving variation of parameters that affect the rain drop or ice particle size distribution (e.g., a more aggressive break-up process) generally reduce the bias in rain-rate and boundary layer temperature statistics as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while it is worsened when run at 4 km grid spacing as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through parameterization of turbulent mixing and break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud-permitting model simulations.
Impact of Ice Morphology on Design Space of Pharmaceutical Freeze-Drying.
Goshima, Hiroshika; Do, Gabsoo; Nakagawa, Kyuya
2016-06-01
It is known that the sublimation kinetics of a freeze-drying product are affected by its internal ice crystal microstructures. This article demonstrates the impact of the ice morphologies of a frozen formulation in a vial on the design space for the primary drying of a pharmaceutical freeze-drying process. Cross-sectional images of frozen sucrose-bovine serum albumin aqueous solutions were observed optically and digital pictures were acquired. Binary images were obtained from the optical data to extract the geometrical parameters (i.e., ice crystal size and tortuosity) that relate to the mass-transfer resistance of water vapor during the primary drying step. A mathematical model was used to simulate the primary drying kinetics and provided the design space for the process. The simulation results predicted that the geometrical parameters of frozen solutions significantly affect the design space, with large and less tortuous ice morphologies resulting in wide design spaces and vice versa. The optimal applicable drying conditions are influenced by the ice morphologies. Therefore, owing to the spatial distributions of the geometrical parameters of a product, the boundary curves of the design space are variable and could be tuned by controlling the ice morphologies.
CCMC: bringing space weather awareness to the next generation
NASA Astrophysics Data System (ADS)
Chulaki, A.; Muglach, K.; Zheng, Y.; Mays, M. L.; Kuznetsova, M. M.; Taktakishvili, A.; Collado-Vega, Y. M.; Rastaetter, L.; Mendoza, A. M. M.; Thompson, B. J.; Pulkkinen, A. A.; Pembroke, A. D.
2017-12-01
Making space weather an element of core education is critical for the future of the young field of space weather. The Community Coordinated Modeling Center (CCMC) is an interagency partnership established to aid the transition of modern space science models into space weather forecasting while supporting space science research. Additionally, over the past ten years it has established itself as a global space science education resource, supporting undergraduate and graduate education and research and spreading space weather awareness worldwide. A unique combination of assets, capabilities, and close ties to the scientific and educational communities enables our small group to serve as a hub for rising generations of young space scientists and engineers. CCMC offers a variety of educational tools and resources that are publicly available online and provide access to the largest collection of modern space science models developed by the international research community. CCMC has revolutionized the way these simulations are utilized in classroom settings, student projects, and scientific labs. Every year, this online system serves hundreds of students, educators, and researchers worldwide. Another major CCMC asset is an expert space weather prototyping team primarily serving NASA's interplanetary space weather needs. Capitalizing on its unique capabilities and experience, the team also provides in-depth space weather training to hundreds of students and professionals. One training module offers undergraduates an opportunity to actively engage in real-time space weather monitoring, analysis, forecasting, tool development, and research, eventually serving remotely as NASA space weather forecasters. In yet another project, CCMC is collaborating with the Hayden Planetarium and Linkoping University on a visualization platform for planetariums (and classrooms) that provides simulations of dynamic processes in the large domain stretching from the solar corona to the Earth's upper atmosphere, for near-real-time and historical space weather events.
NASA Astrophysics Data System (ADS)
Kendall, A. D.; Deines, J. M.; Hyndman, D. W.
2017-12-01
Irrigation technologies are changing: becoming more efficient, better managed, and capable of more precise targeting. Widespread adoption of these technologies is shifting water balances and significantly altering the hydrologic cycle in some of the largest irrigated regions in the world, such as the High Plains Aquifer of the USA. There, declining groundwater resources, increased competition from alternate uses, changing surface water supplies, and increased subsidies and incentives are pushing farmers to adopt these new technologies. Their decisions about adoption, irrigation extent, and total water use are largely unrecorded, limiting critical data for what is the single largest consumptive water use globally. Here, we present a novel data fusion of an annual water use and technology database in Kansas with our recent remotely sensed Annual Irrigation Maps (AIM) dataset to produce a spatially and temporally complete record of these decisions. We then use this fusion to drive the Landscape Hydrologic Model (LHM), which simulates the full terrestrial water cycle at hourly timesteps for large regions. The irrigation module within LHM explicitly simulates each major irrigation technology, allowing for a comprehensive evaluation of changes in irrigation water use over time and space. Here we simulate 2000-2016, a period which includes a major increase in the use of modern efficient irrigation technology (such as Low Energy Precision Application, LEPA) as well as both drought and relatively wet periods. Impacts on water use are presented through time and space, along with implications for adopting these technologies across the USA and globally.
NASA Astrophysics Data System (ADS)
Huang, Jun-Wei; Bellefleur, Gilles; Milkereit, Bernd
2012-02-01
We present a conditional simulation algorithm to parameterize three-dimensional heterogeneities and construct heterogeneous petrophysical reservoir models. The models match the data at borehole locations, simulate heterogeneities at the same resolution as borehole logging data elsewhere in the model space, and simultaneously honor the correlations among multiple rock properties. The model provides a heterogeneous environment in which a variety of geophysical experiments can be simulated. This includes the estimation of petrophysical properties and the study of geophysical response to the heterogeneities. As an example, we model the elastic properties of a gas hydrate accumulation located at Mallik, Northwest Territories, Canada. The modeled properties include compressional and shear-wave velocities that primarily depend on the saturation of hydrate in the pore space of the subsurface lithologies. We introduce the conditional heterogeneous petrophysical models into a finite difference modeling program to study seismic scattering and attenuation due to multi-scale heterogeneity. Similarities between resonance scattering analysis of synthetic and field Vertical Seismic Profile data reveal heterogeneity with a horizontal scale of approximately 50 m in the shallow part of the gas hydrate interval. A cross-borehole numerical experiment demonstrates that apparent seismic energy loss can occur in a pure elastic medium without any intrinsic attenuation of hydrate-bearing sediments. This apparent attenuation is largely attributed to attenuative leaky mode propagation of seismic waves through large-scale gas hydrate occurrence as well as scattering from patchy distribution of gas hydrate.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-03-01
With the formation of competitive electricity markets around the world, optimization of bidding strategies has become one of the main topics in studies of market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely a genetic algorithm (GA), simulated annealing (SA), and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market with a view to maximizing their profit given the other generators' strategies, and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. The model, based on actual data, is implemented for a peak hour of Tehran's wholesale spot market in 2012. The results of the simulations show that GA outperforms SA and HSAGA in computing time, number of function evaluations, and computing stability, and the Nash equilibria calculated by GA vary less from one another than those of the other algorithms.
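One way to cast the equilibrium search as minimization, as the paper describes, is to anneal a "Nash gap" that vanishes exactly at equilibrium: the total profit each player could still gain by unilaterally deviating. The profit model, bid grid, and cooling schedule below are toy assumptions, not the paper's market model.

```python
# Simulated annealing over discrete bid profiles, minimizing a Nash gap.
import math, random

N = 3                                   # generators
BIDS = [20 + 2 * k for k in range(15)]  # discrete bid prices

def profit(i, profile):                 # toy clearing-price profit model
    price = sorted(profile)[1]          # crude uniform clearing price
    return (price - 15.0) * (1.0 if profile[i] <= price else 0.0)

def nash_gap(profile):                  # 0 exactly at a Nash equilibrium
    gap = 0.0
    for i in range(N):
        best = max(profit(i, profile[:i] + [b] + profile[i+1:]) for b in BIDS)
        gap += best - profit(i, profile)
    return gap

state = [random.choice(BIDS) for _ in range(N)]
T = 5.0
while T > 1e-3:
    cand = state[:]
    cand[random.randrange(N)] = random.choice(BIDS)  # perturb one bid
    d = nash_gap(cand) - nash_gap(state)
    if d <= 0 or random.random() < math.exp(-d / T):
        state = cand
    T *= 0.999                           # geometric cooling
```

A GA or hybrid variant replaces the single-state anneal with a population, which is the comparison the paper carries out.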
Coded-aperture Compton camera for gamma-ray imaging
NASA Astrophysics Data System (ADS)
Farber, Aaron M.
This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.
Preliminary Study Using Forward Reaction Control System Jets During Space Shuttle Entry
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Valasek, John
2006-01-01
Failure or degradation of the flight control system, or hull damage, can lead to loss of vehicle control during entry. Possible failure scenarios are debris impact and wing damage that could result in a large aerodynamic asymmetry which cannot be trimmed out without additional yaw control. Currently the space shuttle uses aerodynamic control surfaces and Reaction Control System jets to control attitude. The forward jets are used for orbital maneuvering only, while the aft jets are used for yaw control during entry. This paper develops a controller for using the forward reaction control system jets as an additional control during entry, and assesses its value and feasibility during failure situations. Forward-aft jet blending logic is created and implemented on a simplified model of the space shuttle entry flight control system. The model is validated and verified on the nonlinear, six-degree-of-freedom Shuttle Engineering Simulator. A rudimentary human factors study was undertaken using the forward cockpit simulator at Johnson Space Center to assess the flying qualities of the new system and pilot workload. Results presented in the paper show that the combination of forward and aft jets provides useful additional yaw control, along with potential fuel savings and the ability to balance fuel use between the forward and aft tanks to meet their availability constraints. Piloted simulation studies indicated that using both sets of jets while flying a damaged space shuttle reduces pilot workload and makes the vehicle more responsive.
Output-Based Adaptive Meshing Applied to Space Launch System Booster Separation Analysis
NASA Technical Reports Server (NTRS)
Dalle, Derek J.; Rogers, Stuart E.
2015-01-01
This paper presents details of Computational Fluid Dynamics (CFD) simulations of the Space Launch System during solid-rocket booster separation using the Cart3D inviscid code, with comparisons to Overflow viscous CFD results and a wind tunnel test performed at NASA Langley Research Center's Unitary Plan Wind Tunnel. The Space Launch System (SLS) launch vehicle includes two solid-rocket boosters that burn out before the primary core stage and thus must be discarded during the ascent trajectory. The main challenges for creating an aerodynamic database for this separation event are the large number of basis variables (including orientation of the core, relative position and orientation of the boosters, and rocket thrust levels) and the complex flow caused by the booster separation motors. The solid-rocket boosters are modified from their form when used with the Space Shuttle Launch Vehicle, which has a rich flight history. However, the differences between the SLS core and the Space Shuttle External Tank result in the boosters separating with much narrower clearances, so reducing aerodynamic uncertainty is necessary to clear the integrated system for flight. This paper discusses an approach that has been developed to analyze about 6000 wind tunnel simulations and 5000 flight vehicle simulations using Cart3D in adaptive-meshing mode. In addition, a discussion is presented of Overflow viscous CFD runs used for uncertainty quantification. Finally, the article presents lessons learned and improvements that will be implemented in future separation databases.
A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma
Ku, S.; Hager, R.; Chang, C. S.; ...
2016-04-01
In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes the phase space grid in addition to the usual marker particles, taking advantage of the computational strengths from both sides. The new scheme splits the particle distribution function of a kinetic equation into two parts. Marker particles contain the fast space-time varying, δf, part of the distribution function and the coarse-grained phase-space grid contains the slow space-time varying part. The coarse-grained phase-space grid reduces the memory-requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential form weight evolution equations that the conventional delta-f schemes use. The particle weight can be slowly transferred to the phase space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation – e.g., collision operation, ionization, charge exchange, heat-source, radiative cooling, and others – can be operated directly on the phase space grid. Deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function – driven by ionization, charge exchange and wall loss – is allowed to be arbitrarily large. In conclusion, the numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.
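The weight-transfer mechanism can be caricatured in a few lines; the sketch below is a schematic stand-in (1D-1V phase space, nearest-grid-point deposition, an ad hoc weight drive), not XGC1.

```python
# Cartoon of hybrid-Lagrangian weight transfer: marker particles carry the
# fast-varying delta-f weights, and a fraction is deposited each step onto a
# coarse phase-space grid holding the slow part. All rates are assumptions.
import numpy as np

rng = np.random.default_rng(1)
NP, NX, NV = 10000, 32, 32
x = rng.uniform(0, 1, NP)
v = rng.normal(0, 1, NP)
w = np.zeros(NP)                         # marker weights (fast delta-f part)
F0 = np.zeros((NX, NV))                  # slow part on the phase-space grid

alpha = 0.05                             # fraction transferred per step
for step in range(100):
    w += 1e-3 * np.sin(2 * np.pi * x)    # stand-in for weight evolution
    ix = np.clip((x * NX).astype(int), 0, NX - 1)
    iv = np.clip(((v + 4) / 8 * NV).astype(int), 0, NV - 1)
    np.add.at(F0, (ix, iv), alpha * w)   # deposit slow part onto the grid
    w *= (1 - alpha)                     # keeps marker weights from growing
    x = (x + 0.01 * v) % 1.0             # free streaming of markers
```

The point of the split is visible even in the cartoon: grid operations (collisions, sources) act on `F0`, while the markers retain only the bounded, fine-grained residual.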
Fifteenth Space Simulation Conference: Support the Highway to Space Through Testing
NASA Technical Reports Server (NTRS)
Stecher, Joseph (Editor)
1988-01-01
The Institute of Environmental Sciences Fifteenth Space Simulation Conference, Support the Highway to Space Through Testing, provided participants a forum to acquire and exchange information on the state-of-the-art in space simulation, test technology, thermal simulation and protection, contamination, and techniques of test measurements.
Fourteenth Space Simulation Conference: Testing for a Permanent Presence in Space
NASA Technical Reports Server (NTRS)
Stecher, Joseph L., III (Editor)
1986-01-01
The Institute of Environmental Sciences Fourteenth Space Simulation Conference, Testing for a Permanent Presence in Space, provided participants a forum to acquire and exchange information on the state-of-the-art in space simulation, test technology, thermal simulation and protection, contamination, and techniques of test measurements.
The Strata-1 Regolith Dynamics Experiment: Class 1E Science on ISS
NASA Technical Reports Server (NTRS)
Fries, Marc; Graham, Lee; John, Kristen
2016-01-01
The Strata-1 experiment studies the evolution of small body regolith through long-duration exposure of simulant materials to the microgravity environment on the International Space Station (ISS). This study will record segregation and mechanical dynamics of regolith simulants in a microgravity and vibration environment similar to that experienced by regolith on small Solar System bodies. Strata-1 will help us understand regolith dynamics and will inform design and procedures for landing and setting anchors, safely sampling and moving material on asteroidal surfaces, processing large volumes of material for in situ resource utilization (ISRU) purposes, and, in general, predicting the behavior of large and small particles on disturbed asteroid surfaces. This experiment is providing new insights into small body surface evolution.
An Electrostatic Precipitator System for the Martian Environment
NASA Technical Reports Server (NTRS)
Calle, C. I.; Mackey, P. J.; Hogue, M. D.; Johansen, M. R.; Phillips, J. R., III; Clements, J. S.
2012-01-01
Human exploration missions to Mars will require the development of technologies for the utilization of the planet's own resources for the production of commodities. However, the Martian atmosphere contains large amounts of dust. The extraction of commodities from this atmosphere requires prior removal of this dust. We report on our development of an electrostatic precipitator able to collect Martian simulated dust particles in atmospheric conditions approaching those of Mars. Extensive experiments with an initial prototype in a simulated Martian atmosphere showed efficiencies of 99%. The design of a second prototype with aerosolized Martian simulated dust in a flow-through configuration is described. Keywords: Space applications, electrostatic precipitator, particle control, particle charging
Rapid prototyping and AI programming environments applied to payload modeling
NASA Technical Reports Server (NTRS)
Carnahan, Richard S., Jr.; Mendler, Andrew P.
1987-01-01
This effort focused on using artificial intelligence (AI) programming environments and rapid prototyping to aid in both space flight manned and unmanned payload simulation and training. Significant problems addressed are the large amount of development time required to design and implement just one of these payload simulations and the relative inflexibility of the resulting model to accepting future modification. Results of this effort have suggested that both rapid prototyping and AI programming environments can significantly reduce development time and cost when applied to the domain of payload modeling for crew training. The techniques employed are applicable to a variety of domains where models or simulations are required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halavanau, A.; Piot, P.
2015-06-01
In a cascaded longitudinal space-charge amplifier (LSCA), initial density noise in a relativistic e-beam is amplified via the interplay of longitudinal space charge forces and properly located dispersive sections. This type of amplification process was shown to potentially result in large final density modulations [1] compatible with the production of broadband electromagnetic radiation. The technique was recently demonstrated in the optical domain [2]. In this paper we investigate, via numerical simulations, the performance of a cascaded LSCA beamline at Fermilab's Advanced Superconducting Test Accelerator (ASTA). We especially explore the properties of the produced broadband radiation. Our studies have been conducted with a grid-less three-dimensional space-charge algorithm.
Design and fabrication of giant micromirrors using electroplating-based technology
NASA Astrophysics Data System (ADS)
Ilias, Samir; Topart, Patrice A.; Larouche, Carl; Leclair, Sebastien; Jerominek, Hubert
2005-01-01
Giant micromirrors with large scanning deflection and good flatness are required for many space and terrestrial applications. A novel approach to manufacturing this category of micromirrors is proposed. The approach combines selective electroplating and flip-chip based technologies. It allows for large air gaps and flat, smooth active micromirror surfaces, and it permits independent fabrication of the micromirrors and control electronics, avoiding temperature and sacrificial layer incompatibilities between them. In this work, electrostatically actuated piston and torsion micromirrors were designed and simulated. The simulated structures were designed to allow large deflection, i.e., piston displacement larger than 10 um and torsional deflection up to 35°. To achieve large micromirror deflections, resists up to seventy microns thick were used as micromolds for nickel and solder electroplating. Smooth micromirror surfaces (roughness lower than 5 nm rms) and large radii of curvature (R as large as 23 cm for a typical 1000x1000 um2 micromirror fabricated without address circuits) were achieved. A detailed fabrication process is presented. First piston mirror prototypes were fabricated, and a preliminary evaluation of the static deflection of a piston mirror is presented.
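For scale, the classical parallel-plate pull-in estimate (a textbook relation, not taken from this paper) indicates the drive voltages such large gaps imply; every value below is an assumption for illustration.

```python
# Parallel-plate pull-in estimate: V_pi = sqrt(8 k g^3 / (27 eps0 A)).
# Stable electrostatic piston travel is limited to about g/3.
import math

eps0 = 8.854e-12            # vacuum permittivity, F/m
A = (1000e-6) ** 2          # 1000 x 1000 um^2 mirror area (assumed)
g = 70e-6                   # electrode gap set by the thick resist mold, m
k = 5.0                     # assumed suspension spring constant, N/m

V_pi = math.sqrt(8 * k * g ** 3 / (27 * eps0 * A))
print(f"pull-in voltage ~ {V_pi:.0f} V")
```

The cubic dependence on gap shows why tens-of-microns air gaps, which enable the large deflections the paper targets, also push drive voltages into the hundreds of volts unless the suspension is made very compliant.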
NASA Astrophysics Data System (ADS)
Khoei, A. R.; Samimi, M.; Azami, A. R.
2007-02-01
In this paper, an application of the reproducing kernel particle method (RKPM) is presented for the plasticity behavior of pressure-sensitive materials. The RKPM technique is implemented in large deformation analysis of the powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions. The essential boundary conditions are enforced by use of the penalty approach. The support of the RKPM shape function covers the same set of particles during powder compaction; hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed for the numerical simulation of pressure-sensitive materials. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and the hydrostatic axis. The moving cap expands in the stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations are performed to demonstrate the applicability of the algorithm in modeling powder forming processes, and the results are compared to those obtained from finite element simulation to demonstrate the accuracy of the proposed model.
Enabling parallel simulation of large-scale HPC network systems
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...
2016-04-07
Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
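For readers unfamiliar with the discrete-event core that frameworks like ROSS parallelize, a minimal sequential kernel looks like the following; the toy network, handlers, and latencies are illustrative assumptions only.

```python
# Minimal discrete-event simulation kernel: a priority queue of timestamped
# events, each handler possibly scheduling future events.
import heapq, random

events = []                                    # (time, seq, handler, payload)
seq = 0

def schedule(t, handler, payload):
    global seq
    heapq.heappush(events, (t, seq, handler, payload))
    seq += 1                                   # tie-breaker for equal times

def packet_arrival(t, node):
    hop_delay = random.expovariate(1.0)        # toy link latency
    if node < 4:                               # forward along a 5-node path
        schedule(t + hop_delay, packet_arrival, node + 1)

for i in range(10):                            # inject traffic at node 0
    schedule(random.uniform(0, 5), packet_arrival, 0)

now = 0.0
while events:
    now, _, handler, payload = heapq.heappop(events)
    handler(now, payload)
print("simulation ended at t =", round(now, 2))
```

Optimistic engines such as ROSS let many of these event loops run in parallel and roll back speculatively executed events when causality is violated, which is what makes flit-level fidelity tractable at scale.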
PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems
NASA Technical Reports Server (NTRS)
Maghami, Periman; Kenny, Sean P.; Giesy, Daniel P.
1995-01-01
PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model, which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in the MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle Graphics to provide a convenient way of setting simulation and analysis parameters.
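The frequency-domain operation described here can be sketched with standard modal superposition, H(w) = sum_k phi_k(in) * phi_k(out) / (w_k^2 - w^2 + 2j zeta_k w_k w); the sketch below (in Python rather than PLATSIM's MATLAB, with random placeholder modal data) is not PLATSIM's algorithm, only the textbook form of the computation.

```python
# Frequency response of a large-order flexible structure via modal
# superposition; modal frequencies, damping, and shapes are placeholders.
import numpy as np

n_modes, n_dof = 200, 3
rng = np.random.default_rng(0)
wk = np.sort(rng.uniform(0.1, 100.0, n_modes)) * 2 * np.pi   # rad/s
zeta = np.full(n_modes, 0.005)                               # modal damping
phi = rng.standard_normal((n_modes, n_dof))                  # mode shapes

def frf(w, in_dof=0, out_dof=1):
    den = wk**2 - w**2 + 2j * zeta * wk * w
    return np.sum(phi[:, in_dof] * phi[:, out_dof] / den)

freqs = np.linspace(0.1, 50, 1000) * 2 * np.pi
H = np.array([frf(w) for w in freqs])
```

Because each mode contributes independently, the cost grows only linearly with mode count, which is consistent with the efficiency claim for models with very many modes.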
Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration
NASA Astrophysics Data System (ADS)
Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.
2017-06-01
Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which is used to assess an explosive material's initiation behavior. Such data can be utilized to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. Such simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (SNL hydrocode) in 1D, 2D, and 3D space in order to determine if there was any justification in using simplified models. A simulation was also performed using the BCAT code (CTH companion tool) that assumes a plate impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to only affect numerical predictions for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
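Pop-plot data are conventionally summarized as a power law between run distance and input pressure, log10(x*) = a + b log10(P); the snippet below fits that form to made-up points (not ARL data) to show the kind of target a reactive-flow calibration tries to reproduce.

```python
# Fit the conventional pop-plot power law to hypothetical wedge-test points.
import numpy as np

P = np.array([3.0, 5.0, 8.0, 12.0])      # input shock pressure, GPa (made up)
x = np.array([12.0, 6.0, 3.0, 1.5])      # run-to-detonation distance, mm

b, a = np.polyfit(np.log10(P), np.log10(x), 1)   # slope, intercept
print(f"log10(x*) = {a:.2f} + {b:.2f} log10(P)")
# Calibration then adjusts reactive-flow parameters until hydrocode runs
# reproduce this line across the measured pressure range.
```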
Probing Massive Black Hole Populations and Their Environments with LISA
NASA Astrophysics Data System (ADS)
Katz, Michael; Larson, Shane
2018-01-01
With the adoption of the LISA Mission Proposal by the European Space Agency in response to its call for L3 mission concepts, gravitational wave measurements from space are on the horizon. With data from the Illustris large-scale cosmological simulation, we provide analysis of LISA detection rates accompanied by characterization of the merging Massive Black Holes (MBH) and their host galaxies. MBHs of total mass $\sim10^6-10^9 M_\odot$ are the main focus of this study. Using a precise treatment of the dynamical friction evolutionary process prior to gravitational wave emission, we evolve MBH simulation particle mergers from $\sim$kpc scales until coalescence to achieve a merger distribution. Using the statistical basis of the Illustris output, we Monte Carlo synthesize many realizations of the merging massive black hole population across space and time. We use those realizations to build mock LISA detection catalogs to understand the impact of LISA mission configurations on our ability to probe massive black hole merger populations and their environments throughout the visible Universe.
Eight critical factors in creating and implementing a successful simulation program.
Lazzara, Elizabeth H; Benishek, Lauren E; Dietz, Aaron S; Salas, Eduardo; Adriansen, David J
2014-01-01
Recognizing the need to minimize human error and adverse events, clinicians, researchers, administrators, and educators have strived to enhance clinicians' knowledge, skills, and attitudes through training. Given the risks inherent in learning new skills or advancing underdeveloped skills on actual patients, simulation-based training (SBT) has become an invaluable tool across the medical education spectrum. The large simulation, training, and learning literature was used to provide a synthesized yet innovative and "memorable" heuristic of the important facets of simulation program creation and implementation, as represented by eight critical "S" factors: science, staff, supplies, space, support, systems, success, and sustainability. These critical factors advance earlier work that primarily focused on the science of SBT success, to also include more practical, perhaps even seemingly obvious but significantly challenging components of SBT, such as resources, space, and supplies. One of the eight critical factors, systems, refers to the need to match fidelity requirements to training needs and ensure that technological infrastructure is in place. The type of learning objectives that the training is intended to address should determine these requirements. For example, some simulators emphasize physical fidelity to enable clinicians to practice technical and nontechnical skills in a safe environment that mirrors real-world conditions. Such simulators are most appropriate when trainees are learning how to use specific equipment or conduct specific procedures. Together, the eight factors represent a synthesis of the most critical elements necessary for successful simulation programs. The order of the factors does not represent a deliberate prioritization or sequence, and the factors' relative importance may change as the program evolves.
Laboratory observation of electron phase-space holes during magnetic reconnection.
Fox, W; Porkolab, M; Egedal, J; Katz, N; Le, A
2008-12-19
We report the observation of large-amplitude, nonlinear electrostatic structures, identified as electron phase-space holes, during magnetic reconnection experiments on the Versatile Toroidal Facility at MIT. The holes are positive electric potential spikes, observed on high-bandwidth (approximately 2 GHz) Langmuir probes. Investigations with multiple probes establish that the holes travel at or above the electron thermal speed and have a three-dimensional, approximately spherical shape, with a scale size of approximately 2 mm. This corresponds to a few electron gyroradii, or many tens of Debye lengths, which is large compared to holes considered in simulations and observed by satellites, whose length scale is typically only a few Debye lengths. Finally, a statistical study over many discharges confirms that the holes appear in conjunction with the large inductive electric fields and the creation of energetic electrons associated with the magnetic energy release.
SiMon: Simulation Monitor for Computational Astrophysics
NASA Astrophysics Data System (ADS)
Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming
2017-09-01
Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but often other factors cause extensive numerical approaches to break down at an earlier stage. In those cases, processes tend to be interrupted due to unexpected events in the software or the hardware, and the scientist handles the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is lightweight, fully automates the entire workflow management, operates concurrently across multiple platforms, and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, and running simulations becomes analogous to growing crops. With the development of SiMon we ease the technical burden of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.
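The core of such a monitor is a polling loop that detects finished or crashed jobs and restarts the latter automatically. The sketch below illustrates the idea with Python's standard subprocess module; it is not SiMon's actual interface, and the job scripts named are placeholders.

```python
import subprocess
import time

# Commands for the simulations to "farm"; script names are placeholders.
jobs = {"run_a": ["python", "sim_a.py"], "run_b": ["python", "sim_b.py"]}
procs = {name: subprocess.Popen(cmd) for name, cmd in jobs.items()}

while procs:
    time.sleep(10)  # polling interval
    for name, p in list(procs.items()):
        code = p.poll()
        if code is None:
            continue                  # still running
        if code == 0:
            print(name, "reached its termination condition")
            del procs[name]
        else:
            # Unexpected software/hardware event: restart without human action.
            print(f"{name} crashed (exit {code}); restarting")
            procs[name] = subprocess.Popen(jobs[name])
```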
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varble, Adam; Zipser, Edward J.; Fridlind, Ann M.
2014-12-18
Ten 3D cloud-resolving model (CRM) simulations and four 3D limited area model (LAM) simulations of an intense mesoscale convective system observed on 23-24 January 2006 during the Tropical Warm Pool – International Cloud Experiment (TWP-ICE) are compared with each other and with observed radar reflectivity fields and dual-Doppler retrievals of vertical wind speeds, in an attempt to explain published results showing a high bias in simulated convective radar reflectivity aloft. This high bias results from excessively large ice water content, a product of large, strong convective updrafts, although hydrometeor size distribution assumptions modulate the size of the bias. Making snow mass more realistically proportional to D² rather than D³ eliminates unrealistically large snow reflectivities over 40 dBZ in some simulations. Graupel, unlike snow, produces high-biased reflectivity in all simulations, which is partly a result of parameterized microphysics, but also partly a result of overly intense simulated updrafts. Peak vertical velocities in deep convective updrafts are greater than dual-Doppler retrieved values, especially in the upper troposphere. Freezing of liquid condensate, often rain, lofted above the freezing level in simulated updraft cores greatly contributes to these excessive upper-tropospheric vertical velocities. The strongest simulated updraft cores are nearly undiluted, with some of the strongest showing supercell characteristics during the multicellular (pre-squall) stage of the event. Decreasing the horizontal grid spacing from 900 to 100 meters slightly weakens deep updraft vertical velocity and moderately decreases the amount of condensate aloft, but not enough to match observational retrievals. Therefore, overly intense simulated updrafts may additionally be a product of unrealistic interactions between convective dynamics, parameterized microphysics, and the large-scale model forcing that promote different convective strengths than observed.
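The sensitivity to the mass-diameter exponent follows from Rayleigh-regime scattering, where ice reflectivity is proportional to the integral of N(D) m(D)² over the size distribution, so m ∝ D² strongly damps the contribution of the largest snowflakes relative to m ∝ D³. A hedged numerical illustration with an assumed exponential size distribution and placeholder coefficients:

```python
import numpy as np

# Exponential size distribution N(D) = N0 * exp(-lam*D); placeholder parameters.
D = np.linspace(1e-4, 2e-2, 2000)   # particle diameter [m]
N0, lam = 1e7, 500.0                # intercept [m^-4] and slope [m^-1]
N = N0 * np.exp(-lam * D)

def reflectivity(mass_exponent, prefactor=50.0):
    """Rayleigh-regime ice reflectivity is proportional to the integral of
    N(D) * m(D)^2, where m(D) = a * D^b; prefactor a is illustrative only."""
    m = prefactor * D ** mass_exponent
    return np.trapz(N * m**2, D)

z3 = reflectivity(3.0)   # snow mass proportional to D^3
z2 = reflectivity(2.0)   # snow mass proportional to D^2
print("reflectivity reduction, D^3 vs D^2: %.1f dB" % (10 * np.log10(z3 / z2)))
```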
Simulating the cold dark matter-neutrino dipole with TianNu
Inman, Derek; Yu, Hao-Ran; Zhu, Hong-Ming; ...
2017-04-20
Measurements of neutrino mass in cosmological observations rely on two-point statistics that are hindered by significant degeneracies with the optical depth and galaxy bias. The relative velocity effect between cold dark matter and neutrinos induces a large-scale dipole in the matter density field and may be able to provide orthogonal constraints to standard techniques. In this paper, we numerically investigate this dipole in the TianNu simulation, which contains cold dark matter and 50 meV neutrinos. We first compute the dipole using a new linear response technique in which we treat the displacement caused by the relative velocity as a phase in Fourier space and then integrate the matter power spectrum over redshift. Then, we compute the dipole numerically in real space using the simulation density and velocity fields. We find excellent agreement between the linear response and N-body methods. Finally, utilizing the dipole as an observational tool requires two tracers of the matter distribution that are differently biased with respect to the neutrino density.
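The key operation in the linear response technique, treating a relative displacement d as a phase factor exp(ikd) in Fourier space, is equivalent to a real-space translation of the field. A minimal 1D sketch with a toy periodic density field (values illustrative):

```python
import numpy as np

# 1D toy density field on a periodic grid.
n, box = 256, 100.0                       # cells, box size [Mpc/h]
x = np.linspace(0, box, n, endpoint=False)
delta = np.sin(2 * np.pi * 3 * x / box)   # toy overdensity, mode 3

d = 1.5                                   # relative displacement [Mpc/h]
k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)   # physical wavenumbers

# Multiplying each Fourier mode by exp(-i*k*d) shifts the field by +d.
delta_shifted = np.fft.ifft(np.fft.fft(delta) * np.exp(-1j * k * d)).real

# Verify the phase factor reproduces the exact real-space translation.
assert np.allclose(delta_shifted, np.sin(2 * np.pi * 3 * (x - d) / box), atol=1e-8)
```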
NASA Technical Reports Server (NTRS)
Kennedy, Robert S.; Drexler, Julie M.; Compton, Daniel E.; Stanney, Kay M.; Lanham, Susan; Harm, Deborah L.
2001-01-01
From a survey of ten U.S. Navy flight simulators, a large number (N > 1,600 exposures) of self-reports of motion sickness symptomatology were obtained. Using these data, scoring algorithms were derived which permit groups of individuals to be scored either for 1) their total sickness experience in a particular device or 2) three separable symptom clusters which emerged from a factor analysis. The total score is proportional to other global motion sickness symptom checklist scores, with which it correlates (r = 0.82). The factors that surfaced from the analysis comprise clusters of symptoms referable to nausea, oculomotor disturbances, and disorientation (N, O, and D). The factor scores may have utility in differentiating the source of symptoms in different devices. The present chapter describes our experience with the use of both of these types of scores and illustrates their use with examples from flight simulators, space sickness, and virtual environments.
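Cluster-based scoring of this kind reduces to weighted sums of symptom severity ratings. The sketch below follows the published SSQ convention (unit weights 9.54, 7.58, 13.92 for N, O, D and 3.74 for the total), but the symptom set, ratings, and cluster assignments shown are simplified illustrations, not the full instrument.

```python
# 0-3 severity ratings for a small illustrative symptom set (hypothetical).
symptoms = {"nausea": 2, "sweating": 1, "eyestrain": 3,
            "difficulty focusing": 2, "dizziness": 1, "vertigo": 0}

# Simplified cluster membership for Nausea (N), Oculomotor (O), Disorientation (D).
clusters = {
    "N": ["nausea", "sweating"],
    "O": ["eyestrain", "difficulty focusing"],
    "D": ["dizziness", "vertigo"],
}
scale = {"N": 9.54, "O": 7.58, "D": 13.92}   # SSQ-style unit weights

scores = {c: scale[c] * sum(symptoms[s] for s in names)
          for c, names in clusters.items()}
total = 3.74 * sum(symptoms[s] for names in clusters.values() for s in names)
print(scores, "total:", round(total, 1))
```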
Simulations of radiation-damaged 3D detectors for the Super-LHC
NASA Astrophysics Data System (ADS)
Pennicard, D.; Pellegrini, G.; Fleta, C.; Bates, R.; O'Shea, V.; Parkes, C.; Tartoni, N.
2008-07-01
Future high-luminosity colliders, such as the Super-LHC at CERN, will require pixel detectors capable of withstanding extremely high radiation damage. In this article, the performances of various 3D detector structures are simulated with up to 1×10^16 1 MeV n_eq/cm^2 radiation damage. The simulations show that 3D detectors have higher collection efficiency and lower depletion voltages than planar detectors due to their small electrode spacing. When designing a 3D detector with a large pixel size, such as an ATLAS sensor, different electrode column layouts are possible. Using a small number of n+ readout electrodes per pixel leads to higher depletion voltages and lower collection efficiency, due to the larger electrode spacing. Conversely, using more electrodes increases both the insensitive volume occupied by the electrode columns and the capacitive noise. Overall, the best performance after 1×10^16 1 MeV n_eq/cm^2 damage is achieved by using 4-6 n+ electrodes per pixel.
On the relevance of chaos for halo stars in the solar neighbourhood II
NASA Astrophysics Data System (ADS)
Maffione, Nicolas P.; Gómez, Facundo A.; Cincotta, Pablo M.; Giordano, Claudia M.; Grand, Robert J. J.; Marinacci, Federico; Pakmor, Rüdiger; Simpson, Christine M.; Springel, Volker; Frenk, Carlos S.
2018-05-01
In a previous paper based on dark-matter-only simulations we showed that, in the approximation of an analytic and static potential describing the strongly triaxial and cuspy shape of Milky Way-sized haloes, diffusion due to chaotic mixing in the neighbourhood of the Sun does not efficiently erase phase space signatures of past accretion events. In this second paper we further explore the effect of chaotic mixing using multicomponent Galactic potential models and solar neighbourhood-like volumes extracted from fully cosmological hydrodynamic simulations, thus naturally accounting for the gravitational potential associated with baryonic components, such as the bulge and disc. Despite the strong change in the global Galactic potentials with respect to those obtained in dark-matter-only simulations, our results confirm that a large fraction of halo particles evolving on chaotic orbits exhibit their chaotic behaviour only after periods of time significantly longer than a Hubble time. In addition, significant diffusion in phase space is not observed for those particles that do exhibit chaotic behaviour within a Hubble time.
NASA Technical Reports Server (NTRS)
Shie, C.-L.; Tao, W.-K.; Hou, A.; Lin, X.
2006-01-01
The GCE (Goddard Cumulus Ensemble) model, developed and improved at NASA Goddard Space Flight Center over the past two decades, is considered one of the finest, state-of-the-art CRMs (Cloud Resolving Models) in the research community. As the chosen CRM for a NASA Interdisciplinary Science (IDS) Project, GCE has recently been upgraded to an MPI (Message Passing Interface) version, with great improvements in computational efficiency, scalability, and portability. Using the large-scale temperature and moisture advective forcing, as well as the temperature, water vapor, and wind fields obtained from TRMM (Tropical Rainfall Measuring Mission) field experiments such as SCSMEX (South China Sea Monsoon Experiment) and KWAJEX (Kwajalein Experiment), our recent 2D and 3D GCE simulations were able to capture detailed convective systems typical of the targeted (simulated) regions. The GEOS-3 [Goddard EOS (Earth Observing System) Version-3] reanalysis data have also been successfully implemented in the proposed long-term GCE simulations (aimed at producing a massive simulated cloud data set, a Cloud Library) to compensate for the scarcity of real field experimental data in both time and space (location). Preliminary 2D and 3D pilot results using GEOS-3 data have generally shown good qualitative agreement (with some quantitative differences) with the corresponding numerical results using the SCSMEX observations. The first objective of this paper is to assess the GEOS-3 data quality by comparing model results from several pairs of simulations using the real observations and the GEOS-3 reanalysis data. The different large-scale advective forcing obtained from these two sources (sounding observations and GEOS-3 reanalysis) is considered a major critical factor in producing the various model results. The second objective is therefore to investigate and present the impact of large-scale forcing on various modeled quantities (such as hydrometeors and rainfall). A third objective is to validate the overall 3D GCE model performance by comparing numerical results with sounding observations as well as available satellite retrievals.
Deep convolutional neural networks as strong gravitational lens detectors
NASA Astrophysics Data System (ADS)
Schaefer, C.; Geiger, M.; Kuntzer, T.; Kneib, J.-P.
2018-03-01
Context. Future large-scale surveys with high-resolution imaging will provide us with approximately 10^5 new strong galaxy-scale lenses. These strong-lensing systems will be contained in data volumes that are beyond the capacity of human experts to visually classify in an unbiased way. Aims. We present a new strong gravitational lens finder based on convolutional neural networks (CNNs). The method was applied to the strong-lensing challenge organized by the Bologna Lens Factory. It achieved first and third place, respectively, on the space-based and ground-based data sets. The goal was to find a fully automated lens finder for ground-based and space-based surveys that minimizes human inspection. Methods. We compared the results of our CNN architecture and three new variations ("invariant", "views", and "residual") on the simulated data of the challenge. Each method was trained separately five times on 17 000 simulated images, cross-validated using 3000 images, and then applied to a test set with 100 000 images. We used two different metrics for evaluation: the area under the receiver operating characteristic curve (AUC) score and the recall with no false positives (Recall0FP). Results. For ground-based data, our best method achieved an AUC score of 0.977 and a Recall0FP of 0.50. For space-based data, our best method achieved an AUC score of 0.940 and a Recall0FP of 0.32. Adding dihedral invariance to the CNN architecture diminished the overall score on space-based data but achieved a higher no-contamination recall. We found that using committees of five CNNs produced the best recall at zero contamination and consistently scored better AUC than a single CNN. Conclusions. We found that every variation of our CNN lens finder achieved AUC scores within 6% of 1. A deeper network did not outperform simpler CNN models either. This indicates that more complex networks are not needed to model the simulated lenses. To verify this, more realistic lens simulations with more lens-like structures (spiral galaxies or ring galaxies) are needed to compare the performance of deeper and shallower networks.
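The two evaluation metrics are straightforward to compute from classifier scores: AUC from the full ROC curve, and Recall0FP as the highest true-positive rate reached while the false-positive rate is still zero. A hedged sketch with synthetic scores (the data here are placeholders, not the challenge sets):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Toy classifier output: label 1 = lens, 0 = non-lens (placeholder data).
y_true = rng.integers(0, 2, size=5000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=5000), 0, 1)

auc = roc_auc_score(y_true, y_score)

# Recall at zero false positives: best TPR among ROC points with FPR == 0.
fpr, tpr, _ = roc_curve(y_true, y_score)
recall_0fp = tpr[fpr == 0].max()

print(f"AUC = {auc:.3f}, Recall0FP = {recall_0fp:.2f}")
```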
Li, Zhifei; Qin, Dongliang; Yang, Feng
2014-01-01
In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), its huge design space, a literature review of design space exploration was first conducted. Then, for design space exploration of an aerospace system of systems, a bilayer mapping method was put forward, based on existing experimental and operating data. Finally, the feasibility of the approach was demonstrated with an illustrative example. Using the data-mining techniques of rough sets theory (RST) and self-organizing maps (SOM), alternatives for the aerospace system-of-systems architecture were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space). Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.
Tang, M X; Zhang, Y Y; E, J C; Luo, S N
2018-05-01
Polychromatic synchrotron undulator X-ray sources are useful for ultrafast single-crystal diffraction under shock compression. Here, simulations of X-ray diffraction of shock-compressed single-crystal tantalum with realistic undulator sources are reported, based on large-scale molecular dynamics simulations. Purely elastic deformation, elastic-plastic two-wave structure, and severe plastic deformation under different impact velocities are explored, as well as an edge release case. Transmission-mode diffraction simulations consider crystallographic orientation, loading direction, incident beam direction, X-ray spectrum bandwidth and realistic detector size. Diffraction patterns and reciprocal space nodes are obtained from atomic configurations for different loading (elastic and plastic) and detection conditions, and interpretation of the diffraction patterns is discussed.
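Obtaining diffraction patterns from atomic configurations rests on evaluating the kinematic structure factor, the squared modulus of the summed phase factors over all atoms, at the reciprocal-space nodes of interest. The sketch below is a minimal illustration with a small ideal bcc tantalum block, not the paper's full polychromatic undulator simulation.

```python
import numpy as np

a = 3.304   # Ta bcc lattice parameter [angstrom]

# Build a small bcc block of tantalum atoms.
cells = 6
base = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])   # bcc basis (fractional)
ijk = np.array([(i, j, k) for i in range(cells)
                           for j in range(cells)
                           for k in range(cells)], dtype=float)
atoms = a * (ijk[:, None, :] + base[None, :, :]).reshape(-1, 3)

def intensity(q):
    """Kinematic diffraction intensity |sum_j exp(i q.r_j)|^2 at wavevector q."""
    amp = np.exp(1j * (atoms @ q)).sum()
    return (amp * amp.conjugate()).real

# The (110) Bragg condition for bcc (h+k+l even): q = 2*pi/a * (1, 1, 0).
q_bragg = (2 * np.pi / a) * np.array([1.0, 1.0, 0.0])
print("on Bragg:", intensity(q_bragg), " off Bragg:", intensity(1.05 * q_bragg))
```

In a shocked crystal the atomic positions come from the molecular dynamics snapshot instead of an ideal lattice, and the smearing of these reciprocal-space nodes encodes the elastic and plastic deformation.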
Stochastic annealing simulations of defect interactions among subcascades
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinisch, H.L.; Singh, B.N.
1997-04-01
The effects of the subcascade structure of high energy cascades on the temperature dependencies of annihilation, clustering and free defect production are investigated. The subcascade structure is simulated by closely spaced groups of lower energy MD cascades. The simulation results illustrate the strong influence of the defect configuration existing in the primary damage state on subsequent intracascade evolution. Other significant factors affecting the evolution of the defect distribution are the large differences in mobility and stability of vacancy and interstitial defects and the rapid one-dimensional diffusion of small, glissile interstitial loops produced directly in cascades. Annealing simulations are also performed on high-energy, subcascade-producing cascades generated with the binary collision approximation and calibrated to MD results.
Shao, Yu; Wang, Shumin
2016-12-01
The numerical simulation of acoustic scattering from elastic objects near a water-sand interface is critical to underwater target identification. Frequency-domain methods are computationally expensive, especially for large-scale broadband problems. A numerical technique is proposed to enable the efficient use of finite-difference time-domain method for broadband simulations. By incorporating a total-field/scattered-field boundary, the simulation domain is restricted inside a tightly bounded region. The incident field is further synthesized by the Fourier transform for both subcritical and supercritical incidences. Finally, the scattered far field is computed using a half-space Green's function. Numerical examples are further provided to demonstrate the accuracy and efficiency of the proposed technique.
Quantum Chess: Making Quantum Phenomena Accessible
NASA Astrophysics Data System (ADS)
Cantwell, Christopher
Quantum phenomena have remained largely inaccessible to the general public. There tends to be a scare factor associated with the word "quantum". This is in large part due to the alien nature of phenomena such as superposition and entanglement. However, quantum computing is a very active area of research and one day we will have games that run on quantum computers. Quantum phenomena such as superposition and entanglement will seem as normal as gravity. Is it possible to create such games today? Can we make games that are built on top of a realistic quantum simulation and introduce players of any background to quantum concepts in a fun and mentally stimulating way? One of the difficulties with any quantum simulation run on a classical computer is that the Hilbert space grows exponentially, making simulations of an appreciable size physically impossible, due largely to memory restrictions. Here we will discuss the conception and development of Quantum Chess, and how to overcome some of the difficulties faced. We can then ask the question, "What's next?" What are some of the difficulties Quantum Chess still faces, and what is the future of quantum games?
A new approach for data acquisition at the JPL space simulators
NASA Technical Reports Server (NTRS)
Fisher, Terry C.
1992-01-01
In 1990, a personal computer based data acquisition system was put into service for the Space Simulators and Environmental Test Laboratory at the Jet Propulsion Laboratory (JPL) in Pasadena, California. The new system replaced an outdated minicomputer system which had been in use since 1980. This new data acquisition system was designed and built by JPL for the specific task of acquiring thermal test data in support of space simulation and thermal vacuum testing at JPL. The data acquisition system was designed using powerful personal computers and local-area-network (LAN) technology. Reliability, expandability, and maintainability were some of the most important criteria in the design of the data system and in the selection of hardware and software components. The data acquisition system is used to record both test chamber operational data and thermal data from the unit under test. Tests are conducted in numerous small thermal vacuum chambers and in the large solar simulator and range in size from individual components using only 2 or 3 thermocouples to entire planetary spacecraft requiring in excess of 1200 channels of test data. The system supports several of these tests running concurrently. The previous data system is described along with reasons for its replacement, the types of data acquired, the new data system, and the benefits obtained from the new system including information on tests performed to date.
1D-3D hybrid modeling: from multi-compartment models to full resolution models in space and time.
Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M; Queisser, Gillian
2014-01-01
Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated, methods-spanning simulation approaches could, depending on the network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models, using e.g. the NEURON simulator, coupled to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry, membrane potential and intracellular concentration mapping framework, with which graph-based morphologies, e.g. in the swc or hoc format, are mapped to full surface and volume representations of the neuron, and computational data from 1D simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data based on general purpose 1D simulators can be directly coupled to the emerging field of fully resolved, highly detailed 3D modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics.
Pressure Oscillations and Structural Vibrations in Space Shuttle RSRM and ETM-3 Motors
NASA Technical Reports Server (NTRS)
Mason, D. R.; Morstadt, R. A.; Cannon, S. M.; Gross, E. G.; Nielsen, D. B.
2004-01-01
The complex interactions between internal motor pressure oscillations resulting from vortex shedding, the motor's internal acoustic modes, and the motor's structural vibration modes were assessed for the Space Shuttle four-segment booster Reusable Solid Rocket Motor and for the five-segment engineering test motor ETM-3. Two approaches were applied: 1) a predictive procedure based on numerically solving modal representations of a solid rocket motor's acoustic equations of motion, and 2) a computational fluid dynamics two-dimensional axisymmetric large eddy simulation at discrete motor burn times.
NASA Astrophysics Data System (ADS)
Couderc, F.; Duran, A.; Vila, J.-P.
2017-08-01
We present an explicit scheme for a two-dimensional multilayer shallow water model with density stratification, for general meshes and collocated variables. The proposed strategy is based on a regularized model in which the transport velocity in the advective fluxes is shifted proportionally to the pressure potential gradient. Using a similar strategy for the potential forces, we show the stability of the method in the sense of a discrete dissipation of the mechanical energy, in general multilayer and non-linear settings. These results are obtained at first order in space and time and extended using a second-order MUSCL extension in space and Heun's method in time. With the objective of minimizing diffusive losses in realistic contexts, sufficient conditions on the regularizing terms are exhibited to ensure the scheme's linear stability at first and second order in time and space. The other main result is consistency with the asymptotics reached at small and large time scales in low Froude regimes, which govern large-scale oceanic circulation. Additionally, robustness and well-balanced results for motionless steady states are also ensured. These stability properties provide a very robust and efficient approach, easy to implement and particularly well suited for large-scale simulations. Some numerical experiments are proposed to highlight the scheme's efficiency: an experiment with fast gravitational modes, a smooth surface wave propagation, an initial propagating surface water elevation jump over non-trivial topography, and a final experiment with slow Rossby modes simulating the displacement of a baroclinic vortex subject to the Coriolis force.
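The regularization idea, shifting the transport velocity against the gradient of the pressure potential, can be illustrated in the simplest possible setting: a 1D single-layer shallow water system on a periodic grid with forward-Euler time stepping. This is a hedged sketch of the mechanism only, not the authors' multilayer collocated scheme; the shift coefficient here plays the role of their regularizing term (too small a shift and the centered scheme is linearly unstable).

```python
import numpy as np

g, n, dx, dt = 9.81, 200, 1.0, 0.02
x = np.arange(n) * dx
h = 1.0 + 0.1 * np.exp(-((x - 100.0) / 10.0) ** 2)   # initial surface bump
u = np.zeros(n)

def grad(f):                       # periodic central difference
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def step(h, u, gamma=1.0):
    # Transport velocity shifted proportionally to the pressure potential
    # gradient g*grad(h); gamma large enough yields linear stability.
    u_t = u - gamma * dt * g * grad(h)
    h_new = h - dt * grad(h * u_t)                 # mass, conservative form
    u_new = u - dt * (u_t * grad(u) + g * grad(h)) # momentum
    return h_new, u_new

for _ in range(200):
    h, u = step(h, u)
print("total mass after 200 steps:", h.sum() * dx)  # conserved by construction
```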
A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network
NASA Astrophysics Data System (ADS)
Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.
A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low Earth orbit objects relies mainly on ground-based radar; given the limitations of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling (station layout) of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method is detection simulation of all possible stations against cataloged data, followed by a comprehensive comparative analysis of the simulation results using combinatorial methods, from which an optimal station layout is selected. A single simulation is time consuming and the combinatorial analysis is computationally complex; as the number of stations increases, the complexity of the optimization problem grows exponentially and the problem cannot be solved with the traditional method. No better approach has been available until now. In this paper, the target detection procedure is simplified. First, the space coverage of ground-based radar is simplified into a projection model of radar coverage at different orbit altitudes; then a simplified model of objects crossing the radar coverage is established according to the characteristics of orbital motion. After these two simplifications, the computational complexity of target detection is greatly reduced, and simulation results confirm the correctness of the simplified model. In addition, the detection areas of the radar network can be easily computed with the simplified model, and the layout of the network can then be optimized with an artificial intelligence algorithm, which greatly reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
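Once per-site coverage is precomputed with the simplified model, station selection becomes a set-cover-style optimization. The sketch below uses a greedy heuristic as a simple stand-in for the paper's artificial intelligence algorithm; the coverage matrix is randomly generated placeholder data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_objects = 30, 500

# coverage[i, j] = True if candidate radar site i can detect object j,
# as precomputed by the simplified coverage-projection model (placeholder).
coverage = rng.random((n_sites, n_objects)) < 0.08

def greedy_layout(coverage, n_radars=5):
    """Pick stations one at a time, each maximizing newly covered objects."""
    chosen, covered = [], np.zeros(coverage.shape[1], dtype=bool)
    for _ in range(n_radars):
        gains = (coverage & ~covered).sum(axis=1)   # objects each site adds
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered.mean()

sites, frac = greedy_layout(coverage)
print("chosen sites:", sites, " coverage fraction: %.2f" % frac)
```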
Neutral Buoyancy Simulator: MSFC-Langley joint test of large space structures component assembly:
NASA Technical Reports Server (NTRS)
1979-01-01
Once the United States' space program had progressed from Earth orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment had to be developed on Earth that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. With the help of the NBS, building a space station became more of a reality. In a joint venture between NASA/Langley Research Center in Hampton, VA, and MSFC, the Assembly Concept for Construction of Erectable Space Structures (ACCESS) was developed and demonstrated at MSFC's NBS. The primary objective of this experiment was to test the ACCESS structural assembly concept for suitability as the framework for larger space structures and to identify ways to improve the productivity of space construction. Pictured is a demonstration of ACCESS.
Perspective: Simulation and transformational change: the paradox of expertise.
Kneebone, Roger
2009-07-01
Simulation is widely seen as a space where procedural skills can be practiced in safety, free from the pressures and complexities of clinical care. Central to this approach is the notion of simplification, a stripping down of skills into their component parts. Yet the definition of simplicity is contestable, often determined by experts without reference to those they teach. The author uses the ha-ha, a hidden ditch around a large country house used by 18th-century English landscape gardeners to create the illusion that the house is surrounded by untamed nature, as a metaphor for the differing perspectives of expert and novice. The author proposes that this difference of perspective lies at the heart of many current problems with simulation and simulators. This article challenges the philosophy of simplification, arguing that procedural skills should not be divorced from their clinical context and that oversimplification of a complex process can interfere with deep understanding. The author draws on Meyer and Land's notions of threshold concepts and troublesome knowledge, and on his own experience with patient-focused simulation, to propose an alternative view of simulation, framing it as a safe space which can reflect the uncertainties of clinical practice and recreate the conditions of real-world learning. By reintroducing complexity and human unpredictability, simulation can provide a safe environment for assisting the transformational change that is essential to becoming a competent clinician.
Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.
Dematté, Lorenzo
2012-01-01
Space is a very important aspect of the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transport phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology, to be able to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
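The per-molecule structure of these steps is what makes them GPU-friendly: every molecule's Brownian displacement is independent, so the update maps naturally to one thread per molecule. A hedged numpy sketch, with vectorization standing in for the GPU kernels and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
Dc, dt, sigma_b = 1e-12, 1e-4, 5e-9   # diffusion coeff [m^2/s], timestep [s], binding radius [m]
A = rng.uniform(0, 1e-6, size=(1000, 3))   # species A positions in a 1 um box
B = rng.uniform(0, 1e-6, size=(1000, 3))   # species B positions

# Brownian step: every molecule moves independently, so the update is one
# vectorized operation (on a GPU, one thread per molecule).
s = np.sqrt(2 * Dc * dt)
A += rng.normal(0, s, A.shape)
B += rng.normal(0, s, B.shape)

# Bimolecular reaction check: A+B react when closer than the binding radius.
# O(N^2) for clarity; production codes use neighbor lists or cell partitioning.
d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
reacted = np.argwhere(d2 < sigma_b**2)
print("reacting A-B pairs this step:", len(reacted))
```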
NASA Technical Reports Server (NTRS)
Pisaich, Gregory; Flueckiger, Lorenzo; Neukom, Christian; Wagner, Mike; Buchanan, Eric; Plice, Laura
2007-01-01
The Mission Simulation Toolkit (MST) is a flexible software system for autonomy research. It was developed as part of the Mission Simulation Facility (MSF) project, started in 2001 to facilitate the development of autonomous planetary robotic missions. Autonomy is a key enabling factor for robotic exploration. There has been a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The MST bridges this gap by providing a simulation framework and a suite of tools for supporting research and maturation of autonomy. MST uses a distributed framework based on the High Level Architecture (HLA) standard. A key feature of the MST framework is the ability to plug in new models to replace existing ones with the same services. This enables significant simulation flexibility, particularly the mixing and control of fidelity level. In addition, the MST provides automatic code generation from robot interfaces defined with the Unified Modeling Language (UML), methods for maintaining synchronization across distributed simulation systems, XML-based robot description, and an environment server. Finally, the MST supports a number of third-party products, including dynamic models and terrain databases. Although the communication objects and some of the simulation components provided with this toolkit are specifically designed for terrestrial surface rovers, the MST can be applied to any other domain, such as aerial, aquatic, or space.
Handling qualities criteria for the space shuttle orbiter during the terminal phase of flight
NASA Technical Reports Server (NTRS)
Stapleford, R. L.; Klein, R. H.; Hob, R. H.
1972-01-01
It was found that large portions of the military handling qualities specification are directly applicable. However, a number of additional and substitute criteria are recommended for areas not covered or inadequately covered in the military specification. Supporting pilot/vehicle analyses and simulation experiments were conducted and are described. Results are also presented of analytical and simulator evaluations of three specific interim Orbiter designs, which provided a test of the proposed handling qualities criteria. The correlations between the analytical and experimental evaluations were generally excellent.
Effects of cosmic rays on single event upsets
NASA Technical Reports Server (NTRS)
Venable, D. D.; Zajic, V.; Lowe, C. W.; Olidapupo, A.; Fogarty, T. N.
1989-01-01
Assistance was provided to the Brookhaven Single Event Upset (SEU) Test Facility. Computer codes were developed for fragmentation and secondary radiation affecting Very Large Scale Integration (VLSI) circuits in space. A computer-controlled CV (HP4192) test was developed for Terman analysis. Also developed were high-speed parametric tests that are independent of operator judgment and a charge-pumping technique for measurement of D_it(E). X-ray secondary effects and parametric degradation as a function of dose rate were simulated. The SPICE simulation of static RAMs with various resistor filters was tested.
Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin
2017-06-01
We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.
NAS (Numerical Aerodynamic Simulation Program) technical summaries, March 1989 - February 1990
NASA Technical Reports Server (NTRS)
1990-01-01
Given here are selected scientific results from the Numerical Aerodynamic Simulation (NAS) Program's third year of operation. During this year, the scientific community was given access to a Cray-2 and a Cray Y-MP supercomputer. Topics covered include flow field analysis of fighter wing configurations, large-scale ocean modeling, the Space Shuttle flow field, advanced computational fluid dynamics (CFD) codes for rotary-wing airloads and performance prediction, turbulence modeling of separated flows, airloads and acoustics of rotorcraft, vortex-induced nonlinearities on submarines, and standing oblique detonation waves.
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
Human productivity was studied for extravehicular tasks performed in microgravity, particularly in-space assembly of truss structures and other large objects. Human factors research probed the anthropometric constraints imposed on microgravity task performance and the associated workstation design requirements. Anthropometric experiments included reach envelope tests conducted using the 3-D Acoustic Positioning System (3DAPS), which permitted measuring the range of reach possible for persons using foot restraints in neutral buoyancy, both with and without space suits. Much neutral buoyancy research was conducted using the support of water to simulate the weightless environment of space. It became clear over time that the anticipated EVA requirements associated with the Space Station and with in-space construction of interplanetary probes would heavily burden astronauts, and remotely operated robots (teleoperators) were increasingly considered to absorb the workload. Experience in human EVA productivity led naturally to teleoperation research into the remote performance of tasks through human-controlled robots.
NASA Technical Reports Server (NTRS)
Morgenthaler, George W.
1989-01-01
The ability to launch on time and to send payloads into space has progressed dramatically since the days of the earliest missile and space programs. Causes for delay during launch, i.e., unplanned 'holds', are attributable to several sources: weather, range activities, vehicle conditions, human performance, etc. Recent developments in space programs, particularly the need for highly reliable logistic support of space construction, the subsequent planned operation of space stations, large unmanned space structures, and lunar and Mars bases, and the necessity of providing 'guaranteed' commercial launches, have placed increased emphasis on understanding and mastering every aspect of launch vehicle operations. The Center for Space Construction has acquired historical launch vehicle data and is applying these data to the analysis of space launch vehicle logistic support of space construction. This analysis includes development of a better understanding of launch-on-time capability and simulation of the support systems required for vehicle assembly and launch that are necessary to support national space program construction schedules. In this paper, the author presents actual launch data on unscheduled 'hold' distributions of various launch vehicles. The data have been supplied by industrial associate companies of the Center for Space Construction. The paper seeks to determine suitable probability models that describe these historical data and that can be used for several purposes, such as providing inputs to broader simulations of launch vehicle logistic space construction support processes and determining which launch operations sources cause the majority of the unscheduled 'holds', and hence suggesting changes which might improve launch-on-time performance. In particular, the paper investigates the ability of a compound distribution probability model to fit actual data, versus alternative models, and recommends the most productive avenues for future statistical work.
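The model-selection exercise described, fitting candidate probability distributions to hold data and comparing their goodness of fit, can be sketched with standard tools. The hold durations below are synthetic placeholders, not the Center's launch records, and the candidate set is illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Placeholder unscheduled-hold durations in minutes.
holds = rng.lognormal(mean=3.0, sigma=0.8, size=200)

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(holds, floc=0)           # fix location at zero
    ll = dist.logpdf(holds, *params).sum()     # log-likelihood of the fit
    k = len(params) - 1                        # free parameters (loc fixed)
    print(f"{name:10s} logL = {ll:8.1f}  AIC = {2*k - 2*ll:8.1f}")
```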
Andrianakis, Ioannis; Vernon, Ian R.; McCreesh, Nicky; McKinley, Trevelyan J.; Oakley, Jeremy E.; Nsubuga, Rebecca N.; Goldstein, Michael; White, Richard G.
2015-01-01
Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22 input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was times smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs. PMID:25569850
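The core computation in history matching is the implausibility measure: for an observed value z and emulator prediction with mean E[f(x)] and standard deviation sd(x), I(x) = |z - E[f(x)]| / sqrt(sd(x)^2 + Var_obs), with points above a cutoff (commonly 3) discarded. A hedged sketch using a toy one-input simulator and a scikit-learn Gaussian process as the emulator; the simulator, datum, and variances are illustrative, not the HIV model's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):                  # toy stand-in for an expensive model
    return np.sin(3 * x) + 0.5 * x

# Train an emulator on a handful of simulator runs.
X_train = np.linspace(0, 5, 12)[:, None]
y_train = simulator(X_train).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_train, y_train)

z, var_obs = 1.2, 0.05 ** 2        # observed value and its variance (illustrative)
X = np.linspace(0, 5, 500)[:, None]
mean, sd = gp.predict(X, return_std=True)

# Implausibility: standardized distance between emulator mean and the datum.
impl = np.abs(z - mean) / np.sqrt(sd**2 + var_obs)
non_implausible = X[impl < 3.0]    # the usual 3-sigma cutoff
print("non-implausible fraction of input space:", len(non_implausible) / len(X))
```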
Rare event simulation in radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollman, Craig
1993-10-01
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is, with overwhelming probability, equal to zero. These problems often have high dimensional state spaces and irregular geometries, so analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage transported to a particular location. If the area is well shielded, the probability of any one particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to the choice of the new transition probabilities. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive 'learning' algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
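The likelihood-ratio correction is easy to demonstrate in one dimension: estimate the probability that a drifting random walk ends above a rare threshold by simulating under tilted step probabilities and reweighting each path. A minimal sketch with illustrative parameters (not the dissertation's neutron model):

```python
import numpy as np

rng = np.random.default_rng(3)
n, b = 50, 20                 # walk length, rare endpoint threshold
p = 0.3                       # true P(step = +1): the walk drifts away from b

def estimate(p_sim, n_samples=100_000):
    """Simulate +-1 steps with probability p_sim, reweight by likelihood ratio."""
    steps = rng.random((n_samples, n)) < p_sim      # True means a +1 step
    k = steps.sum(axis=1)                           # number of +1 steps per path
    hit = (2 * k - n) >= b                          # endpoint S_n = 2k - n
    # Likelihood ratio of a whole path depends only on k:
    lr = (p / p_sim) ** k * ((1 - p) / (1 - p_sim)) ** (n - k)
    return (hit * lr).mean()                        # unbiased for P(S_n >= b)

print("naive      (simulate at p=0.3):", estimate(0.3))   # almost never hits
print("importance (simulate at p=0.7):", estimate(0.7))   # tilted toward the event
```

Tilting the step probability makes the rare event common in simulation, while the likelihood ratio restores the correct expectation; the variance reduction can be many orders of magnitude.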
Aeroacoustics of Space Vehicles
NASA Technical Reports Server (NTRS)
Panda, Jayanta
2014-01-01
While for airplanes the subject of aeroacoustics is associated with community noise, for space vehicles it is associated with vibro-acoustics and structural dynamics. Surface pressure fluctuations encountered during launch and travel through the lower part of the atmosphere create an intense vibro-acoustic environment for the payload, electronics, navigational equipment, and a large number of subsystems, all of which have to be designed and tested for flight certification. This presentation will cover the three major sources encountered in manned and unmanned space vehicles: launch acoustics, ascent acoustics, and abort acoustics. Launch pads employ elaborate acoustic suppression systems to mitigate the ignition pressure waves and rocket plume generated noise during the early part of liftoff. Recently we have used large microphone arrays to identify the noise sources during liftoff and found the standard model by Eldred and Jones (NASA SP-8072) to be grossly inadequate. As the vehicle speeds up and reaches transonic speed in the relatively denser part of the atmosphere, various shock waves and flow separation events create unsteady pressure fluctuations that can lead to a high vibration environment and occasional coupling with the structural modes, which may lead to buffet. Examples of wind tunnel tests and computational simulations to optimize the outer mold line and to quantify and reduce the surface pressure fluctuations will be presented. Finally, a manned space vehicle needs to be designed for crew safety during malfunctions of the primary rocket vehicle. This brings up the subject of the acoustic environment during abort. For NASA's Multi-Purpose Crew Vehicle (MPCV), abort will be performed by lighting rocket motors atop the crew module. The severe aeroacoustic environments during various abort scenarios were measured for the first time by using hot helium to simulate rocket plumes in the Ames Unitary Plan wind tunnels. Various considerations used for the helium simulation and the final confirmation from a flight test will be presented.
NASA Has Joined America True's Design Mission for 2000
NASA Technical Reports Server (NTRS)
Steele, Gynelle C.
1999-01-01
Engineers at the NASA Lewis Research Center will support the America True design team led by America's Cup innovator Phil Kaiko. The joint effort between NASA and America True is encouraged by Mission HOME, the official public awareness campaign of the U.S. space community. NASA Lewis and America True have entered into a Space Act Agreement to focus on the interaction between the airfoil and the large deformation of the pretensioned sails and rigs, along with the dynamic motions related to the boat motions. This work will require a coupled fluid and structural simulation. Included in the simulation will be both a steady-state capability, to capture the quasi-steady interactions between the air loads and sail geometry and the lift and drag on the boat, and a transient capability, to capture the sail/mast pumping effects resulting from hull motions.
Development of an automated electrical power subsystem testbed for large spacecraft
NASA Technical Reports Server (NTRS)
Hall, David K.; Lollar, Louis F.
1990-01-01
The NASA Marshall Space Flight Center (MSFC) has developed two autonomous electrical power system breadboards. The first breadboard, the autonomously managed power system (AMPS), is a two power channel system featuring energy generation and storage and 24-kW of switchable loads, all under computer control. The second breadboard, the space station module/power management and distribution (SSM/PMAD) testbed, is a two-bus 120-Vdc model of the Space Station power subsystem featuring smart switchgear and multiple knowledge-based control systems. NASA/MSFC is combining these two breadboards to form a complete autonomous source-to-load power system called the large autonomous spacecraft electrical power system (LASEPS). LASEPS is a high-power, intelligent, physical electrical power system testbed which can be used to derive and test new power system control techniques, new power switching components, and new energy storage elements in a more accurate and realistic fashion. LASEPS has the potential to be interfaced with other spacecraft subsystem breadboards in order to simulate an entire space vehicle. The two individual systems, the combined systems (hardware and software), and the current and future uses of LASEPS are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Arnold, Lutz Decker, D. Howe, J. Urbin, Jonathan Homan, Carl Reis, J. Creel, V. Ganni, P. Knudsen, A. Sidi-Yekhlef
The James Webb Telescope is the successor to the Hubble Telescope and will be placed in an orbit 1.5 million km from Earth. Before launch in 2014, the telescope will be tested in NASA Johnson Space Center's (JSC) space simulation chamber, Chamber A. The tests will be conducted at deep space conditions. Chamber A's helium cryo-panels are currently cooled down to 20 K by two Linde 3.5 kW helium refrigerators. The new 12.5 kW, 20-K helium coldbox described in this paper is part of the upgrade to the chamber systems for this large test program. The Linde coldbox will provide refrigeration in several operating modes in which the temperature of the chamber is controlled with high accuracy, owing to the demanding NASA test requirements. The implementation of two parallel expansion turbine strings and the Ganni cycle (Floating Pressure) process results in a highly efficient and flexible process that minimizes the electrical input power. This paper will describe the collaboration and execution of the coldbox project.
Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand
2018-01-01
The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very close angular separation (a 10^10 star-to-planet flux ratio at a few 0.1"). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as those studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. This analytical model can be applied to both static and dynamic modes, in either monochromatic or broadband light. It removes the need for the end-to-end Monte Carlo simulations that are otherwise needed to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing errors and aberrations.
Ultracool dwarf benchmarks with Gaia primaries
NASA Astrophysics Data System (ADS)
Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.
2017-10-01
We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ˜24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability <10-4, based on projected separation and expected constraints on common distance, common proper motion and/or common radial velocity. Within this bulk population, we identify smaller target subsets of rarer systems whose collective properties still span the full parameter space of the population, as well as systems containing primary stars that are good age calibrators. Our simulation analysis leads to a series of recommendations for candidate selection and observational follow-up that could identify ˜500 diverse Gaia benchmarks. As a test of the veracity of our methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.
Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach
NASA Technical Reports Server (NTRS)
Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan
2015-01-01
This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space-track object catalog in LEO. We then use a high-precision propagator to propagate all objects over the entire simulation duration. If collisions are detected, the appropriate number of debris objects is created and inserted into the simulation framework. Depending on the scenario, further objects, e.g. due to new launches, can be added. At the end of the simulation, the total number of objects above a cut-off size and the number of detected collisions provide benchmark parameters for the comparison between scenarios. The simulation approach is computationally intensive as it involves tens of thousands of objects; hence we use a highly parallel approach employing up to a thousand cores on the NASA Pleiades supercomputer for a single run. This paper describes our simulation approach, the status of its implementation, the approach to developing scenarios, and examples of first test runs.
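As a rough sketch of the screening loop such a propagate-and-check simulation implies (the straight-line propagator, threshold, and step sizes below are illustrative assumptions, not the authors' code), a coarse pass over all objects can be refined to one-second substeps only for flagged pairs:

```python
import numpy as np

def screen_conjunctions(pos, vel, t_end, coarse_dt=60.0, fine_dt=1.0, thresh=5e3):
    """Two-level time stepping: coarse steps for every object, then a 1-s
    re-screen of flagged pairs. Straight-line motion stands in for the
    high-precision propagator used in the real simulation."""
    events = []
    for k in range(1, int(t_end / coarse_dt) + 1):
        p = pos + vel * (k * coarse_dt)                      # coarse positions
        d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
        for a, b in zip(*np.where(np.triu(d < thresh, k=1))):
            # refine the flagged pair over the preceding coarse interval
            for t in np.arange((k - 1) * coarse_dt, k * coarse_dt, fine_dt):
                dd = np.linalg.norm((pos[a] - pos[b]) + (vel[a] - vel[b]) * t)
                if dd < thresh:
                    events.append((t, a, b, dd))
    return events

rng = np.random.default_rng(0)
print(len(screen_conjunctions(rng.normal(0, 1e5, (50, 3)),
                              rng.normal(0, 1.0, (50, 3)), t_end=600.0)))
```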
Synergia: an accelerator modeling tool with 3-D space charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amundson, James F.; Spentzouris, P.
2004-07-01
High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three-dimensional space-charge capabilities and a higher-order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.
Stabilization of exact nonlinear Timoshenko beams in space by boundary feedback
NASA Astrophysics Data System (ADS)
Do, K. D.
2018-05-01
Boundary feedback controllers are designed to stabilize Timoshenko beams with large translational and rotational motions in space under external disturbances. The exact nonlinear partial differential equations governing the motion of the beams are derived and used in the control design. The designed controllers guarantee global practical asymptotic (and local practical exponential) stability of the beam motions at the reference state. The control design, well-posedness, and stability analysis are based on various relationships between the earth-fixed and body-fixed coordinates, Sobolev embeddings, and a Lyapunov-type theorem developed to study well-posedness and stability for a class of evolution systems in Hilbert space. Simulation results are included to illustrate the effectiveness of the proposed control design.
Mesoscale Dynamical Regimes in the Midlatitudes
NASA Astrophysics Data System (ADS)
Craig, G. C.; Selz, T.
2018-01-01
The atmospheric mesoscales are characterized by a complex variety of meteorological phenomena that defy simple classification. Here a full space-time spectral analysis is carried out, based on a 7-day convection-permitting simulation of springtime midlatitude weather on a large domain. The kinetic energy is largest at synoptic scales, and on the mesoscale it is largely confined to an "advective band" where space and time scales are related by a constant of proportionality which corresponds to a velocity scale of about 10 m s^-1. Computing the relative magnitude of different terms in the governing equations allows the identification of five dynamical regimes. These are tentatively identified as quasi-geostrophic flow, propagating gravity waves, stationary gravity waves related to orography, acoustic modes, and a weak temperature gradient regime, where vertical motions are forced by diabatic heating.
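Stated as a formula (our notation, not the authors'), the advective band ties the time scale of a disturbance of size L to a single advection speed:

```latex
\tau \;\approx\; \frac{L}{U}, \qquad U \approx 10\ \mathrm{m\,s^{-1}},
```

so, for example, a 100-km mesoscale feature would have a characteristic time of about 10^4 s, roughly three hours.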
Adaptive sampling strategies with high-throughput molecular dynamics
NASA Astrophysics Data System (ADS)
Clementi, Cecilia
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high-dimensional dynamical systems and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
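A minimal sketch of one common adaptive-seeding rule ("least counts"), which captures the spirit of redirecting resources toward under-sampled regions; it is an illustration of the general idea, not necessarily the specific strategy developed here:

```python
import numpy as np

def pick_restart_states(state_counts, n_new):
    """Least-counts seeding: restart new short trajectories from the
    conformational states visited least often so far."""
    return np.argsort(state_counts)[:n_new]

# Toy visit counts per discretized state, e.g. from clustering prior trajectories.
counts = np.array([120, 3, 45, 1, 77, 9])
print(pick_restart_states(counts, n_new=2))   # -> seeds from states 3 and 1
```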
2002 Computing and Interdisciplinary Systems Office Review and Planning Meeting
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Gregory; Lopez, Isaac; Veres, Joseph; Lavelle, Thomas; Sehra, Arun; Freeh, Josh; Hah, Chunill
2003-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with NASA Glenn's Propulsion program, NASA Ames, industry, academia, and other government agencies. Large-scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This year's review meeting describes the current status of the NPSS and the Object-Oriented Development Kit, with specific emphasis on the progress made over the past year on air-breathing propulsion applications for aeronautics and space transportation applications. Major accomplishments include the first 3-D simulation of the primary flow path of a large turbofan engine in less than 15 hours, and the formal release of NPSS Version 1.5, which includes elements of rocket engine systems and a visual-based syntax layer. NPSS and the Development Kit are managed by the Computing and Interdisciplinary Systems Office (CISO) at the NASA Glenn Research Center and were financially supported in fiscal year 2002 by the Computing, Networking and Information Systems (CNIS) project managed at NASA Ames, the Glenn Aerospace Propulsion and Power Program, and the Advanced Space Transportation Program.
Viewpoints: Interactive Exploration of Large Multivariate Earth and Space Science Data Sets
NASA Astrophysics Data System (ADS)
Levit, C.; Gazis, P. R.
2006-05-01
Analysis and visualization of extremely large and complex data sets may be one of the most significant challenges facing earth and space science investigators in the forthcoming decades. While advances in hardware speed and storage technology have roughly kept up with (indeed, have driven) increases in database size, the same is not true of our abilities to manage the complexity of these data. Current missions, instruments, and simulations produce so much data of such high dimensionality that they outstrip the capabilities of traditional visualization and analysis software. This problem can only be expected to get worse as data volumes increase by orders of magnitude in future missions and in ever-larger supercomputer simulations. For large multivariate data (more than 10^5 samples or records with more than 5 variables per sample), the interactive graphics response of most existing statistical analysis, machine learning, exploratory data analysis, and/or visualization tools such as Torch, MLC++, Matlab, S++/R, and IDL stutters, stalls, or stops working altogether. Fortunately, the graphics processing units (GPUs) built into all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source cross-platform application which leverages much of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application is the interactive analysis of large, complex, multivariate data sets, with dimensionalities that may surpass 100 and sample sizes that may exceed 10^6-10^8.
Technology transfer of operator-in-the-loop simulation
NASA Technical Reports Server (NTRS)
Yae, K. H.; Lin, H. C.; Lin, T. C.; Frisch, H. P.
1994-01-01
The technology developed for operator-in-the-loop simulation in space teleoperation has been applied to Caterpillar's backhoe, wheel loader, and off-highway truck. On an SGI workstation, the simulation integrates computer modeling of kinematics and dynamics, real-time computation and visualization, and an interface with the operator through the operator's console. The console is interfaced with the workstation through an IBM-PC in which the operator's commands are digitized and sent through an RS-232 serial port. The simulation gave visual feedback adequate for the operator in the loop, with the camera's field of vision projected on a large screen in multiple view windows. The view control can emulate either stationary or moving cameras. This simulator created an innovative engineering design environment by integrating computer software and hardware with the human operator's interactions. The backhoe simulation has been adopted by Caterpillar in building a virtual reality tool for backhoe design.
Honeycomblike large area LaB6 plasma source for Multi-Purpose Plasma facility
NASA Astrophysics Data System (ADS)
Woo, Hyun-Jong; Chung, Kyu-Sun; You, Hyun-Jong; Lee, Myoung-Jae; Lho, Taihyeop; Choh, Kwon Kook; Yoon, Jung-Sik; Jung, Yong Ho; Lee, Bongju; Yoo, Suk Jae; Kwon, Myeon
2007-10-01
A Multi-Purpose Plasma (MP2) facility has been renovated from the Hanbit mirror device [Kwon et al., Nucl. Fusion 43, 686 (2003)] by adopting the same philosophy as the diversified plasma simulator (DiPS) [Chung et al., Contrib. Plasma Phys. 46, 354 (2006)]: installing two plasma sources, LaB6 (dc) and helicon (rf), and creating three distinct simulators: a divertor plasma simulator, a space propulsion simulator, and an astrophysics simulator. During the first renovation stage, a honeycomblike large area LaB6 (HLA-LaB6) cathode was developed for the divertor plasma simulator to improve resistance against thermal-shock fragility for large, high-density plasma generation. The HLA-LaB6 cathode is composed of one inner cathode with a 4 in. diameter and six outer cathodes with 2 in. diameters, along with separate graphite heaters. The first plasma is generated with Ar gas and its properties are measured by electric probes at various discharge currents and magnetic field configurations. The plasma density at the middle of the central cell reaches up to 2.6×10^12 cm^-3, while the electron temperature remains around 3-3.5 eV at discharge currents of less than 45 A and a magnetic field intensity of 870 G. The unique features of the heater electrical properties and the plasma density profiles are explained by comparison with those of a single LaB6 cathode with a 4 in. diameter in DiPS.
DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation
Sherfey, Jason S.; Soplata, Austin E.; Ardid, Salva; Roberts, Erik A.; Stanley, David A.; Pittman-Polletta, Benjamin R.; Kopell, Nancy J.
2018-01-01
DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community. PMID:29599715
Neutral Buoyancy Simulator: MSFC-Langley joint test of large space structures component assembly
NASA Technical Reports Server (NTRS)
1978-01-01
Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment had to be developed on Earth that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Another facet of the space station would be the electrical connectors used for powering the tools the astronauts would need for construction, maintenance, and repairs. Shown is an astronaut training during an underwater electrical connector test in the NBS.
Neutral Buoyancy Simulator-NB32-Assembly of Large Space Structure
NASA Technical Reports Server (NTRS)
1980-01-01
Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. Construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment had to be developed on Earth that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA's Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Pictured is a Massachusetts Institute of Technology (MIT) student working in a spacesuit on the Experimental Assembly of Structures in Extravehicular Activity (EASE) project, which was developed as a joint effort between MSFC and MIT. The EASE experiment required that crew members assemble small components to form larger components, working from the payload bay of the space shuttle. The MIT student in this photo is assembling two six-beam tetrahedrons.
A semi-structured MODFLOW-USG model to evaluate local water sources to wells for decision support
Feinstein, Daniel T.; Fienen, Michael N.; Reeves, Howard W.; Langevin, Christian D.
2016-01-01
In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A “semi-structured” approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface water features are located, overlying coarser resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, whereas the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area; see Fienen et al. (2015a).
Large Eddy Simulation of Heat Entrainment Under Arctic Sea Ice
NASA Astrophysics Data System (ADS)
Ramudu, Eshwan; Gelderloos, Renske; Yang, Di; Meneveau, Charles; Gnanadesikan, Anand
2018-01-01
Arctic sea ice has declined rapidly in recent decades. The faster-than-projected retreat suggests that free-running large-scale climate models may not be accurately representing some key processes. The small-scale turbulent entrainment of heat from the mixed layer could be one such process. To better understand this mechanism, we model the Arctic Ocean's Canada Basin, which is characterized by a perennial anomalously warm Pacific Summer Water (PSW) layer residing at the base of the mixed layer and a summertime Near-Surface Temperature Maximum (NSTM) within the mixed layer trapping heat from solar radiation. We use large eddy simulation (LES) to investigate heat entrainment for different ice-drift velocities and different initial temperature profiles. The value of LES is that the resolved turbulent fluxes are greater than the subgrid-scale fluxes for most of our parameter space. The results show that the presence of the NSTM enhances heat entrainment from the mixed layer. Additionally, no PSW heat is entrained within the parameter space considered. We propose a scaling law for the ocean-to-ice heat flux which depends on the initial temperature anomaly in the NSTM layer and the ice-drift velocity. A case study of "The Great Arctic Cyclone of 2012" gives a turbulent heat flux from the mixed layer that is approximately 70% of the total ocean-to-ice heat flux estimated from the PIOMAS model often used for short-term predictions. Present results highlight the need for large-scale climate models to account for the NSTM layer.
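The abstract does not reproduce the proposed scaling law itself, but bulk formulas of the standard McPhee form convey the expected dependence on drift speed and temperature anomaly (the form and symbols below are generic, not the authors' fit):

```latex
F_H \;\approx\; \rho_w\, c_p\, c_H\, u_{\mathrm{ice}}\, \Delta T_{\mathrm{NSTM}},
```

where ρ_w and c_p are the density and specific heat of seawater, c_H is a turbulent transfer coefficient, u_ice is the ice-drift speed, and ΔT_NSTM is the initial temperature anomaly of the NSTM layer.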
Modeling the Structure and Dynamics of Dwarf Spheroidal Galaxies with Dark Matter and Tides
NASA Astrophysics Data System (ADS)
Muñoz, Ricardo R.; Majewski, Steven R.; Johnston, Kathryn V.
2008-05-01
We report the results of N-body simulations of disrupting satellites aimed at exploring whether the observed features of dSphs can be accounted for with simple, mass-follows-light (MFL) models including tidal disruption. As a test case, we focus on the Carina dwarf spheroidal (dSph), which presently is the dSph system with the most extensive data at large radius. We find that previous N-body, MFL simulations of dSphs did not sufficiently explore the parameter space of satellite mass, density, and orbital shape to find adequate matches to Galactic dSph systems, whereas with a systematic survey of parameter space we are able to find tidally disrupting, MFL satellite models that rather faithfully reproduce Carina's velocity profile, velocity dispersion profile, and projected density distribution over its entire sampled radius. The successful MFL model satellites have very eccentric orbits, currently favored by CDM models, and central velocity dispersions that still yield an accurate representation of the bound mass and observed central M/L ~ 40 of Carina, despite inflation of the velocity dispersion outside the dSph core by unbound debris. Our survey of parameter space also allows us to address a number of commonly held misperceptions of tidal disruption and its observable effects on dSph structure and dynamics. The simulations suggest that even modest tidal disruption can have a profound effect on the observed dynamics of dSph stars at large radii. Satellites that are well described by tidally disrupting MFL models could still be fully compatible with ΛCDM if, for example, they represent a later stage in the evolution of luminous subhalos.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Florinski, V.; Heerikhuisen, J.; Niemiec, J.
2016-08-01
The nearly circular band of energetic neutral atom emission dominating the field of view of the Interstellar Boundary Explorer (IBEX) satellite is most commonly attributed to the effect of charge exchange of secondary pickup ions (PUIs) gyrating about the magnetic field in the outer heliosheath and the interstellar space beyond. Several models for the PUI dynamics of this mechanism have been proposed, each requiring either strong or weak scattering of the initial pitch angle. Conventional wisdom states that ring distributions tend to generate waves and scatter onto a shell on timescales too short for charge exchange to occur. We performed a careful study of ring and thin-shell proton distribution stability using theoretical tools and hybrid plasma simulations. We show that the kinetic behavior of a freshly injected proton ring is a far more complicated process than previously thought. In the presence of a warm Maxwellian core, narrower rings could be more stable than broader toroidal distributions. The scattered rings possess a fine structure that can only be revealed using very large numbers of macroparticles in a simulation. It is demonstrated that a “stability gap” in ring temperature exists where the protons could retain large gyrating anisotropies for years, and the wave activity could remain below the level of the ambient magnetic fluctuations in interstellar space. In the directions away from the ribbon, however, a partial shell distribution is more likely to be unstable, leading to significant scattering into one hemisphere in velocity space. The process is accompanied by turbulence production, which is puzzling given the very low level of magnetic fluctuations measured in the outer heliosheath by Voyager 1.
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
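In practice the decomposition amounts to replacing one end-to-end transport run with a chain of precomputed response matrices over energy bins; a schematic sketch (the matrices here are random stand-ins for real Green's functions, and all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_e = 64                                     # photon energy bins (illustrative)

# Hypothetical precomputed Green's functions: G[i, j] = photons emerging in
# outgoing bin i per source photon in incoming bin j, one matrix per component.
G_cargo = np.abs(rng.normal(size=(n_e, n_e))) * 1e-3
G_detector = np.abs(rng.normal(size=(n_e, n_e))) * 1e-2

source = np.zeros(n_e)
source[40] = 1e6                             # monoenergetic source term

# Chaining independently simulated components is just matrix-vector products,
# so many source/cargo/detector combinations reuse the same transport runs.
detector_spectrum = G_detector @ (G_cargo @ source)
print(detector_spectrum[:5])
```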
Ionizing Radiation Environments and Exposure Risks
NASA Astrophysics Data System (ADS)
Kim, M. H. Y.
2015-12-01
Space radiation environments for historically large solar particle events (SPE) and galactic cosmic rays (GCR) are simulated to characterize exposures to radio-sensitive organs for missions to low-Earth orbit (LEO), the Moon, near-Earth asteroids, and Mars. Primary and secondary particles for SPE and GCR are transported through the respective atmospheres of Earth or Mars, the space vehicle, and the astronaut's body tissues using NASA's HZETRN/QMSFRG computer code. Space radiation protection methods, which are derived largely from ground-based methods recommended by the National Council on Radiation Protection and Measurements (NCRP) or the International Commission on Radiological Protection (ICRP), are built on the principles of risk justification, limitation, and ALARA (as low as reasonably achievable). However, because of the large uncertainties in high charge and energy (HZE) particle radiobiology and the small population of space crews, NASA has developed distinct methods to implement a space radiation protection program. For fatal cancer risks, which have been considered the dominant risk for GCR, the NASA Space Cancer Risk (NSCR) model has been developed from recommendations by the NCRP and has undergone external review by the National Research Council (NRC), the NCRP, and through peer-reviewed publications. The NSCR model uses GCR environmental models; particle transport codes describing the GCR modification by atomic and nuclear interactions in atmospheric shielding coupled with spacecraft and tissue shielding; and NASA-defined quality factors for solid cancer and leukemia risk estimates for HZE particles. By implementing the NSCR model, the exposure risks from various heliospheric conditions are assessed for the radiation environments of various mission classes, to understand architectures and strategies of human exploration missions and ultimately to contribute to the optimization of radiation safety and the well-being of space crewmembers participating in long-term space missions.
NASA Astrophysics Data System (ADS)
Chang, Dongil; Tavoularis, Stavros
2013-03-01
Unsteady numerical simulations have been conducted to investigate the effect of axial spacing between the stator vanes and the rotor blades on the performance of a transonic, single-stage, high-pressure, axial turbine. Three cases were considered: the normal case, which is based on the geometry of a commercial jet engine and has an axial spacing at 50% blade span equal to 42% of the vane axial chord, and two other cases with axial spacings equal to 31 and 52% of the vane axial chord, respectively. Present interest has focused on the effect of axial gap size on the instantaneous and time-averaged flows as well as on the blade loading and the turbine performance. Decreasing the gap size reduced the pressure and increased the Mach number in the core flows in the gap region. However, the flows near the two endwalls did not follow monotonic trends with the gap size change; instead, the Mach numbers for both the small gap and the large gap cases were lower than that for the normal case. This Mach number decrease was attributed to increased turbulence due to the increased wake strength for the small gap case and an increased wake width for the large gap case. In all considered cases, large pressure fluctuations were observed in the front region of the blade suction side. These pressure fluctuations were strongest for the smaller spacing. The turbine efficiencies of the cases with the larger and smaller spacings were essentially the same, but both were lower than that of the normal case. The stator loss for the smaller spacing case was lower than the one for the larger spacing case, whereas the opposite was true for the rotor loss.
Hybrid Vlasov simulations for alpha particles heating in the solar wind
NASA Astrophysics Data System (ADS)
Perrone, Denise; Valentini, Francesco; Veltri, Pierluigi
2011-06-01
Heating and acceleration of heavy ions in the solar wind and corona represent a long-standing theoretical problem in space physics and are distinct experimental signatures of kinetic processes occurring in collisionless plasmas. To address this problem, we propose the use of a low-noise hybrid-Vlasov code in a four-dimensional phase space (1D in physical space and 3D in velocity space) configuration. We trigger a turbulent cascade by injecting energy at large wavelengths and analyze the role of kinetic effects during the development of the energy spectra. Following the evolution of both the proton and α distribution functions shows that both ion species significantly depart from the Maxwellian equilibrium, with the appearance of beams of accelerated particles in the direction parallel to the background magnetic field.
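For reference, hybrid-Vlasov codes of this type typically evolve each ion species kinetically while closing the system with a massless-electron Ohm's law; in generic notation (not transcribed from the paper):

```latex
\frac{\partial f_s}{\partial t} + \mathbf{v}\cdot\nabla f_s
+ \frac{q_s}{m_s}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf v} f_s = 0,
\qquad
\mathbf{E} = -\,\mathbf{u}_i\times\mathbf{B}
+ \frac{(\nabla\times\mathbf{B})\times\mathbf{B}}{\mu_0\, e\, n_e}
- \frac{\nabla p_e}{e\, n_e},
```

where f_s is the distribution function of ion species s (protons and alpha particles here), u_i is the ion bulk velocity, and the magnetic field is advanced through Faraday's law, ∂B/∂t = −∇×E.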
Hybrid simulations of a parallel collisionless shock in the large plasma device
Weidl, Martin S.; Winske, Dan; Jenko, Frank; ...
2016-12-01
We present two-dimensional hybrid kinetic/magnetohydrodynamic simulations of planned laser-ablation experiments in the Large Plasma Device (LAPD). Our results, based on parameters which have been validated in previous experiments, show that a parallel collisionless shock can begin forming within the available space. Carbon-debris ions that stream along the magnetic-field direction with a blow-off speed of four times the Alfvén velocity excite strong magnetic fluctuations, eventually transferring part of their kinetic energy to the surrounding hydrogen ions. This acceleration and compression of the background plasma creates a shock front, which satisfies the Rankine-Hugoniot conditions and can therefore propagate on its own. Furthermore, we analyze the upstream turbulence and show that it is dominated by the right-hand resonant instability.
NASA Technical Reports Server (NTRS)
Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek
1995-01-01
High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.
NASA Technical Reports Server (NTRS)
Pollack, James B.; Rind, David; Lacis, Andrew; Hansen, James E.; Sato, Makiko; Ruedy, Reto
1993-01-01
The response of the climate system to a temporally and spatially constant amount of volcanic particles is simulated using a general circulation model (GCM). The optical depth of the aerosols is chosen so as to produce approximately the same amount of forcing as results from doubling the present CO2 content of the atmosphere and from the boundary conditions associated with the peak of the last ice age. The climate changes produced by long-term volcanic aerosol forcing are obtained by differencing this simulation and one made for the present climate with no volcanic aerosol forcing. The simulations indicate that a significant cooling of the troposphere and surface can occur at times of closely spaced multiple sulfur-rich volcanic explosions that span time scales of decades to centuries. The steady-state climate response to volcanic forcing includes a large expansion of sea ice, especially in the Southern Hemisphere; a resultant large increase in surface and planetary albedo at high latitudes; and sizable changes in the annually and zonally averaged air temperature.
Aliasing errors in measurements of beam position and ellipticity
NASA Astrophysics Data System (ADS)
Ekdahl, Carl
2005-09-01
Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
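The aliasing mechanism is easy to reproduce: with N equally spaced wall pickups, azimuthal harmonics of order kN±1 fold into the first-harmonic (position) estimate, so the error for an off-center beam falls rapidly as detectors are added. A small sketch using the exact image-current density of a filament beam in a circular pipe (geometry normalized to unit pipe radius; an illustration, not the DARHT-II analysis code):

```python
import numpy as np

def pickup_signals(r, phi, n_det):
    """Image-current density of a filament beam at polar position (r, phi),
    sampled at n_det equally spaced pickups on a pipe of unit radius."""
    th = 2 * np.pi * np.arange(n_det) / n_det
    return (1 - r**2) / (1 + r**2 - 2 * r * np.cos(th - phi)), th

def x_estimate(r, phi, n_det):
    s, th = pickup_signals(r, phi, n_det)
    # first azimuthal moment; harmonics n = k*n_det +/- 1 alias into it
    return np.sum(s * np.cos(th)) / np.sum(s)

r, phi = 0.3, 0.7
for n in (4, 8, 16):
    err = x_estimate(r, phi, n) - r * np.cos(phi)
    print(f"{n:2d} detectors: aliasing error = {err:+.2e}")
```

Running this shows the systematic error shrinking by orders of magnitude as the detector count doubles, consistent with the remedy described in the abstract.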
Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio
2011-12-01
The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.
Massive data compression for parameter-dependent covariance matrices
NASA Astrophysics Data System (ADS)
Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise
2017-12-01
We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets which are required to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov Chain Monte Carlo analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3, making an otherwise intractable analysis feasible.
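For context, the MOPED compression (Heavens et al. 2000) reduces a data vector x with model mean μ(θ) and covariance C to one number per parameter; in the form usually quoted (our transcription, for fixed C):

```latex
y_\alpha = \mathbf{b}_\alpha^{\mathsf T}\,\mathbf{x},
\qquad
\mathbf{b}_1 = \frac{\mathsf{C}^{-1}\,\boldsymbol{\mu}_{,1}}
{\sqrt{\boldsymbol{\mu}_{,1}^{\mathsf T}\,\mathsf{C}^{-1}\,\boldsymbol{\mu}_{,1}}},
```

with the weighting vectors for the remaining parameters built by Gram-Schmidt orthogonalization. The covariance of the few compressed summaries y_α then requires far fewer simulations to estimate than the full data covariance.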
NASA Astrophysics Data System (ADS)
Reilly, Stephanie
2017-04-01
The energy budget of the entire global climate is significantly influenced by the presence of boundary layer clouds. The main aim of the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project is to improve climate model predictions by means of process studies of clouds and precipitation. This study makes use of observed elevated moisture layers as a proxy of future changes in tropospheric humidity. The associated impact on radiative transfer triggers fast responses in boundary layer clouds, providing a framework for investigating this phenomenon. The investigation will be carried out using data gathered during the Next-generation Aircraft Remote-sensing for VALidation (NARVAL) South campaigns. Observational data will be combined with ECMWF reanalysis data to derive the large scale forcings for the Large Eddy Simulations (LES). Simulations will be generated for a range of elevated moisture layers, spanning a multi-dimensional phase space in depth, amplitude, elevation, and cloudiness. The NARVAL locations will function as anchor-points. The results of the large eddy simulations and the observations will be studied and compared in an attempt to determine how simulated boundary layer clouds react to changes in radiative transfer from the free troposphere. Preliminary LES results will be presented and discussed.
Chandramouli, Balasubramanian; Mancini, Giordano
2016-01-01
Classical Molecular Dynamics (MD) simulations can provide insights into protein dynamics at the nanoscopic scale. Currently, simulations of large proteins and complexes can be routinely carried out in the ns-μs time regime. Clustering of MD trajectories is often performed to identify selective conformations and to compare simulation and experimental data coming from different sources on closely related systems. However, clustering techniques are usually applied without careful validation of the results, and benchmark studies involving the application of different algorithms to MD data often deal with relatively small peptides instead of average or large proteins; finally, clustering is often applied as a means to analyze refined data and also as a way to simplify further analysis of trajectories. Herein, we propose a strategy to classify MD data while carefully benchmarking the performance of clustering algorithms and internal validation criteria for such methods. We demonstrate the method on two showcase systems with different features, and compare the classification of trajectories in real and PCA space. We posit that the prototype procedure adopted here could be highly fruitful in clustering large trajectories of multiple systems, or those resulting from enhanced sampling techniques like replica exchange simulations.
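A minimal version of such a benchmarking loop, using scikit-learn and assuming frames have already been aligned and flattened to coordinate vectors (the data here are synthetic stand-ins for MD trajectories, and the algorithm/criterion pair is one example among those such a study would compare):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

# Synthetic stand-in for (n_frames, 3*n_atoms) aligned MD coordinates.
rng = np.random.default_rng(1)
frames = np.vstack([rng.normal(m, 0.3, size=(500, 30)) for m in (0.0, 2.0)])

# Cluster in a reduced PCA space and score with an internal validation
# criterion, mirroring the real-space vs PCA-space comparison above.
z = PCA(n_components=5).fit_transform(frames)
scores = {k: silhouette_score(z, KMeans(n_clusters=k, n_init=10,
                                        random_state=0).fit_predict(z))
          for k in range(2, 6)}
print("best k by silhouette:", max(scores, key=scores.get))
```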
Basconi, Joseph E; Carta, Giorgio; Shirts, Michael R
2015-04-14
Multiscale simulation is used to study the adsorption of lysozyme onto ion exchangers obtained by grafting charged polymers into a porous matrix, in systems with various polymer properties and strengths of electrostatic interaction. Molecular dynamics simulations show that protein partitioning into the polymer-filled pore space increases with the overall charge content of the polymers, while the diffusivity in the pore space decreases. However, the combination of greatly increased partitioning and modestly decreased diffusion results in macroscopic transport rates that increase as a function of charge content, as the large concentration driving force due to enhanced pore space partitioning outweighs the reduction in the pore space diffusivity. Matrices having greater charge associated with the grafted polymers also exhibit more diffuse intraparticle concentration profiles during transient adsorption. In systems with a high charge content per polymer and a low protein loading, the polymers preferentially partition toward the surface due to favorable interactions with the surface-bound protein. These results demonstrate the potential of multiscale modeling to illuminate qualitative trends between molecular properties and the adsorption equilibria and kinetic properties observable on macroscopic scales.
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
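In the regularized principal manifold setting emphasized above, the mapping f from the low-dimensional parameter domain T into data space is typically found by minimizing a quantization error plus a smoothness penalty; in generic notation:

```latex
R[f] \;=\; \frac{1}{n}\sum_{i=1}^{n} \min_{t_i \in T} \bigl\lVert x_i - f(t_i) \bigr\rVert_2^2 \;+\; \lambda\, \lVert P f \rVert^2,
```

where P is a regularization operator. Discretizing f on a dimension-adaptive sparse grid, as described in the abstract, is what reduces the cost from quadratic to linear in n.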
Simulation and control for telerobots in space medicine
NASA Astrophysics Data System (ADS)
Haidegger, Tamás; Kovács, Levente; Precup, Radu-Emil; Benyó, Balázs; Benyó, Zoltán; Preitl, Stefan
2012-12-01
Human space exploration is continuously advancing despite the current financial difficulties, and the new missions are targeting the Moon and Mars with more effective human-robot collaborative systems. The continuous development of robotic technology should lead to the advancement of automated technology, including space medicine. Telesurgery has already proved its effectiveness through various telemedicine procedures on Earth, and it has the potential to provide medical assistance in space as well. Aeronautical agencies have already conducted numerous experiments and developed various setups to push the boundaries of teleoperation under extreme conditions. Different control schemes have been proposed and tested to facilitate and enhance telepresence and to ensure transparency, sufficient bandwidth, and latency tolerance. This paper focuses on the modeling of a generic telesurgery setup, supported by a cascade control approach. The minimalistic models were tested with linear and PID-fuzzy control options to provide a simple, universal and scalable solution for the challenges of telesurgery over large distances. In our simulations, the control structures were capable of providing good dynamic performance indices and robustness with respect to the gain in the human operator model. This is a promising result towards the support of future teleoperational missions.
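As an illustration of the cascade idea (an outer loop generating the setpoint for a faster inner loop), here is a minimal simulation with a proportional position loop wrapped around a PI velocity loop on a double-integrator plant; the gains and plant are generic assumptions, not the authors' telesurgery model:

```python
# Outer P position loop commands an inner PI velocity loop on a
# double-integrator "slave arm"; all values are illustrative.
dt, T = 0.001, 2.0
Kp_pos, Kp_vel, Ki_vel = 8.0, 40.0, 100.0
x, v, i_err = 0.0, 0.0, 0.0
x_ref = 0.01                            # 10-mm commanded step

for _ in range(int(T / dt)):
    v_ref = Kp_pos * (x_ref - x)        # outer loop: position error -> velocity setpoint
    e_v = v_ref - v
    i_err += e_v * dt
    a = Kp_vel * e_v + Ki_vel * i_err   # inner loop: velocity error -> acceleration
    v += a * dt
    x += v * dt

print(f"position after {T} s: {x:.5f} m (target {x_ref} m)")
```

The nesting lets the fast inner loop absorb plant disturbances while the slow outer loop tolerates communication delay, which is why cascade structures are attractive for teleoperation over large distances.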
Churchfield, Matthew J; Li, Ye; Moriarty, Patrick J
2013-02-28
This paper presents our initial work in performing large-eddy simulations of tidal turbine array flows. First, a horizontally periodic precursor simulation is performed to create turbulent flow data. Then those data are used as inflow into a tidal turbine array two rows deep and infinitely wide. The turbines are modelled using rotating actuator lines, and the finite-volume method is used to solve the governing equations. In studying the wakes created by the turbines, we observed that the vertical shear of the inflow combined with wake rotation causes lateral wake asymmetry. Also, various turbine configurations are simulated, and the total power production relative to isolated turbines is examined. We found that staggering consecutive rows of turbines in the simulated configurations allows the greatest efficiency using the least downstream row spacing. Counter-rotating consecutive downstream turbines in a non-staggered array shows a small benefit. This work has identified areas for improvement. For example, using a larger precursor domain would better capture elongated turbulent structures, and including salinity and temperature equations would account for density stratification and its effect on turbulence. Additionally, the wall shear stress modelling could be improved, and more array configurations could be examined.
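For readers unfamiliar with the actuator-line technique mentioned above, blade forces are typically projected onto the flow field as smeared body forces; a common Gaussian form is shown below in our notation, following standard actuator-line practice rather than this paper's exact implementation:

```latex
% Gaussian projection of the force F_i at actuator point x_i onto the LES grid:
f(x) = \sum_i \frac{F_i}{\epsilon^3 \pi^{3/2}}
       \exp\!\left(-\frac{|x - x_i|^2}{\epsilon^2}\right)
% The smearing width epsilon is typically chosen proportional to the grid spacing.
```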
Compactified cosmological simulations of the infinite universe
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-06-01
We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range; it is balanced in following a comparable number of high and low k modes; and its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for STEreographically Projected cosmological Simulations. It uses stereographic projection for space compactification and a naive O(N^2) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N^2) force calculation is easy to adapt to modern graphics cards, hence our code can serve as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.
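A naive O(N^2) force loop of the kind the abstract mentions is straightforward to write down (and, since each particle's sum is independent, to port to GPUs); the sketch below is a generic softened Newtonian direct summation, not the StePS code itself:

```python
import numpy as np

def direct_forces(pos, mass, G=1.0, eps=1e-2):
    """Naive O(N^2) softened Newtonian accelerations.

    pos: (N, 3) positions, mass: (N,) masses, eps: softening length.
    """
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                       # separation vectors to all particles
        r2 = (d * d).sum(axis=1) + eps ** 2    # softened squared distances
        r2[i] = np.inf                         # exclude self-interaction
        acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(0)
a = direct_forces(rng.standard_normal((256, 3)), np.ones(256))
```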
Architecture for an integrated real-time air combat and sensor network simulation
NASA Astrophysics Data System (ADS)
Criswell, Evans A.; Rushing, John; Lin, Hong; Graves, Sara
2007-04-01
An architecture for an integrated air combat and sensor network simulation is presented. The architecture integrates two components: a parallel real-time sensor fusion and target tracking simulation, and an air combat simulation. By integrating these two simulations, it becomes possible to experiment with scenarios in which one or both sides in a battle have very large numbers of primitive passive sensors, and to assess the likely effects of those sensors on the outcome of the battle. Modern Air Power is a real-time theater-level air combat simulation that is currently being used as a part of the USAF Air and Space Basic Course (ASBC). The simulation includes a variety of scenarios from the Vietnam war to the present day, and also includes several hypothetical future scenarios. Modern Air Power includes a scenario editor, an order of battle editor, and full AI customization features that make it possible to quickly construct scenarios for any conflict of interest. The scenario editor makes it possible to place a wide variety of sensors including both high fidelity sensors such as radars, and primitive passive sensors that provide only very limited information. The parallel real-time sensor network simulation is capable of handling very large numbers of sensors on a computing cluster of modest size. It can fuse information provided by disparate sensors to detect and track targets, and produce target tracks.
NASA Astrophysics Data System (ADS)
Plettemeier, D.; Hahnel, R.; Hegler, S.; Safaeinili, A.; Orosei, R.; Cicchetti, A.; Plaut, J.; Picardi, G.
2009-04-01
MARSIS (Mars Advanced Radar for Subsurface and Ionosphere Sounding) on board Mars Express is the first and so far the only spaceborne radar that has observed the Martian moon Phobos. Radar echoes were measured for different flyby trajectories. The primary aim of the low-frequency sounding of Phobos is to prove the feasibility of deep sounding into the crust of Phobos. In this poster we present a numerical method that allows a very precise computation of radar echoes backscattered from the surface of large objects. The software is based on a combination of a physical optics calculation of surface scattering from the radar target and the Method of Moments to calculate the radiation pattern of the whole spaceborne radar system. The calculation of the frequency-dependent radiation pattern takes into account all relevant gain variations and coupling effects aboard the spacecraft. Based on very precise digital elevation models of Phobos, patch models at a resolution of lambda/10 were generated. Simulation techniques will be explained and a comparison of simulations and measurements will be shown.
SURFACE BACKSCATTERING SIMULATOR FOR LARGE OBJECTS: The computation of surface scattering of the electromagnetic wave incident on Phobos is based on the physical optics method; the scattered field is expressed by the induced equivalent surface currents on the target. The Algorithm: The simulation program is split into three phases. In the first phase, an illumination test checks whether a patch is visible from the position of the spacecraft; if not, the patch is excluded from the simulation. The second phase serves as a preparation stage for the third: amongst other tasks, the dyadic products for the Js and Ms surface currents are calculated. This is a time-memory trade-off: the simulation needs an additional 144 bytes of RAM for every patch that passes phase one, but the calculation of the dyads is expensive, so considerable savings in computation time are achieved by pre-calculating the frequency-independent parts. In the third phase, the main part of the calculation is executed: the backscattered field is computed for every frequency step, for the selected frequency range, resolution, and source type. Requirements for the Simulation of Phobos: The model of Phobos contains more than 104 million patches, occupying about 12 GiB of disk space. The model is saved as an HDF5 container file, allowing easy cross-platform portability. During the calculation, nearly 400 bytes of RAM are needed for every patch that passes the illumination test, adding up to 40 GB of RAM in the computational worst case and making the simulation very memory intensive. This figure already reflects an optimized implementation with memory-reuse strategies.
RESULTS: The simulations were performed with a very high discretization based on a high-resolution digital elevation model. In the simulation results, the signatures in the radargrams are caused by the illuminated surface topography of Phobos, so the precision of the position and orientation of Mars Express relative to Phobos has a significant influence on the radargrams. Parameter studies have shown that a permittivity change causes only a brightness change in the radargrams, while a radial distance change shifts the signatures of the radargrams along the time axis. This means that the small differences detected between simulations and measurements are probably caused by inaccuracies in the trajectory calculations regarding the position and orientation of Phobos. This interpretation is in line with the difference observed in the drop of bright lines in the measured and simulated radargrams during the gap in measurements, e.g. around closest approach for orbit 5851. Other interesting aspects seen in the measurements can perhaps also be explained by simulations.
CONCLUSIONS: We successfully implemented a radar backscattering simulator using a hybrid physical optics and Method of Moments approach. The software runs on a large-scale cluster installation and is able to produce precise results at high resolution in a reasonable amount of time. We used this software to simulate the measurements of the MARSIS instrument aboard Mars Express during flybys of the Martian moon Phobos, with varying parameters regarding the antenna orientation and polarization. We have compared these results with actual measurements. These comparisons provide explanations for some unexpected effects seen in the measurements.
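As a toy illustration of the phase structure of such a simulator (illumination test, then a coherent sum of patch contributions per frequency), consider the scalar sketch below; it omits the dyadic surface currents, antenna pattern, and polarization handled by the real PO/MoM hybrid, and all names are ours:

```python
import numpy as np

def po_backscatter(centers, normals, areas, radar_pos, freqs, c=3e8):
    """Toy scalar physical-optics backscatter: coherent sum over lit patches.

    Phase 1: illumination test (patch normal must face the radar).
    Phase 2: precompute the frequency-independent geometry.
    Phase 3: sum exp(-2jkR) patch contributions per frequency.
    """
    view = radar_pos - centers                 # vectors from patches to the radar
    r = np.linalg.norm(view, axis=1)
    cos_inc = (normals * view).sum(axis=1) / r
    lit = cos_inc > 0.0                        # phase 1: keep visible patches only
    r, cos_inc, a = r[lit], cos_inc[lit], areas[lit]
    echoes = []
    for f in freqs:                            # phase 3: per-frequency field sum
        k = 2.0 * np.pi * f / c
        field = (a * cos_inc / r ** 2 * np.exp(-2j * k * r)).sum()
        echoes.append(field)
    return np.asarray(echoes)
```

A real pipeline would additionally weight each patch contribution by the frequency-dependent antenna pattern obtained from the MoM step.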
NASA Astrophysics Data System (ADS)
Afzalian, A.; Vasen, T.; Ramvall, P.; Shen, T.-M.; Wu, J.; Passlack, M.
2018-06-01
We report the capability to simulate, in a quantum-mechanical atomistic fashion, record-large nanowire devices featuring several hundred thousand to millions of atoms and diameters up to 18.2 nm. We have employed a tight-binding mode-space NEGF technique that delivers by far the fastest (up to 10 000× faster) yet accurate (error < 1%) atomistic simulations to date. This technique and capability open new avenues to explore and understand the physics of nanoscale and mesoscopic devices dominated by quantum effects. In particular, our method addresses in an unprecedented way the technologically relevant case of band-to-band tunneling (BTBT) in III–V nanowire broken-gap heterojunction tunnel FETs (HTFETs). We demonstrate an accurate match of simulated BTBT currents to experimental measurements in a 12 nm diameter InAs nanowire and in an InAs/GaSb Esaki tunneling diode. We apply our TB MS simulations and report the first in-depth atomistic study of the scaling potential of III–V GAA nanowire HTFETs, including the effect of electron–phonon scattering and discrete dopant impurity band tails, quantifying the benefits of this technology for low-power, low-voltage CMOS applications.
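The essence of a mode-space reduction is to project the full atomistic Hamiltonian onto a few relevant transverse modes before running the NEGF transport calculation; a generic sketch of that projection (not tight-binding NEGF itself, with all sizes and the random Hamiltonian purely illustrative) is:

```python
import numpy as np

# Generic mode-space reduction sketch: project a cross-section Hamiltonian
# onto its lowest transverse modes, shrinking the matrices NEGF must invert.
rng = np.random.default_rng(1)
n_orb, n_modes = 400, 12                      # orbitals per slab, retained modes

h = rng.standard_normal((n_orb, n_orb))
H_slab = (h + h.T) / 2                        # Hermitian stand-in cross-section Hamiltonian

evals, evecs = np.linalg.eigh(H_slab)
U = evecs[:, :n_modes]                        # lowest-energy transverse modes

H_ms = U.T @ H_slab @ U                       # reduced (mode-space) Hamiltonian
print(H_slab.shape, "->", H_ms.shape)         # (400, 400) -> (12, 12)
```

The speedup comes from the cubic cost of the matrix inversions: working with n_modes-sized blocks instead of n_orb-sized ones is what enables the orders-of-magnitude gains the abstract reports.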
NASA Technical Reports Server (NTRS)
Winglee, Robert M.
1991-01-01
The objective was to conduct large scale simulations of electron beams injected into space. The study of the active injection of electron beams from spacecraft is important, as it provides valuable insight into the plasma beam interactions and the development of current systems in the ionosphere. However, the beam injection itself is not simple, being constrained by the ability of the spacecraft to draw current from the ambient plasma. The generation of these return currents is dependent on several factors, including the density of the ambient plasma relative to the beam density, the presence of neutrals around the spacecraft, the configuration of the spacecraft, and the motion of the spacecraft through the plasma. Two dimensional (three velocity) particle simulations with collisional processes included are used to show how these different and often coupled processes can be used to enhance beam propagation from the spacecraft. To understand the radial expansion mechanism of an electron beam injected from a highly charged spacecraft, two dimensional particle-in-cell simulations were conducted for a high density electron beam injected parallel to magnetic fields from an isolated equipotential conductor into a cold background plasma. The simulations indicate that charge build-up at the beam stagnation point causes the beam to expand radially to the beam electron gyroradius.
Population Synthesis of Radio & Gamma-Ray Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Frederick, Sara; Gonthier, P. L.; Harding, A. K.
2014-01-01
In recent years, the number of known gamma-ray millisecond pulsars (MSPs) in the Galactic disk has risen substantially thanks to confirmed detections by the Fermi Gamma-ray Space Telescope. We have developed a new population synthesis of gamma-ray and radio MSPs in the Galaxy which uses Markov chain Monte Carlo techniques to explore the large model parameter space and allows for comparisons of the simulated and detected MSP distributions. The simulation employs empirical radio and gamma-ray luminosity models that depend on the pulsar period and period derivative with freely varying exponents. Parameters associated with the birth distributions are also free to vary. The computer code adjusts the magnitudes of the model luminosities to reproduce the number of MSPs detected by a group of ten radio surveys, thus normalizing the simulation and predicting the MSP birth rates in the Galaxy. Computing many Markov chains leads to preferred sets of model parameters that are further explored through two statistical methods. Marginalized plots define confidence regions in the model parameter space using maximum likelihood methods, and a secondary set of confidence regions is determined in parallel using Kuiper statistics calculated from comparisons of cumulative distributions. These two techniques provide feedback to affirm the results and to check for consistency. Radio flux and dispersion measure constraints have been imposed on the simulated gamma-ray distributions in order to reproduce realistic detection conditions. The simulated and detected distributions agree well for both sets of radio and gamma-ray pulsar characteristics, as evidenced by our various comparisons.
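The survey-specific likelihoods are not reproducible here, but the MCMC machinery the study relies on can be sketched generically; below is a minimal Metropolis-Hastings random walk over a toy two-parameter posterior, with the target, step size, and chain length all assumed for illustration:

```python
import numpy as np

def log_post(theta):
    # Toy stand-in for the survey likelihood: a correlated 2-D Gaussian.
    x, y = theta
    return -0.5 * (x ** 2 + (y - 0.5 * x) ** 2)

def metropolis(log_post, theta0, steps=20000, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(steps):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis(log_post, [3.0, -3.0])
print(chain[5000:].mean(axis=0))                 # posterior mean after burn-in
```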
Study of wind retrieval from space-borne infrared coherent lidar in cloudy atmosphere.
NASA Astrophysics Data System (ADS)
Baron, Philippe; Ishii, Shoken; Mizutani, Kohei; Okamoto, Kozo; Ochiai, Satoshi
2015-04-01
Future spaceborne tropospheric wind missions using infrared coherent lidar are currently being studied in Japan and in the United States [1,2]. The line-of-sight wind velocity is retrieved from the Doppler frequency shift of the signal returned by aerosol particles. However, a large percentage (70-80%) of the measured single-shot intensity profiles are expected to be contaminated by clouds [3]. A large fraction of cloud-contaminated profiles (>40%) will be characterized by a cloud-top signal intensity stronger than the aerosol signal by an order of magnitude, and by a strong attenuation of the signal backscattered from below the clouds. Profiles including more than one cloud layer are also expected. This work is a simulation study of the impact of clouds on wind retrieval. We focus on the following three points: 1) definition of an algorithm for optimizing the wind retrieval from the cloud-top signal, 2) assessment of the impact of clouds on the measurement performance, and 3) definition of a method for averaging the measurements before retrieval. The retrieval simulations are conducted using the instrumental characteristics selected for the Japanese study: a wavelength of 2 µm, a PRF of 30 Hz, a pulse energy of 0.125 mJ, and a platform altitude between 200 and 400 km. Liquid and ice clouds are considered. The analysis uses data from atmospheric models and statistics of cloud effects derived from CALIPSO measurements, as in [3]. A special focus is put on the method of averaging the measurements before retrieval. Good retrievals in the mid-to-upper troposphere imply averaging the measured single-range power spectra over large horizontal (100 km) and vertical (1 km) ranges. Large differences in signal intensity due to the presence of clouds, and the non-uniform distribution of clouds, have to be taken into account when averaging the data to optimize the measurement performance. References: [1] S. Ishii, T. Iwasaki, M. Sato, R. Oki, K. Okamoto, T. Ishibashi, P. Baron, and T. Nishizawa: Future Doppler lidar wind measurement from space in Japan, Proc. of SPIE Vol. 8529, 2012. [2] D. Wu, J. Tang, Z. Liu, and Y. Hu: Simulation of coherent Doppler wind lidar measurement from space based on CALIPSO lidar global aerosol observations, Journal of Quantitative Spectroscopy and Radiative Transfer, 122, 79-86, 2013. [3] G. D. Emmitt: CFLOS and cloud statistics from satellite and their impact on future space-based Doppler wind lidar development, Symposium on Recent Developments in Atmospheric Applications of Radar and Lidar, 2008.
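The underlying measurement relation is the standard coherent-lidar Doppler equation; with the stated 2 µm wavelength, a 1 m/s line-of-sight wind corresponds to a 1 MHz frequency shift:

```latex
% Line-of-sight wind speed from the measured Doppler shift f_D at wavelength lambda:
v_{\mathrm{LOS}} = \frac{\lambda f_D}{2}
% With lambda = 2 um, a shift of f_D = 1 MHz gives v_LOS = 1 m/s.
```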
Application of the ADAMS program to deployable space truss structures
NASA Technical Reports Server (NTRS)
Calleson, R. E.
1985-01-01
The need for a computer program to perform kinematic and dynamic analyses of large truss structures while they deploy from a packaged configuration in space led to the evaluation of several existing programs. ADAMS (Automatic Dynamic Analysis of Mechanical Systems), a generalized program for performing dynamic simulation of mechanical systems undergoing large displacements, is applied to two concepts of deployable space antenna units. One concept is a one-cube folding unit of Martin Marietta's Box Truss Antenna; the other is a tetrahedral truss unit of a Tetrahedral Truss Antenna. Adequate evaluation of dynamic forces during member latch-up into the deployed configuration is not yet available from the present version of ADAMS, since it is limited to assemblies of rigid bodies. Included is a method for estimating the maximum bending stress in a surface member at latch-up. Results include member displacement and velocity responses during extension and an example of member bending stresses at latch-up.
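The latch-up stress estimate mentioned is, in standard beam theory, the extreme-fiber bending stress; this is the textbook relation rather than the report's specific procedure:

```latex
% Extreme-fiber bending stress in a member at latch-up: M = bending moment,
% c = distance from the neutral axis to the outer fiber, I = second moment of area.
\sigma_{\max} = \frac{M c}{I}
```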
Test Frame for Gravity Offload Systems
NASA Technical Reports Server (NTRS)
Murray, Alexander R.
2005-01-01
Advances in space telescope and aperture technology have created a need to launch larger structures into space. Traditional truss structures will be too heavy and bulky to be effectively used in the next generation of space-based structures. Large deployable structures are a possible solution: by packaging deployable trusses, the cargo volume of these large structures decreases greatly. The ultimate goal is to measure a boom's deployment in three dimensions under simulated microgravity. This project outlines the construction of the test frame that supports a gravity offload system. The test frame is stable enough to hold the gravity offload system and does not interfere with deployment of, or vibrations in, the deployable test boom. The natural frequencies and stability of the frame were analyzed in FEMAP, and the frame was designed so that its natural frequencies would not match the first two modes of the deployable beam. The frame was then modeled in SolidWorks and constructed. The completed test frame is a stable base for performing studies on deployable structures.
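The mode-separation requirement rests on the elementary relation between stiffness, mass, and natural frequency; for a single-degree-of-freedom idealization of a frame mode:

```latex
% SDOF idealization of a frame mode with stiffness k and modal mass m:
f_n = \frac{1}{2\pi}\sqrt{\frac{k}{m}}
% Stiffening the frame (larger k) or lightening it (smaller m) raises f_n,
% moving frame modes away from the boom's first two deployment modes.
```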
Large Field of View PIV Measurements of Air Entrainment by SLS SMAT Water Sound Suppression System
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Pothos, Stamatios; Bissell, Dan
2015-11-01
Water-based sound suppression systems have been used to reduce the acoustic impact of space vehicle launches. Water flows at a high rate during launch in order to suppress engine-generated acoustics and other potentially damaging sources of noise; for the Space Shuttle, peak flow rates exceeded 900,000 gallons per minute. Such large water flow rates have the potential to induce substantial entrainment of the surrounding air, affecting the launch conditions and generating airflow around the launch vehicle. Validation testing is necessary to quantify this impact for future space launch systems. In this study, PIV measurements were performed to map the flow field above the sub-scale SMAT launch stand. Air entrainment effects generated by a water-based sound suppression system were studied. Mean and fluctuating fluid velocities were mapped up to 1 m above the test stand deck and compared to simulation results. Measurements were performed with NASA MSFC.
Recent developments in deployment analysis simulation using a multi-body computer code
NASA Technical Reports Server (NTRS)
Housner, Jerrold M.
1989-01-01
Deployment is a candidate mode for construction of structural space system components. By its very nature, deployment is a dynamic event, often involving large-angle unfolding of flexible beam members. Validation of proposed designs and conceptual deployment mechanisms is enhanced through analysis. Analysis may be used to determine member loads, thus helping to establish deployment rates and deployment control requirements for a given concept. Furthermore, member flexibility, joint free-play, manufacturing tolerances, and imperfections can affect the reliability of deployment, and analyses which include these effects can aid in reducing the risks associated with a particular concept. Ground tests, which can play a role similar to that of analyses, are difficult and expensive to perform: suspension systems just for vibration ground tests of large space structures in a 1 g environment present many challenges, and suspension of a structure which spatially expands is even more challenging. Analysis validation through experimental confirmation on relatively small, simple models would permit analytical extrapolation to larger, more complex space structures.
MAPPING GROWTH AND GRAVITY WITH ROBUST REDSHIFT SPACE DISTORTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwan, Juliana; Lewis, Geraint F.; Linder, Eric V.
2012-04-01
Redshift space distortions (RSDs) caused by galaxy peculiar velocities provide a window onto the growth rate of large-scale structure and a method for testing general relativity. Through a comparison of N-body simulations to various extensions of perturbation theory beyond the linear regime, we investigate the robustness of cosmological parameter extraction, including the gravitational growth index γ. We find that the Kaiser formula and some perturbation theory approaches bias the growth rate by 1σ or more relative to the fiducial value at scales as large as k > 0.07 h Mpc⁻¹. This bias propagates to estimates of the gravitational growth index as well as Ω_m and the equation-of-state parameter, and presents a significant challenge to modeling RSDs. We also determine an accurate fitting function for a combination of line-of-sight damping and higher-order angular dependence that allows robust modeling of the redshift-space power spectrum to substantially higher k.
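For reference, the Kaiser formula the paper tests against and the growth-index parametrization of the growth rate take the standard forms below (the paper's own fitting function for damping and angular dependence is not reproduced):

```latex
% Linear (Kaiser) redshift-space power spectrum; mu is the cosine of the
% angle to the line of sight, b the galaxy bias, f the linear growth rate:
P_s(k,\mu) = \left(b + f\mu^2\right)^2 P_{\mathrm{lin}}(k)
% Growth-index parametrization of the growth rate:
f(z) = \frac{d\ln D}{d\ln a} \approx \Omega_m(z)^{\gamma},
\qquad \gamma \approx 0.55 \text{ in general relativity}
```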
NASA Technical Reports Server (NTRS)
Roman, Juan A.; Stitt, George F.; Roman, Felix R.
1997-01-01
This paper provides a general overview of the molecular contamination philosophy of the Space Simulation Test Engineering Section and of how the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) space simulation laboratory controls and maintains the cleanliness of all its facilities, thereby minimizing downtime between tests. It also briefly covers the proper selection of, and safety precautions needed when using, chemical solvents for wiping, washing, or spraying thermal shrouds when molecular contaminants rise to unacceptable background levels.
21st Space Simulation Conference: The Future of Space Simulation Testing in the 21st Century
NASA Technical Reports Server (NTRS)
Stecher, Joseph L., III (Compiler)
2000-01-01
The Institute of Environmental Sciences and Technology's Twenty-first Space Simulation Conference, "The Future of Space Testing in the 21st Century" provided participants with a forum to acquire and exchange information on the state-of-the-art in space simulation, test technology, atomic oxygen, programs/system testing, dynamics testing, contamination, and materials. The papers presented at this conference and the resulting discussions carried out the conference theme "The Future of Space Testing in the 21st Century."